2025-07-10 11:53:01,379 - xtesting.ci.run_tests - INFO - Deployment description:
+-------------------------+------------------------------------------------------------+
| ENV VAR                 | VALUE                                                      |
+-------------------------+------------------------------------------------------------+
| CI_LOOP                 | daily                                                      |
| DEBUG                   | false                                                      |
| DEPLOY_SCENARIO         | k8-nosdn-nofeature-noha                                    |
| INSTALLER_TYPE          | unknown                                                    |
| BUILD_TAG               | 0GTEQ6V2Z5CU                                               |
| NODE_NAME               | v1.32                                                      |
| TEST_DB_URL             | http://testresults.opnfv.org/test/api/v1/results           |
| TEST_DB_EXT_URL         | http://testresults.opnfv.org/test/api/v1/results           |
| S3_ENDPOINT_URL         | https://storage.googleapis.com                             |
| S3_DST_URL              | s3://artifacts.opnfv.org/functest-                         |
|                         | kubernetes/0GTEQ6V2Z5CU/functest-kubernetes-opnfv-         |
|                         | functest-kubernetes-cnf-v1.32-cnf_testsuite-run-23         |
| HTTP_DST_URL            | http://artifacts.opnfv.org/functest-                       |
|                         | kubernetes/0GTEQ6V2Z5CU/functest-kubernetes-opnfv-         |
|                         | functest-kubernetes-cnf-v1.32-cnf_testsuite-run-23         |
+-------------------------+------------------------------------------------------------+
2025-07-10 11:53:01,392 - xtesting.ci.run_tests - INFO - Loading test case 'cnf_testsuite'...
2025-07-10 11:53:01,745 - xtesting.ci.run_tests - INFO - Running test case 'cnf_testsuite'...
2025-07-10 11:53:12,407 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite setup -l debug
CNF TestSuite version: v1.4.4
Successfully created directories for cnf-testsuite
[2025-07-10 11:53:01] INFO -- CNTI: VERSION: v1.4.4
[2025-07-10 11:53:01] INFO -- CNTI-Setup.cnf_directory_setup: Creating directories for CNTI testsuite
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_local_install
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-07-10 11:53:01] INFO -- CNTI: Globally installed helm satisfies required version. Skipping local helm install.
Global helm found.
Version: v3.17.0
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_v2?:
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-07-10 11:53:01] DEBUG -- CNTI-Helm.helm_local_response.cmd: command: /home/xtesting/.cnf-testsuite/tools/helm/linux-amd64/helm version
No Local helm version found
[2025-07-10 11:53:01] WARN -- CNTI-Helm.helm_local_response.cmd: stderr: sh: line 0: /home/xtesting/.cnf-testsuite/tools/helm/linux-amd64/helm: not found
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_v2?:
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_v3?:
[2025-07-10 11:53:01] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-07-10 11:53:01] DEBUG -- CNTI-Helm.helm_gives_k8s_warning?.cmd: command: helm list
Global kubectl found.
Version: 1.32
No Local kubectl version found
Global git found.
Version: 2.45.3
No Local git version found
All prerequisites found.
KUBECONFIG is set as /home/xtesting/.kube/config.
[2025-07-10 11:53:02] INFO -- CNTI-Setup.create_namespace: Creating namespace for CNTI testsuite
[2025-07-10 11:53:02] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-07-10 11:53:02] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: cnf-testsuite
Created cnf-testsuite namespace on the Kubernetes cluster
[2025-07-10 11:53:02] INFO -- CNTI-Setup.create_namespace: cnf-testsuite namespace created
[2025-07-10 11:53:02] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-testsuite with pod-security.kubernetes.io/enforce=privileged
[2025-07-10 11:53:02] INFO -- CNTI-Setup.configuration_file_setup: Creating configuration file
[2025-07-10 11:53:02] DEBUG -- CNTI: install_apisnoop
[2025-07-10 11:53:02] INFO -- CNTI: GitClient.clone command: https://github.com/cncf/apisnoop /home/xtesting/.cnf-testsuite/tools/apisnoop
[2025-07-10 11:53:09] INFO -- CNTI: GitClient.clone output:
[2025-07-10 11:53:09] INFO -- CNTI: GitClient.clone stderr: Cloning into '/home/xtesting/.cnf-testsuite/tools/apisnoop'...
[2025-07-10 11:53:09] INFO -- CNTI: url: https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.14/sonobuoy_0.56.14_linux_amd64.tar.gz
[2025-07-10 11:53:09] INFO -- CNTI: write_file: /home/xtesting/.cnf-testsuite/tools/sonobuoy/sonobuoy.tar.gz
[2025-07-10 11:53:09] DEBUG -- CNTI-http.client: Performing request
[2025-07-10 11:53:09] DEBUG -- CNTI-http.client: Performing request
[2025-07-10 11:53:11] DEBUG -- CNTI: Sonobuoy Version: v0.56.14
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.99.99
GitSHA: bd5465d6b2b2b92b517f4c6074008d22338ff509
GoVersion: go1.19.4
Platform: linux/amd64
API Version check skipped due to missing `--kubeconfig` or other error
[2025-07-10 11:53:11] INFO -- CNTI: install_kind
[2025-07-10 11:53:11] INFO -- CNTI: write_file: /home/xtesting/.cnf-testsuite/tools/kind/kind
[2025-07-10 11:53:11] INFO -- CNTI: install kind
[2025-07-10 11:53:11] INFO -- CNTI: url: https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-linux-amd64
[2025-07-10 11:53:11] DEBUG -- CNTI-http.client: Performing request
[2025-07-10 11:53:11] DEBUG -- CNTI-http.client: Performing request
Dependency installation complete
2025-07-10 11:53:32,070 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cnf_install cnf-config=example-cnfs/coredns/cnf-testsuite.yml -l debug
Successfully created directories for cnf-testsuite
[2025-07-10 11:53:12] INFO -- CNTI-Setup.cnf_directory_setup: Creating directories for CNTI testsuite
[2025-07-10 11:53:12] DEBUG -- CNTI: helm_local_install
KUBECONFIG is set as /home/xtesting/.kube/config.
[2025-07-10 11:53:12] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-07-10 11:53:12] INFO -- CNTI: Globally installed helm satisfies required version. Skipping local helm install.
[2025-07-10 11:53:12] INFO -- CNTI-Setup.create_namespace: Creating namespace for CNTI testsuite
[2025-07-10 11:53:12] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-07-10 11:53:12] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: cnf-testsuite
cnf-testsuite namespace already exists on the Kubernetes cluster
[2025-07-10 11:53:12] WARN -- CNTI-KubectlClient.Apply.namespace.cmd: stderr: Error from server (AlreadyExists): namespaces "cnf-testsuite" already exists
[2025-07-10 11:53:12] INFO -- CNTI-Setup.create_namespace: cnf-testsuite namespace already exists, not creating
[2025-07-10 11:53:12] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-testsuite with pod-security.kubernetes.io/enforce=privileged
[2025-07-10 11:53:12] INFO -- CNTI-Setup.cnf_install: Installing CNF to cluster
[2025-07-10 11:53:12] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:53:12] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-07-10 11:53:12] DEBUG -- CNTI: find output:
[2025-07-10 11:53:12] WARN -- CNTI: find stderr: find: installed_cnf_files/*: No such file or directory
[2025-07-10 11:53:12] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: []
[2025-07-10 11:53:12] INFO -- CNTI: ClusterTools install
[2025-07-10 11:53:12] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces
[2025-07-10 11:53:12] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:53:02Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "4607738", "uid" => "b2fa8847-c70b-45c1-b28a-8795d685c60d"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "e9c1e555-d421-479b-99da-e15b9e4cbe23"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:45:36Z", "deletionTimestamp" => "2025-07-10T11:52:57Z", "generateName" => "ims-", "labels" => {"kubernetes.io/metadata.name" => "ims-kqwq6", "pod-security.kubernetes.io/enforce" => "baseline"}, "name" => "ims-kqwq6", "resourceVersion" => "4607884", "uid" => "4a263985-e7ee-4fff-b1ab-5b53cad90c91"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"conditions" => [{"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All resources successfully discovered", "reason" => "ResourcesDiscovered", "status" => "False", "type" => "NamespaceDeletionDiscoveryFailure"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All legacy kube types successfully parsed", "reason" => "ParsedGroupVersions", "status" => "False", "type" => "NamespaceDeletionGroupVersionParsingFailure"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All content successfully deleted, may be waiting on finalization", "reason" => "ContentDeleted", "status" => "False", "type" => "NamespaceDeletionContentFailure"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "Some resources are remaining: pods. has 6 resource instances", "reason" => "SomeResourcesRemain", "status" => "True", "type" => "NamespaceContentRemaining"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All content-preserving finalizers finished", "reason" => "ContentHasNoFinalizers", "status" => "False", "type" => "NamespaceFinalizersRemaining"}], "phase" => "Terminating"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "142e6851-3c72-4ad7-80c0-9a06f7ec29a7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "6d5c8018-e87c-4144-ad35-175aba623785"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "5", "uid" => "3623b17d-eebf-47da-aa4d-8431d7e16dcc"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:24:34Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "323", "uid" => "5e7cc0e6-da8e-4014-a795-6c248d992a2a"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}]
[2025-07-10 11:53:12] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file cluster_tools.yml
[2025-07-10 11:53:13] WARN -- CNTI-KubectlClient.Apply.file.cmd: stderr: Warning: would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), privileged (container "cluster-tools" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "cluster-tools" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-tools" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "proc", "systemd", "hostfs" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "cluster-tools" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-tools" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-07-10 11:53:13] INFO -- CNTI: ClusterTools wait_for_cluster_tools
[2025-07-10 11:53:13] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces
[2025-07-10 11:53:13] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:53:02Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "4607738", "uid" => "b2fa8847-c70b-45c1-b28a-8795d685c60d"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "e9c1e555-d421-479b-99da-e15b9e4cbe23"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:45:36Z", "deletionTimestamp" => "2025-07-10T11:52:57Z", "generateName" => "ims-", "labels" => {"kubernetes.io/metadata.name" => "ims-kqwq6", "pod-security.kubernetes.io/enforce" => "baseline"}, "name" => "ims-kqwq6", "resourceVersion" => "4607884", "uid" => "4a263985-e7ee-4fff-b1ab-5b53cad90c91"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"conditions" => [{"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All resources successfully discovered", "reason" => "ResourcesDiscovered", "status" => "False", "type" => "NamespaceDeletionDiscoveryFailure"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All legacy kube types successfully parsed", "reason" => "ParsedGroupVersions", "status" => "False", "type" => "NamespaceDeletionGroupVersionParsingFailure"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All content successfully deleted, may be waiting on finalization", "reason" => "ContentDeleted", "status" => "False", "type" => "NamespaceDeletionContentFailure"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "Some resources are remaining: pods. has 6 resource instances", "reason" => "SomeResourcesRemain", "status" => "True", "type" => "NamespaceContentRemaining"}, {"lastTransitionTime" => "2025-07-10T11:53:03Z", "message" => "All content-preserving finalizers finished", "reason" => "ContentHasNoFinalizers", "status" => "False", "type" => "NamespaceFinalizersRemaining"}], "phase" => "Terminating"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "142e6851-3c72-4ad7-80c0-9a06f7ec29a7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "6d5c8018-e87c-4144-ad35-175aba623785"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "5", "uid" => "3623b17d-eebf-47da-aa4d-8431d7e16dcc"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:24:34Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "323", "uid" => "5e7cc0e6-da8e-4014-a795-6c248d992a2a"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}]
[2025-07-10 11:53:13] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Daemonset/cluster-tools to install
[2025-07-10 11:53:13] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-07-10 11:53:13] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-07-10 11:53:13] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-07-10 11:53:13] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 0
[2025-07-10 11:53:14] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-07-10 11:53:14] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-07-10 11:53:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-07-10 11:53:15] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-07-10 11:53:15] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-07-10 11:53:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
ClusterTools installed
CNF installation start.
Installing deployment "coredns".
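The coredns install that follows is driven entirely by the file passed as `cnf-config=example-cnfs/coredns/cnf-testsuite.yml`. The file itself never appears in this log; below is a minimal sketch of what it plausibly contains, with the values (release name, repo name, repo URL, chart) taken from the helm commands logged in this run. The exact schema and field names are assumptions, not confirmed by the log:

```yaml
# Hypothetical sketch of example-cnfs/coredns/cnf-testsuite.yml.
# Field names are assumed; values come from this run's helm commands.
config_version: v2
deployments:
  helm_charts:
    - name: coredns                              # release name in `helm install coredns ...`
      helm_repo_name: stable                     # matches `helm repo add stable ...`
      helm_repo_url: https://cncf.gitlab.io/stable
      helm_chart_name: coredns                   # matches `helm pull stable/coredns`
```

The suite then renders the chart with `helm get manifest`, rewrites `metadata.namespace` to `cnf-default`, and waits on the resulting workload resources, as the entries below show.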
[2025-07-10 11:53:15] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Daemonset/cluster-tools is ready
[2025-07-10 11:53:15] DEBUG -- CNTI-CNFInstall.parsed_cli_args: Parsed args: {config_path: "example-cnfs/coredns/cnf-testsuite.yml", timeout: 1800, skip_wait_for_install: false}
[2025-07-10 11:53:15] INFO -- CNTI-Helm.helm_repo_add: Adding helm repository: stable
[2025-07-10 11:53:15] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-07-10 11:53:15] DEBUG -- CNTI-Helm.helm_repo_add.cmd: command: helm repo add stable https://cncf.gitlab.io/stable
[2025-07-10 11:53:16] INFO -- CNTI-Helm.pull: Pulling helm chart: stable/coredns
[2025-07-10 11:53:16] DEBUG -- CNTI-Helm.pull.cmd: command: helm pull stable/coredns --untar --destination installed_cnf_files/deployments/coredns
[2025-07-10 11:53:16] INFO -- CNTI-CNFManager.ensure_namespace_exists!: Ensure that namespace: cnf-default exists on the cluster for the CNF install
[2025-07-10 11:53:16] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: cnf-default
[2025-07-10 11:53:16] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-default with pod-security.kubernetes.io/enforce=privileged
[2025-07-10 11:53:16] INFO -- CNTI-Helm.install: Installing helm chart: installed_cnf_files/deployments/coredns/coredns
[2025-07-10 11:53:16] DEBUG -- CNTI-Helm.install: Values:
[2025-07-10 11:53:16] DEBUG -- CNTI-Helm.install.cmd: command: helm install coredns installed_cnf_files/deployments/coredns/coredns -n cnf-default
[2025-07-10 11:53:17] WARN -- CNTI-Helm.install.cmd: stderr: W0710 11:53:17.216101 477 warnings.go:70] spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
W0710 11:53:17.216190 477 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "coredns" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "coredns" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "coredns" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "coredns" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-07-10 11:53:17] INFO -- CNTI-Helm.generate_manifest: Generating manifest from installed CNF: coredns
[2025-07-10 11:53:17] DEBUG -- CNTI-Helm.cmd: command: helm get manifest coredns --namespace cnf-default
[2025-07-10 11:53:17] INFO -- CNTI-Helm.generate_manifest: Manifest was generated successfully
[2025-07-10 11:53:17] INFO -- CNTI-CNFInstall.add_namespace_to_resources: Updating metadata.namespace field for resources in generated manifest
Waiting for resource for "coredns" deployment (1/1): [Deployment] coredns-coredns
[2025-07-10 11:53:17] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: ConfigMap, name: coredns-coredns}
[2025-07-10 11:53:17] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: Service, name: coredns-coredns}
[2025-07-10 11:53:17] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: Deployment, name: coredns-coredns}
[2025-07-10 11:53:17] DEBUG -- CNTI-CNFInstall.add_manifest_to_file: coredns manifest was appended into installed_cnf_files/deployments/coredns/deployment_manifest.yml file
[2025-07-10 11:53:17] DEBUG -- CNTI-CNFInstall.add_manifest_to_file: coredns manifest was appended into installed_cnf_files/common_manifest.yml file
[2025-07-10 11:53:17] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "ConfigMap", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "ClusterRole", name: "coredns-coredns", namespace: "default"}, {kind: "ClusterRoleBinding", name: "coredns-coredns", namespace: "default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}]
[2025-07-10 11:53:17] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Deployment/coredns-coredns to install
[2025-07-10 11:53:17] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:17] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:17] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:17] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 0
[2025-07-10 11:53:18] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:18] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:19] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:19] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:20] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:20] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:21] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:21] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:23] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:23] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:24] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:24] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:25] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:25] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:26] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:26] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:27] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:27] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:28] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:28] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:28] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 10
[2025-07-10 11:53:29] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:29] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:30] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:30] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-07-10 11:53:31] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-07-10 11:53:31] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-07-10 11:53:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
All "coredns" deployment resources are up.
CNF installation complete.
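Both the cluster-tools and the coredns installs above drew PodSecurity "restricted:latest" warnings (the installs still proceed because the suite labels its namespaces `pod-security.kubernetes.io/enforce=privileged`). The warning text itself spells out exactly which settings a restricted-compliant container would need; collected as a yaml fragment below. This is a sketch derived from the warning messages, to be merged into a chart's container spec, not content taken from the coredns chart itself:

```yaml
# securityContext settings that would satisfy the "restricted" PodSecurity
# warnings quoted in this log (sketch; merge into the container spec).
securityContext:
  allowPrivilegeEscalation: false   # "allowPrivilegeEscalation != false"
  runAsNonRoot: true                # "runAsNonRoot != true"
  capabilities:
    drop: ["ALL"]                   # "unrestricted capabilities"
  seccompProfile:
    type: RuntimeDefault            # "seccompProfile ... RuntimeDefault or Localhost"
```

The cluster-tools daemonset additionally uses host namespaces, privileged mode, and hostPath volumes, which no securityContext tweak can make restricted-compliant; that is why it lives in the privileged-labeled cnf-testsuite namespace.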
[2025-07-10 11:53:32] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Deployment/coredns-coredns is ready
[2025-07-10 11:53:32] INFO -- CNTI-Setup.cnf_install: CNF installed successfully
2025-07-10 11:58:36,974 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cert -l debug
CNF TestSuite version: v1.4.4
Compatibility, Installability & Upgradability Tests
[2025-07-10 11:53:32] INFO -- CNTI: VERSION: v1.4.4
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.Points.Results.file: Results file created: results/cnf-testsuite-results-20250710-115332-086.yml
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:53:32] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-07-10 11:53:32] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-07-10 11:53:32] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-07-10 11:53:32] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-07-10 11:53:32] INFO -- CNTI: check_cnf_config args: #
[2025-07-10 11:53:32] INFO -- CNTI: check_cnf_config cnf:
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:53:32] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [increase_decrease_capacity]
[2025-07-10 11:53:32] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-07-10 11:53:32] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-07-10 11:53:32] INFO -- CNTI-CNFManager.Task.task_runner.increase_decrease_capacity: Starting test
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-07-10 11:53:32] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold"
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:32] DEBUG -- CNTI-Helm.all_workload_resources: [...same Deployment and Service manifests as above...] [2025-07-10 11:53:32] INFO -- CNTI-change_capacity:resource: Deployment/coredns-coredns; namespace: cnf-default [2025-07-10 11:53:32] INFO -- CNTI-change_capacity:capacity: Base replicas: 1; Target replicas: 3 [2025-07-10 11:53:32] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 1 replicas [2025-07-10 11:53:32] DEBUG -- CNTI: target_replica_count: 1 [2025-07-10 11:53:32] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:32] DEBUG -- CNTI: Deployment initialized to 1 [2025-07-10 11:53:32] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 3 replicas [2025-07-10 11:53:32] WARN -- CNTI-KubectlClient.Utils.scale.cmd: stderr: Warning:
spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead [2025-07-10 11:53:32] DEBUG -- CNTI: target_replica_count: 3 [2025-07-10 11:53:32] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:34] DEBUG -- CNTI: Time left: 58 seconds [2025-07-10 11:53:34] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:36] DEBUG -- CNTI: Time left: 56 seconds [2025-07-10 11:53:36] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:38] DEBUG -- CNTI: Time left: 54 seconds [2025-07-10 11:53:38] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:41] DEBUG -- CNTI: Time left: 52 seconds [2025-07-10 11:53:41] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:43] DEBUG -- CNTI: Time left: 50 seconds [2025-07-10 11:53:43] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:45] DEBUG -- CNTI: Time left: 48 seconds [2025-07-10 11:53:45] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:47] DEBUG -- CNTI: Time left: 46 seconds [2025-07-10 11:53:47] DEBUG -- CNTI: current_replicas before get Deployment: 1 [2025-07-10 11:53:49] DEBUG -- CNTI: Time left: 58 seconds [2025-07-10 11:53:49] DEBUG -- CNTI: current_replicas before get Deployment: 3 [2025-07-10 11:53:49] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:53:49] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => 
"coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, 
"annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, 
{"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [...identical five-manifest list as above...]
[2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [...identical five-manifest list as above...]
[2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [...identical five-manifest list as above...]
[2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [...identical five-manifest list as above...]
[2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [...identical five-manifest list as above...]
[2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" =>
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:49] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:53:49] INFO -- CNTI-change_capacity:resource: Deployment/coredns-coredns; namespace: cnf-default [2025-07-10 11:53:49] INFO -- CNTI-change_capacity:capacity: Base replicas: 3; Target replicas: 1 [2025-07-10 11:53:49] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 3 replicas [2025-07-10 11:53:49] DEBUG -- CNTI: target_replica_count: 3 [2025-07-10 11:53:49] DEBUG -- CNTI: current_replicas before get Deployment: 3 [2025-07-10 11:53:49] DEBUG -- CNTI: Deployment initialized to 3 [2025-07-10 11:53:49] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 1 replicas [2025-07-10 11:53:50] WARN -- CNTI-KubectlClient.Utils.scale.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead [2025-07-10 11:53:50] DEBUG -- CNTI: target_replica_count: 1 [2025-07-10 11:53:50] DEBUG -- CNTI: current_replicas before get Deployment: 1 ✔️ 🏆PASSED: [increase_decrease_capacity] Replicas increased to 3 and decreased to 1 📦📈📉 Compatibility, installability, and upgradeability results: 1 of 1 tests passed  State Tests [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'increase_decrease_capacity' emoji: 📦📈📉 [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'increase_decrease_capacity' tags: ["compatibility", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points: Task: 'increase_decrease_capacity' type: essential [2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 
'increase_decrease_capacity' tags: ["compatibility", "dynamic", "workload", "cert", "essential"]
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points: Task: 'increase_decrease_capacity' type: essential
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.upsert_task-increase_decrease_capacity: Task start time: 2025-07-10 11:53:32 UTC, end time: 2025-07-10 11:53:50 UTC
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.upsert_task-increase_decrease_capacity: Task: 'increase_decrease_capacity' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:18.201271318
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["increase_decrease_capacity"] for tags: ["compatibility", "cert"]
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["compatibility", "cert"]
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for
tag: compatibility
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: []
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: []
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["compatibility", "cert"]
[2025-07-10 11:53:50] DEBUG --
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["increase_decrease_capacity"] for tags: ["compatibility", "cert"]
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["compatibility", "cert"]
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped
tests: []
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: []
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: []
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["compatibility", "cert"]
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled",
"sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["essential"] [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, 
skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG --
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG --
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-07-10 11:53:50]
INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 100 points
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-07-10 11:53:50] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1900, max tasks passed: 19 for tags: ["essential"]
[2025-07-10 11:53:50] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => nil, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-07-10 11:53:50] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-07-10 11:53:50] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil,
"command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:53:50] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state [2025-07-10 11:53:50] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:53:50] INFO -- CNTI: install litmus [2025-07-10 11:53:50] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: litmus [2025-07-10 11:53:50] INFO -- CNTI-Label.namespace: command: kubectl label namespace litmus pod-security.kubernetes.io/enforce=privileged [2025-07-10 11:53:50] DEBUG -- CNTI-Label.namespace: output: namespace/litmus labeled [2025-07-10 11:53:50] INFO -- CNTI: install litmus operator [2025-07-10 11:53:50] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file https://litmuschaos.github.io/litmus/litmus-operator-v3.6.0.yaml [2025-07-10 11:53:51] WARN -- CNTI-KubectlClient.Apply.file.cmd: stderr: Warning: resource namespaces/litmus is missing the 
kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "chaos-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "chaos-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "chaos-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "chaos-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-07-10 11:53:51] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:53:51] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-07-10 11:53:51] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-07-10 11:53:51] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-07-10 11:53:51] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-07-10 11:53:51] INFO -- CNTI: check_cnf_config args: #
[2025-07-10 11:53:51] INFO -- CNTI: check_cnf_config cnf:
[2025-07-10 11:53:51] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:53:51] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [node_drain]
[2025-07-10 11:53:51] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-07-10 11:53:51] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-07-10 11:53:51] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-07-10 11:53:51]
INFO -- CNTI-CNFManager.Task.task_runner.node_drain: Starting test [2025-07-10 11:53:51] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:53:51] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:53:51] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:53:51] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:53:51] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:53:51] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:53:51] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:53:51] INFO -- CNTI: Current Resource Name: Deployment/coredns-coredns Namespace: cnf-default [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.schedulable_nodes_list: Retrieving list of schedulable nodes [2025-07-10 11:53:51] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:53:52] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:53:53] INFO -- CNTI-KubectlClient.Get.schedulable_nodes_list: Retrieved schedulable nodes list: v132-worker, v132-worker2 [2025-07-10 11:53:53] INFO -- CNTI: Getting the operator node name: kubectl get pods -l app.kubernetes.io/instance=coredns -n cnf-default -o=jsonpath='{.items[0].spec.nodeName}' [2025-07-10 11:53:53] DEBUG -- CNTI: status_code: 0 [2025-07-10 11:53:53] INFO -- CNTI: Found node to cordon v132-worker2 using label app.kubernetes.io/instance='coredns' in cnf-default namespace. [2025-07-10 11:53:53] INFO -- CNTI-KubectlClient.Utils.cordon: Cordon node v132-worker2 [2025-07-10 11:53:53] INFO -- CNTI: Cordoned node v132-worker2 successfully. 
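The node-selection and cordon step logged above (resolve the workload's node via a jsonpath query on the first matching pod, then cordon it) can be sketched as command builders. This is a hypothetical helper for illustration, not part of the testsuite; it only constructs the argument vectors that mirror the `kubectl` invocations shown in the log:

```python
def node_lookup_cmd(label: str, namespace: str) -> list:
    """Build the kubectl command that resolves the node hosting the first
    pod matching `label`, mirroring the log's jsonpath query:
    kubectl get pods -l <label> -n <ns> -o=jsonpath='{.items[0].spec.nodeName}'
    """
    return [
        "kubectl", "get", "pods",
        "-l", label,
        "-n", namespace,
        "-o=jsonpath={.items[0].spec.nodeName}",
    ]


def cordon_cmd(node: str) -> list:
    """Build the kubectl command that marks `node` unschedulable."""
    return ["kubectl", "cordon", node]


# Matching the values seen in the log:
lookup = node_lookup_cmd("app.kubernetes.io/instance=coredns", "cnf-default")
cordon = cordon_cmd("v132-worker2")
```

The command lists could then be passed to `subprocess.run` against a live cluster; returning argument vectors (rather than a shell string) avoids quoting issues with the jsonpath expression.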
[2025-07-10 11:53:53] DEBUG -- CNTI-node_drain: Getting the app node name kubectl get pods -l app.kubernetes.io/instance=coredns -n cnf-default -o=jsonpath='{.items[0].spec.nodeName}' [2025-07-10 11:53:53] DEBUG -- CNTI-node_drain: status_code: 0 [2025-07-10 11:53:53] DEBUG -- CNTI-node_drain: Getting the app node name kubectl get pods -n litmus -l app.kubernetes.io/name=litmus -o=jsonpath='{.items[0].spec.nodeName}' [2025-07-10 11:53:53] DEBUG -- CNTI-node_drain: status_code: 0 [2025-07-10 11:53:53] INFO -- CNTI: Workload Node Name: v132-worker2 [2025-07-10 11:53:53] INFO -- CNTI: Litmus Node Name: v132-worker [2025-07-10 11:53:53] INFO -- CNTI: download_template url, filename: https://raw.githubusercontent.com/litmuschaos/chaos-charts/3.6.0/faults/kubernetes/node-drain/fault.yaml, node_drain_experiment.yaml [2025-07-10 11:53:53] INFO -- CNTI: chaos_manifests_path [2025-07-10 11:53:53] INFO -- CNTI: filepath: /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_experiment.yaml [2025-07-10 11:53:53] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:53:53] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_experiment.yaml [2025-07-10 11:53:54] INFO -- CNTI: download_template url, filename: https://raw.githubusercontent.com/litmuschaos/chaos-charts/2.6.0/charts/generic/node-drain/rbac.yaml, node_drain_rbac.yaml [2025-07-10 11:53:54] INFO -- CNTI: chaos_manifests_path [2025-07-10 11:53:54] INFO -- CNTI: filepath: /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_rbac.yaml [2025-07-10 11:53:54] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:53:54] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_rbac.yaml [2025-07-10 11:53:54] INFO -- CNTI-KubectlClient.Utils.annotate: Annotate deployment/coredns-coredns with litmuschaos.io/chaos="true" [2025-07-10 11:53:54] 
WARN -- CNTI-KubectlClient.Utils.annotate.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "coredns" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "coredns" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "coredns" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "coredns" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") [2025-07-10 11:53:54] INFO -- CNTI-node_drain: Chaos test name: coredns-coredns-847c8a99; Experiment name: node-drain; Label app.kubernetes.io/instance=coredns; namespace: cnf-default [2025-07-10 11:53:54] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file installed_cnf_files/temp_files/node-drain-chaosengine.yml [2025-07-10 11:53:54] INFO -- CNTI: wait_for_test: coredns-coredns-847c8a99-node-drain [2025-07-10 11:53:54] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:53:55] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:53:57] DEBUG -- CNTI: Time left: 1798 seconds [2025-07-10 11:53:57] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:53:57] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:53:59] DEBUG -- CNTI: Time left: 1796 seconds [2025-07-10 11:53:59] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:53:59] INFO -- CNTI: status_code: 0, response: initialized 
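The long run of "Getting litmus status info … response: initialized" entries that follows is a poll-until-complete loop: the suite repeatedly reads `{.status.engineStatus}` from the ChaosEngine resource every ~2 seconds, counting down from a ~1800-second budget, until the engine reports `completed` or the budget is exhausted. A minimal sketch of that pattern (hypothetical helper, not the testsuite's actual code; the status source is injected as a callable so it can be backed by `kubectl get chaosengine.litmuschaos.io <name> -o jsonpath='{.status.engineStatus}'` or by a stub):

```python
import time


def wait_for_engine(get_status, timeout_s=1800, interval_s=2):
    """Poll get_status() until it returns 'completed' or timeout_s elapses.

    get_status: zero-argument callable returning the current engineStatus
    string (e.g. 'initialized' or 'completed').
    Returns the last observed status, so the caller can distinguish a
    completed run from a timeout.
    """
    deadline = time.monotonic() + timeout_s
    status = get_status()
    while status != "completed" and time.monotonic() < deadline:
        time.sleep(interval_s)
        status = get_status()
    return status


# Stubbed usage: status flips to 'completed' on the third poll.
statuses = iter(["initialized", "initialized", "completed"])
final = wait_for_engine(lambda: next(statuses), timeout_s=5, interval_s=0)
```

Polling the ChaosEngine object (rather than watching pods directly) is what the log reflects: Litmus updates `.status.engineStatus` itself once the node-drain experiment finishes, so a fixed-interval read with a deadline is sufficient.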
[2025-07-10 11:54:01] DEBUG -- CNTI: Time left: 1794 seconds [2025-07-10 11:54:01] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:01] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:03] DEBUG -- CNTI: Time left: 1792 seconds [2025-07-10 11:54:03] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:03] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:05] DEBUG -- CNTI: Time left: 1790 seconds [2025-07-10 11:54:05] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:05] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:07] DEBUG -- CNTI: Time left: 1788 seconds [2025-07-10 11:54:07] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:07] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:09] DEBUG -- CNTI: Time left: 1786 seconds [2025-07-10 11:54:09] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:09] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:11] DEBUG -- CNTI: Time left: 1784 seconds [2025-07-10 11:54:11] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:12] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:14] DEBUG -- CNTI: Time left: 1781 seconds [2025-07-10 11:54:14] INFO -- CNTI: Getting 
litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:14] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:16] DEBUG -- CNTI: Time left: 1779 seconds [2025-07-10 11:54:16] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:16] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:18] DEBUG -- CNTI: Time left: 1777 seconds [2025-07-10 11:54:18] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:18] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:20] DEBUG -- CNTI: Time left: 1775 seconds [2025-07-10 11:54:20] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:20] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:22] DEBUG -- CNTI: Time left: 1773 seconds [2025-07-10 11:54:22] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:22] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:24] DEBUG -- CNTI: Time left: 1771 seconds [2025-07-10 11:54:24] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:24] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:26] DEBUG -- CNTI: Time left: 1769 seconds [2025-07-10 11:54:26] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 
'jsonpath={.status.engineStatus}' [2025-07-10 11:54:26] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:28] DEBUG -- CNTI: Time left: 1767 seconds [2025-07-10 11:54:28] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:28] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:30] DEBUG -- CNTI: Time left: 1765 seconds [2025-07-10 11:54:30] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:31] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:33] DEBUG -- CNTI: Time left: 1762 seconds [2025-07-10 11:54:33] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:33] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:35] DEBUG -- CNTI: Time left: 1760 seconds [2025-07-10 11:54:35] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:35] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:37] DEBUG -- CNTI: Time left: 1758 seconds [2025-07-10 11:54:37] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:37] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:39] DEBUG -- CNTI: Time left: 1756 seconds [2025-07-10 11:54:39] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:39] INFO -- CNTI: status_code: 0, response: initialized 
[2025-07-10 11:54:41] DEBUG -- CNTI: Time left: 1754 seconds [2025-07-10 11:54:41] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:41] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:43] DEBUG -- CNTI: Time left: 1752 seconds [2025-07-10 11:54:43] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:43] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:45] DEBUG -- CNTI: Time left: 1750 seconds [2025-07-10 11:54:45] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:45] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:47] DEBUG -- CNTI: Time left: 1748 seconds [2025-07-10 11:54:47] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:48] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:50] DEBUG -- CNTI: Time left: 1745 seconds [2025-07-10 11:54:50] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:50] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:52] DEBUG -- CNTI: Time left: 1743 seconds [2025-07-10 11:54:52] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:52] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:54] DEBUG -- CNTI: Time left: 1741 seconds [2025-07-10 11:54:54] INFO -- CNTI: Getting 
litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:54] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:56] DEBUG -- CNTI: Time left: 1739 seconds [2025-07-10 11:54:56] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:56] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:54:58] DEBUG -- CNTI: Time left: 1737 seconds [2025-07-10 11:54:58] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:54:58] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:00] DEBUG -- CNTI: Time left: 1735 seconds [2025-07-10 11:55:00] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:00] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:02] DEBUG -- CNTI: Time left: 1733 seconds [2025-07-10 11:55:02] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:02] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:04] DEBUG -- CNTI: Time left: 1731 seconds [2025-07-10 11:55:04] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:04] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:06] DEBUG -- CNTI: Time left: 1729 seconds [2025-07-10 11:55:06] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 
'jsonpath={.status.engineStatus}' [2025-07-10 11:55:07] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:09] DEBUG -- CNTI: Time left: 1726 seconds [2025-07-10 11:55:09] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:09] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:11] DEBUG -- CNTI: Time left: 1724 seconds [2025-07-10 11:55:11] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:11] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:13] DEBUG -- CNTI: Time left: 1722 seconds [2025-07-10 11:55:13] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:13] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:15] DEBUG -- CNTI: Time left: 1720 seconds [2025-07-10 11:55:15] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:15] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:17] DEBUG -- CNTI: Time left: 1718 seconds [2025-07-10 11:55:17] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:17] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:19] DEBUG -- CNTI: Time left: 1716 seconds [2025-07-10 11:55:19] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:19] INFO -- CNTI: status_code: 0, response: initialized 
[2025-07-10 11:55:21] DEBUG -- CNTI: Time left: 1714 seconds [2025-07-10 11:55:21] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:21] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:23] DEBUG -- CNTI: Time left: 1712 seconds [2025-07-10 11:55:23] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:23] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:25] DEBUG -- CNTI: Time left: 1709 seconds [2025-07-10 11:55:25] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:26] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:28] DEBUG -- CNTI: Time left: 1707 seconds [2025-07-10 11:55:28] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:28] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:30] DEBUG -- CNTI: Time left: 1705 seconds [2025-07-10 11:55:30] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:30] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:32] DEBUG -- CNTI: Time left: 1703 seconds [2025-07-10 11:55:32] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:32] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:34] DEBUG -- CNTI: Time left: 1701 seconds [2025-07-10 11:55:34] INFO -- CNTI: Getting 
litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:34] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:36] DEBUG -- CNTI: Time left: 1699 seconds [2025-07-10 11:55:36] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:36] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:38] DEBUG -- CNTI: Time left: 1697 seconds [2025-07-10 11:55:38] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:38] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:40] DEBUG -- CNTI: Time left: 1695 seconds [2025-07-10 11:55:40] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:40] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:42] DEBUG -- CNTI: Time left: 1693 seconds [2025-07-10 11:55:42] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:43] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:45] DEBUG -- CNTI: Time left: 1690 seconds [2025-07-10 11:55:45] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:45] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:47] DEBUG -- CNTI: Time left: 1688 seconds [2025-07-10 11:55:47] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 
'jsonpath={.status.engineStatus}' [2025-07-10 11:55:47] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:49] DEBUG -- CNTI: Time left: 1686 seconds [2025-07-10 11:55:49] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:49] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:51] DEBUG -- CNTI: Time left: 1684 seconds [2025-07-10 11:55:51] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:51] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:53] DEBUG -- CNTI: Time left: 1682 seconds [2025-07-10 11:55:53] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:53] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:55] DEBUG -- CNTI: Time left: 1680 seconds [2025-07-10 11:55:55] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:55] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:57] DEBUG -- CNTI: Time left: 1678 seconds [2025-07-10 11:55:57] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:57] INFO -- CNTI: status_code: 0, response: initialized [2025-07-10 11:55:59] DEBUG -- CNTI: Time left: 1676 seconds [2025-07-10 11:55:59] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-07-10 11:55:59] INFO -- CNTI: status_code: 0, response: initialized 
[2025-07-10 11:56:01] DEBUG -- CNTI: Time left: 1674 seconds
[2025-07-10 11:56:01] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}'
[2025-07-10 11:56:02] INFO -- CNTI: status_code: 0, response: initialized
[... the same status poll repeated every ~2 s from 11:56:04 to 11:56:50 with identical output (status_code: 0, response: initialized); repeats omitted ...]
[2025-07-10 11:56:52] DEBUG -- CNTI: Time left: 1623 seconds
[2025-07-10 11:56:52] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-847c8a99 -n cnf-default -o 'jsonpath={.status.engineStatus}'
[2025-07-10 11:56:52] INFO -- CNTI: status_code: 0, response: completed
[2025-07-10 11:56:52] INFO -- CNTI: Getting litmus status info: kubectl get chaosresults.litmuschaos.io
coredns-coredns-847c8a99-node-drain -n cnf-default -o 'jsonpath={.status.experimentStatus.verdict}'
[2025-07-10 11:56:53] INFO -- CNTI: status_code: 0, response: Pass
[2025-07-10 11:56:53] INFO -- CNTI: Getting litmus status info: kubectl get chaosresult.litmuschaos.io coredns-coredns-847c8a99-node-drain -n cnf-default -o 'jsonpath={.status.experimentStatus.verdict}'
[2025-07-10 11:56:53] INFO -- CNTI: status_code: 0, response: Pass
[2025-07-10 11:56:53] INFO -- CNTI-KubectlClient.Utils.uncordon: Uncordon node v132-worker2
✔️ 🏆PASSED: [node_drain] node_drain chaos test passed 🗡️💀♻
State results: 1 of 1 tests passed

Security Tests
[2025-07-10 11:56:53] INFO -- CNTI: Uncordoned node v132-worker2 successfully.
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'node_drain' emoji: 🗡️💀♻
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'node_drain' tags: ["state", "dynamic", "workload", "cert", "essential"]
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points: Task: 'node_drain' type: essential
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'node_drain' tags: ["state", "dynamic", "workload", "cert", "essential"]
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points: Task: 'node_drain' type: essential
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.upsert_task-node_drain: Task start time: 2025-07-10 11:53:51 UTC, end time: 2025-07-10 11:56:53 UTC
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.upsert_task-node_drain: Task: 'node_drain' has status: 'passed' and is
awarded: 100 points. Runtime: 00:03:01.624788178
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["node_drain"] for tags: ["state", "cert"]
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["state", "cert"]
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: []
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: []
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["state", "cert"]
[... the same ["state", "cert"] scoring pass is logged a second time with identical output (total: 100 points, 1 task; max: 100 points, 1 task); repeat omitted ...]
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"]
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["essential"]
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: []
[2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks:
["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:56:53] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used 
[2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:56:53] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 
11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1900, max tasks passed: 19 for tags: ["essential"] [2025-07-10 11:56:53] DEBUG -- CNTI: update_yml results: {"name" => "cnf 
testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:56:53] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:56:53] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:56:53] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", 
"immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:56:53] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:56:53] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:56:53] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:56:53] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:56:53] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [privileged_containers] [2025-07-10 11:56:53] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Task.task_runner.privileged_containers: Starting test [2025-07-10 
11:56:53] DEBUG -- CNTI: white_list_container_names [] [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:56:53] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.privileged_containers: Get privileged containers [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:56:53] INFO -- CNTI-KubectlClient.Get.privileged_containers: Found 8 privileged containers [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:56:53] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [privileged_containers] No privileged containers 🔓🔑 [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-07-10 11:56:53] DEBUG -- CNTI: violator list: [] [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'privileged_containers' emoji: 🔓🔑 [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'privileged_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points: Task: 'privileged_containers' type: essential [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'privileged_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:56:53] DEBUG -- CNTI-CNFManager.Points: Task: 'privileged_containers' type: essential [2025-07-10 11:56:53] DEBUG -- 
CNTI-CNFManager.Points.upsert_task-privileged_containers: Task start time: 2025-07-10 11:56:53 UTC, end time: 2025-07-10 11:56:53 UTC [2025-07-10 11:56:53] INFO -- CNTI-CNFManager.Points.upsert_task-privileged_containers: Task: 'privileged_containers' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.509647073 [2025-07-10 11:56:53] INFO -- CNTI-Setup.kubescape_framework_download: Downloading Kubescape testing framework [2025-07-10 11:56:53] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:56:54] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:56:54] DEBUG -- CNTI-Setup.kubescape_framework_download: Downloaded Kubescape framework json [2025-07-10 11:56:54] INFO -- CNTI-Setup.kubescape_framework_download: Kubescape framework json has been downloaded [2025-07-10 11:56:54] INFO -- CNTI-Setup.install_kubescape: Installing Kubescape tool [2025-07-10 11:56:54] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:56:54] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:56:57] DEBUG -- CNTI-Setup.install_kubescape: Downloaded Kubescape binary [2025-07-10 11:56:57] INFO -- CNTI-ShellCmd.run: command: chmod +x /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape [2025-07-10 11:56:57] DEBUG -- CNTI-ShellCmd.run: output: [2025-07-10 11:56:57] INFO -- CNTI-Setup.install_kubescape: Kubescape tool has been installed [2025-07-10 11:56:57] INFO -- CNTI-Setup.kubescape_scan: Perform Kubescape cluster scan [2025-07-10 11:56:57] INFO -- CNTI: scan command: /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape scan framework nsa --use-from /home/xtesting/.cnf-testsuite/tools/kubescape/nsa.json --output kubescape_results.json --format json --format-version=v1 --exclude-namespaces kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite [2025-07-10 11:57:02] INFO -- CNTI: output:
──────────────────────────────────────────────────
Framework scanned: NSA
┌─────────────────┬────┐
│ Controls        │ 25 │
│ Passed          │ 11 │
│ Failed          │ 9  │
│ Action Required │ 5  │
└─────────────────┴────┘
Failed resources by severity:
┌──────────┬────┐
│ Critical │ 0  │
│ High     │ 0  │
│ Medium   │ 11 │
│ Low      │ 1  │
└──────────┴────┘
Run with '--verbose'/'-v' to see control failures for each resource.
┌──────────┬────────────────────────────────────────────────────┬──────────────────┬───────────────┬────────────────────┐
│ Severity │ Control name                                       │ Failed resources │ All Resources │ Compliance score   │
├──────────┼────────────────────────────────────────────────────┼──────────────────┼───────────────┼────────────────────┤
│ Critical │ Disable anonymous access to Kubelet service        │ 0                │ 0             │ Action Required ** │
│ Critical │ Enforce Kubelet client TLS authentication          │ 0                │ 0             │ Action Required ** │
│ Medium   │ Prevent containers from allowing command execution │ 2                │ 23            │ 91%                │
│ Medium   │ Non-root containers                                │ 1                │ 1             │ 0%                 │
│ Medium   │ Allow privilege escalation                         │ 1                │ 1             │ 0%                 │
│ Medium   │ Ingress and Egress blocked                         │ 1                │ 1             │ 0%                 │
│ Medium   │ Automatic mapping of service account               │ 3                │ 4             │ 25%                │
│ Medium   │ Administrative Roles                               │ 1                │ 23            │ 96%                │
│ Medium   │ Cluster internal networking                        │ 1                │ 2             │ 50%                │
│ Medium   │ Linux hardening                                    │ 1                │ 1             │ 0%                 │
│ Medium   │ Secret/etcd encryption enabled                     │ 0                │ 0             │ Action Required *  │
│ Medium   │ Audit logs enabled                                 │ 0                │ 0             │ Action Required *  │
│ Low      │ Immutable container filesystem                     │ 1                │ 1             │ 0%                 │
│ Low      │ PSP enabled                                        │ 0                │ 0             │ Action Required *  │
├──────────┼────────────────────────────────────────────────────┼──────────────────┼───────────────┼────────────────────┤
│          │ Resource Summary                                   │ 6                │ 33            │ 54.48%             │
└──────────┴────────────────────────────────────────────────────┴──────────────────┴───────────────┴────────────────────┘
🚨 * failed to get cloud provider, cluster: kind-v132
🚨 ** This control is scanned exclusively by the Kubescape operator, not the Kubescape CLI. Install the Kubescape operator: https://kubescape.io/docs/install-operator/. 
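The "Non-root containers" control that scores 0% above corresponds to the suite's non_root_containers test against Deployment/coredns-coredns later in this log; the remediation text the suite prints calls for an explicit non-root securityContext in the pod template. A minimal illustrative fragment is sketched below — the UID/GID value 1000 is only an example satisfying the "ID 1000 or higher" guidance, not a value taken from this log:

```yaml
# Illustrative remediation sketch for the coredns Deployment's pod template.
# runAsNonRoot makes the kubelet refuse to start any container that would
# run as UID 0; the explicit user/group IDs satisfy the group-membership check.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000   # example UID >= 1000 (assumption, not from the log)
        runAsGroup: 1000  # example GID >= 1000 (assumption, not from the log)
```

The same settings could instead be placed under the container-level securityContext, as the remediation text notes; the pod-level form shown here covers all containers in the template at once.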
[2025-07-10 11:57:02] INFO -- CNTI: stderr: {"level":"info","ts":"2025-07-10T11:56:57Z","msg":"Kubescape scanner initializing..."} {"level":"warn","ts":"2025-07-10T11:56:59Z","msg":"Deprecated format version","run":"--format-version=v2"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Initialized scanner"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Loading policies..."} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Loaded policies"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Loading exceptions..."} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Loaded exceptions"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Loading account configurations..."} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Loaded account configurations"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Accessing Kubernetes objects..."} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Accessed Kubernetes objects"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Scanning","Cluster":"kind-v132"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Done scanning","Cluster":"kind-v132"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Done aggregating results"} {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Scan results saved","filename":"kubescape_results.json"} Overall compliance-score (100- Excellent, 0- All failed): 54 {"level":"info","ts":"2025-07-10T11:57:02Z","msg":"Received interrupt signal, exiting..."} [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:02] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:02] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:02] INFO -- CNTI: 
check_cnf_config args: # [2025-07-10 11:57:02] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:02] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [non_root_containers] Failed resource: Deployment coredns-coredns in cnf-default namespace Remediation: If your application does not need root privileges, make sure to define runAsNonRoot as true or explicitly set the runAsUser using ID 1000 or higher under the PodSecurityContext or container securityContext. In addition, set an explicit value for runAsGroup using ID 1000 or higher. ✖️ 🏆FAILED: [non_root_containers] Found containers running with root user or user with root group membership 🔓🔑 [2025-07-10 11:57:02] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.Task.task_runner.non_root_containers: Starting test [2025-07-10 11:57:02] INFO -- CNTI: kubescape parse [2025-07-10 11:57:02] INFO -- CNTI: kubescape test_by_test_name [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", 
"kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => 
"cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", 
"name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:02] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" =>
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'non_root_containers' emoji: 🔓🔑 [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'non_root_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Points: Task: 'non_root_containers' type: essential [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 0 points [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'non_root_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Points: Task: 'non_root_containers' type: essential [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Points.upsert_task-non_root_containers: Task start time: 2025-07-10 11:57:02 UTC, end time: 2025-07-10 11:57:02 UTC [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.Points.upsert_task-non_root_containers: Task: 'non_root_containers' has status: 'failed' and is awarded: 0 points.Runtime: 00:00:00.042267532 [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:02] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:02] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:02] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:02] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:02] DEBUG -- 
CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:02] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [cpu_limits] ✔️ 🏆PASSED: [cpu_limits] Containers have CPU limits set 🔓🔑 [2025-07-10 11:57:02] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:02] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:02] INFO -- CNTI-CNFManager.Task.task_runner.cpu_limits: Starting test [2025-07-10 11:57:02] INFO -- CNTI: kubescape parse [2025-07-10 11:57:03] INFO -- CNTI: kubescape test_by_test_name [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'cpu_limits' emoji: 🔓🔑 [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'cpu_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points: Task: 'cpu_limits' type: essential [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'cpu_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points: Task: 'cpu_limits' type: essential [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.upsert_task-cpu_limits: Task start 
time: 2025-07-10 11:57:02 UTC, end time: 2025-07-10 11:57:03 UTC [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Points.upsert_task-cpu_limits: Task: 'cpu_limits' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.025843188 [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:03] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:03] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:03] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:03] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:03] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [memory_limits] ✔️ 🏆PASSED: [memory_limits] Containers have memory limits set 🔓🔑 [2025-07-10 11:57:03] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Task.task_runner.memory_limits: Starting test [2025-07-10 11:57:03] INFO -- CNTI: kubescape parse [2025-07-10 11:57:03] INFO -- CNTI: kubescape test_by_test_name [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:03] DEBUG --
CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same five manifests as logged above] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same five manifests as logged above] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" =>
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:03] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'memory_limits' emoji: 🔓🔑 [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'memory_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points: Task: 'memory_limits' type: essential [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'memory_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points: Task: 'memory_limits' type: essential [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Points.upsert_task-memory_limits: Task start time: 2025-07-10 11:57:03 UTC, end time: 2025-07-10 11:57:03 UTC [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Points.upsert_task-memory_limits: Task: 'memory_limits' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.026876816 [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:03] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:03] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:03] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:03] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 
11:57:03] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hostpath_mounts] [2025-07-10 11:57:03] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:03] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:03] INFO -- CNTI-CNFManager.Task.task_runner.hostpath_mounts: Starting test [2025-07-10 11:57:03] INFO -- CNTI: scan command: /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape scan control C-0048 --output kubescape_C-0048_results.json --format json --format-version=v1 --exclude-namespaces kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite ✔️ 🏆PASSED: [hostpath_mounts] Containers do not have hostPath mounts 🔓🔑 [2025-07-10 11:57:07] INFO -- CNTI: output: ────────────────────────────────────────────────── ┌─────────────────┬───┐ │ Controls │ 1 │ │ Passed │ 1 │ │ Failed │ 0 │ │ Action Required │ 0 │ └─────────────────┴───┘ Failed resources by severity: ┌──────────┬───┐ │ Critical │ 0 │ │ High │ 0 │ │ Medium │ 0 │ │ Low │ 0 │ └──────────┴───┘ Run with '--verbose'/'-v' to see control failures for each resource. 
┌──────────┬──────────────────┬──────────────────┬───────────────┬──────────────────┐ │ Severity │ Control name │ Failed resources │ All Resources │ Compliance score │ ├──────────┼──────────────────┼──────────────────┼───────────────┼──────────────────┤ │ High │ HostPath mount │ 0 │ 1 │ 100% │ ├──────────┼──────────────────┼──────────────────┼───────────────┼──────────────────┤ │ │ Resource Summary │ 0 │ 1 │ 100.00% │ └──────────┴──────────────────┴──────────────────┴───────────────┴──────────────────┘ [2025-07-10 11:57:07] INFO -- CNTI: stderr: {"level":"info","ts":"2025-07-10T11:57:03Z","msg":"Kubescape scanner initializing..."} {"level":"warn","ts":"2025-07-10T11:57:04Z","msg":"Deprecated format version","run":"--format-version=v2"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Initialized scanner"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Loading policies..."} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Loaded policies"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Loading exceptions..."} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Loaded exceptions"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Loading account configurations..."} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Loaded account configurations"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Accessing Kubernetes objects..."} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Accessed Kubernetes objects"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Scanning","Cluster":"kind-v132"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Done scanning","Cluster":"kind-v132"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Done aggregating results"} {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Scan results saved","filename":"kubescape_C-0048_results.json"} Overall compliance-score (100- Excellent, 0- All failed): 100 {"level":"info","ts":"2025-07-10T11:57:07Z","msg":"Run with '--verbose'/'-v' flag for detailed resources view\n"} [2025-07-10 
11:57:07] INFO -- CNTI: kubescape parse
[2025-07-10 11:57:07] INFO -- CNTI: kubescape test_by_test_name
[2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [… identical resource list to the Deployment query above; elided …]
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [… identical resource list; elided …]
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [… identical resource list; elided …]
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [… identical resource list; elided …]
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [… identical resource list; elided …]
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-07-10 11:57:07] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:07] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'hostpath_mounts' emoji: 🔓🔑 [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostpath_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.Points: Task: 'hostpath_mounts' type: essential [2025-07-10 11:57:07] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostpath_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.Points: Task: 'hostpath_mounts' type: essential [2025-07-10 11:57:07] DEBUG -- 
CNTI-CNFManager.Points.upsert_task-hostpath_mounts: Task start time: 2025-07-10 11:57:03 UTC, end time: 2025-07-10 11:57:07 UTC [2025-07-10 11:57:07] INFO -- CNTI-CNFManager.Points.upsert_task-hostpath_mounts: Task: 'hostpath_mounts' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:04.683414745 [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:07] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:07] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:07] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:07] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:07] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:07] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:07] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [container_sock_mounts] [2025-07-10 11:57:07] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:07] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:07] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:07] INFO -- CNTI-CNFManager.Task.task_runner.container_sock_mounts: Starting test [2025-07-10 11:57:07] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:57:08] DEBUG -- CNTI-http.client: Performing request [2025-07-10 11:57:09] INFO -- CNTI: TarClient.untar command: tar -xvf /tmp/kyvernok7wxncdr.tar.gz -C /home/xtesting/.cnf-testsuite/tools [2025-07-10 11:57:09] INFO -- CNTI: TarClient.untar output: LICENSE kyverno [2025-07-10 11:57:09] INFO -- CNTI: 
TarClient.untar stderr: [2025-07-10 11:57:09] INFO -- CNTI: GitClient.clone command: --branch release-1.9 https://github.com/kyverno/policies.git /home/xtesting/.cnf-testsuite/tools/kyverno-policies [2025-07-10 11:57:11] INFO -- CNTI: GitClient.clone output: [2025-07-10 11:57:11] INFO -- CNTI: GitClient.clone stderr: Cloning into '/home/xtesting/.cnf-testsuite/tools/kyverno-policies'... [2025-07-10 11:57:11] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml [2025-07-10 11:57:11] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml [2025-07-10 11:57:11] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml --cluster --policy-report ✔️ 🏆PASSED: [container_sock_mounts] Container engine daemon sockets are not mounted as volumes 🔓🔑 [2025-07-10 11:57:13] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 3 policy rules to 28 resources... ---------------------------------------------------------------------- POLICY REPORT: ---------------------------------------------------------------------- apiVersion: wgpolicyk8s.io/v1alpha2 kind: ClusterPolicyReport metadata: name: clusterpolicyreport results: - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. 
Kyverno PolicyReport results for policy disallow-container-sock-mounts. All entries share: namespace kube-system, source kyverno, scored true, timestamp 1752148633 seconds (nanos 0). Each rule group below covers the docker, containerd, and crio variants of the rule, which report the same result for a given resource. "pass" entries carry the message "validation rule '<rule>' passed."; "skip" entries carry the policy message "Use of the <Docker|Containerd|CRI-O> Unix socket is not allowed."

+------------+--------------------------------------------+------------+--------------------+----------------------------+
| KIND       | NAME                                       | validate-* | autogen-validate-* | autogen-cronjob-validate-* |
+------------+--------------------------------------------+------------+--------------------+----------------------------+
| DaemonSet  | kindnet                                    | skip [1]   | pass               | skip                       |
| Pod        | kindnet-brb7q                              | pass       | skip               | skip                       |
| DaemonSet  | create-loop-devs                           | skip       | pass               | skip                       |
| Deployment | coredns                                    | skip       | pass               | skip                       |
| Pod        | coredns-668d6bf9bc-7q27c                   | pass       | skip               | skip                       |
| Pod        | create-loop-devs-2466n                     | pass       | skip               | skip                       |
| Pod        | create-loop-devs-nvrsb                     | pass       | skip               | skip                       |
| Pod        | create-loop-devs-dsrxz                     | pass       | skip               | skip                       |
| Pod        | etcd-v132-control-plane                    | pass       | skip               | skip                       |
| Pod        | kube-scheduler-v132-control-plane          | pass       | skip               | skip                       |
| Pod        | kube-controller-manager-v132-control-plane | pass       | skip               | skip                       |
| Pod        | kindnet-zvx7j                              | pass       | skip               | skip                       |
| Pod        | kube-proxy-8hhkt                           | pass       | skip [2]           | (continues below)          |
+------------+--------------------------------------------+------------+--------------------+----------------------------+

[1] For the kindnet DaemonSet, only the validate-crio-sock-mount entry (skip) falls within this excerpt; its message precedes the excerpt.
[2] For kube-proxy-8hhkt, only the autogen-validate-docker-sock-mount entry (skip) is captured here; the remaining entries continue below.

Resource UIDs: kindnet (DaemonSet) 59457e85-4401-4683-8b04-d543d38bb522; kindnet-brb7q 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0; create-loop-devs (DaemonSet) 08ab6e72-1221-4c4d-b204-cd645822c9a1; coredns (Deployment) 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1; coredns-668d6bf9bc-7q27c e2d0ae08-7b71-4ea0-aa8f-48fc013446ad; create-loop-devs-2466n f48127ba-b444-4632-9666-00508ee65a9c; create-loop-devs-nvrsb 4bf361e6-1e6d-4550-9837-760d66a7d259; create-loop-devs-dsrxz 7f190298-4895-4e74-81e9-21b9e111d181; etcd-v132-control-plane fa0f5cce-077a-4b0f-ac67-880a94eff419; kube-scheduler-v132-control-plane eab17d7d-9c97-4268-aea3-b84309c39bb0; kube-controller-manager-v132-control-plane 70e23eea-b017-4c4c-b38b-770fad6d9591; kindnet-zvx7j cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4; kube-proxy-8hhkt 6531546e-b2c4-4813-a6f3-a53244ae9d85.

- message: Use of the Docker Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-docker-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148633
summary: error: 0 fail: 0 pass: 84 skip: 168 warn: 0
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'container_sock_mounts' emoji: 🔓🔑
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'container_sock_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Points: Task: 'container_sock_mounts' type: essential
[2025-07-10 11:57:13] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'container_sock_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Points: Task: 'container_sock_mounts' type: essential
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Points.upsert_task-container_sock_mounts: Task start time: 2025-07-10 11:57:07 UTC, end time: 2025-07-10 11:57:13 UTC
[2025-07-10 11:57:13] INFO -- CNTI-CNFManager.Points.upsert_task-container_sock_mounts: Task: 'container_sock_mounts' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:05.918165018
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:57:13] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-07-10 11:57:13] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-07-10 11:57:13] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-07-10 11:57:13] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-07-10 11:57:13] INFO
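The summary block above (error: 0, fail: 0, pass: 84, skip: 168, warn: 0) is just a tally of the `result` field across the ClusterPolicyReport entries. A minimal sketch of that tally, not part of the testsuite; the sample entries below are hypothetical stand-ins shaped like the report entries in this log:

```python
from collections import Counter

def summarize_policy_report(results):
    """Tally report entries by their 'result' field, mirroring the
    summary block kyverno prints (error/fail/pass/skip/warn)."""
    counts = Counter(entry["result"] for entry in results)
    # kyverno reports all five buckets, defaulting missing ones to zero
    return {key: counts.get(key, 0) for key in ("error", "fail", "pass", "skip", "warn")}

# Hypothetical sample shaped like the entries in the report above
sample = [
    {"policy": "disallow-container-sock-mounts", "rule": "validate-docker-sock-mount", "result": "pass"},
    {"policy": "disallow-container-sock-mounts", "rule": "autogen-validate-docker-sock-mount", "result": "skip"},
    {"policy": "disallow-container-sock-mounts", "rule": "autogen-cronjob-validate-docker-sock-mount", "result": "skip"},
]
print(summarize_policy_report(sample))  # {'error': 0, 'fail': 0, 'pass': 1, 'skip': 2, 'warn': 0}
```

Note how the autogen-* rule entries land in the skip bucket, which is why skip (168) dominates pass (84) in the report above.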
-- CNTI: check_cnf_config args: #
[2025-07-10 11:57:13] INFO -- CNTI: check_cnf_config cnf:
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-07-10 11:57:13] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [selinux_options]
[2025-07-10 11:57:13] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-07-10 11:57:13] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-07-10 11:57:13] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-07-10 11:57:13] INFO -- CNTI-CNFManager.Task.task_runner.selinux_options: Starting test
[2025-07-10 11:57:13] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml
[2025-07-10 11:57:13] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml
[2025-07-10 11:57:13] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml --cluster --policy-report
[2025-07-10 11:57:15] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 1 policy rule to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2 kind: ClusterPolicyReport metadata: name: clusterpolicyreport results:
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
- message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148635
summary: error: 0 fail: 0 pass: 28 skip: 56 warn: 0
[2025-07-10 11:57:15] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml
[2025-07-10 11:57:15] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml
[2025-07-10 11:57:15] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml --cluster --policy-report
⏭️ 🏆N/A: [selinux_options] Pods are not using SELinux 🔓🔑
Security results: 5 of 6 tests passed
Configuration Tests
[2025-07-10 11:57:18] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-pcf6t
    namespace: cnf-testsuite
    uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1752148638
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-pcf6t
    namespace: cnf-testsuite
    uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1752148638
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type,
    spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions,
    and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either
    be unset or set to one of the allowed values (container_t, container_init_t, or
    container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-pcf6t
    namespace: cnf-testsuite
    uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1752148638
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type,
    spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions,
    and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either
    be unset or set to one of the allowed values (container_t, container_init_t, or
    container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-pcf6t
    namespace: cnf-testsuite
    uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1752148638
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user,
    spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user,
    spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user,
    spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user,
    and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-pcf6t
    namespace: cnf-testsuite
    uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1752148638
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user,
    spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user,
    spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user,
    spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user,
    and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-pcf6t
    namespace: cnf-testsuite
    uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1752148638
- message: validation rule 'selinux-type' passed.
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 - message: validation rule 'selinux-user-role' passed. 
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: create-loop-devs-2466n, namespace: kube-system, uid: f48127ba-b444-4632-9666-00508ee65a9c}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: create-loop-devs-2466n, namespace: kube-system, uid: f48127ba-b444-4632-9666-00508ee65a9c}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: create-loop-devs-2466n, namespace: kube-system, uid: f48127ba-b444-4632-9666-00508ee65a9c}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: create-loop-devs-2466n, namespace: kube-system, uid: f48127ba-b444-4632-9666-00508ee65a9c}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: create-loop-devs-2466n, namespace: kube-system, uid: f48127ba-b444-4632-9666-00508ee65a9c}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kindnet-brb7q, namespace: kube-system, uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0}]
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kindnet-brb7q, namespace: kube-system, uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kindnet-brb7q, namespace: kube-system, uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kindnet-brb7q, namespace: kube-system, uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kindnet-brb7q, namespace: kube-system, uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kindnet-brb7q, namespace: kube-system, uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-apiserver-v132-control-plane, namespace: kube-system, uid: 415f160f-3da9-44f6-8705-8a211e490d50}]
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-apiserver-v132-control-plane, namespace: kube-system, uid: 415f160f-3da9-44f6-8705-8a211e490d50}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-apiserver-v132-control-plane, namespace: kube-system, uid: 415f160f-3da9-44f6-8705-8a211e490d50}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-apiserver-v132-control-plane, namespace: kube-system, uid: 415f160f-3da9-44f6-8705-8a211e490d50}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-apiserver-v132-control-plane, namespace: kube-system, uid: 415f160f-3da9-44f6-8705-8a211e490d50}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-apiserver-v132-control-plane, namespace: kube-system, uid: 415f160f-3da9-44f6-8705-8a211e490d50}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-proxy-8hhkt, namespace: kube-system, uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85}]
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-proxy-8hhkt, namespace: kube-system, uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-proxy-8hhkt, namespace: kube-system, uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-proxy-8hhkt, namespace: kube-system, uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-proxy-8hhkt, namespace: kube-system, uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: kube-proxy-8hhkt, namespace: kube-system, uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: DaemonSet, name: kube-proxy, namespace: kube-system, uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294}]
  result: skip
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: DaemonSet, name: kube-proxy, namespace: kube-system, uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294}]
  result: skip
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: DaemonSet, name: kube-proxy, namespace: kube-system, uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294}]
  result: pass
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: DaemonSet, name: kube-proxy, namespace: kube-system, uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: DaemonSet, name: kube-proxy, namespace: kube-system, uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294}]
  result: pass
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: DaemonSet, name: kube-proxy, namespace: kube-system, uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: local-path-provisioner, namespace: local-path-storage, uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca}]
  result: skip
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: local-path-provisioner, namespace: local-path-storage, uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca}]
  result: skip
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: local-path-provisioner, namespace: local-path-storage, uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca}]
  result: pass
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: local-path-provisioner, namespace: local-path-storage, uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: local-path-provisioner, namespace: local-path-storage, uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca}]
  result: pass
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: local-path-provisioner, namespace: local-path-storage, uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: local-path-provisioner-7dc846544d-ltv2t, namespace: local-path-storage, uid: d9a228a5-1bbf-4508-816a-88bda52965b0}]
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: local-path-provisioner-7dc846544d-ltv2t, namespace: local-path-storage, uid: d9a228a5-1bbf-4508-816a-88bda52965b0}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: local-path-provisioner-7dc846544d-ltv2t, namespace: local-path-storage, uid: d9a228a5-1bbf-4508-816a-88bda52965b0}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: local-path-provisioner-7dc846544d-ltv2t, namespace: local-path-storage, uid: d9a228a5-1bbf-4508-816a-88bda52965b0}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: local-path-provisioner-7dc846544d-ltv2t, namespace: local-path-storage, uid: d9a228a5-1bbf-4508-816a-88bda52965b0}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: local-path-provisioner-7dc846544d-ltv2t, namespace: local-path-storage, uid: d9a228a5-1bbf-4508-816a-88bda52965b0}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: coredns-coredns-64fc886fd4-8ssnr, namespace: cnf-default, uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66}]
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: coredns-coredns-64fc886fd4-8ssnr, namespace: cnf-default, uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: coredns-coredns-64fc886fd4-8ssnr, namespace: cnf-default, uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: coredns-coredns-64fc886fd4-8ssnr, namespace: cnf-default, uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: coredns-coredns-64fc886fd4-8ssnr, namespace: cnf-default, uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: coredns-coredns-64fc886fd4-8ssnr, namespace: cnf-default, uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: coredns-coredns, namespace: cnf-default, uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b}]
  result: skip
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: coredns-coredns, namespace: cnf-default, uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b}]
  result: skip
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: coredns-coredns, namespace: cnf-default, uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b}]
  result: pass
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: coredns-coredns, namespace: cnf-default, uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: coredns-coredns, namespace: cnf-default, uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b}]
  result: pass
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: coredns-coredns, namespace: cnf-default, uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: chaos-operator-ce-644fbcd4b7-w79vl, namespace: litmus, uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f}]
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: chaos-operator-ce-644fbcd4b7-w79vl, namespace: litmus, uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f}]
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: chaos-operator-ce-644fbcd4b7-w79vl, namespace: litmus, uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f}]
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: chaos-operator-ce-644fbcd4b7-w79vl, namespace: litmus, uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: chaos-operator-ce-644fbcd4b7-w79vl, namespace: litmus, uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f}]
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: v1, kind: Pod, name: chaos-operator-ce-644fbcd4b7-w79vl, namespace: litmus, uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f}]
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: chaos-operator-ce, namespace: litmus, uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4}]
  result: skip
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: chaos-operator-ce, namespace: litmus, uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4}]
  result: skip
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-type' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: chaos-operator-ce, namespace: litmus, uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4}]
  result: pass
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: chaos-operator-ce, namespace: litmus, uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4}]
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: validation rule 'autogen-selinux-user-role' passed.
  policy: disallow-selinux
  resources: [{apiVersion: apps/v1, kind: Deployment, name: chaos-operator-ce, namespace: litmus, uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4}]
  result: pass
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp: {nanos: 0, seconds: 1752148638}
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148638 summary: error: 0 fail: 0 pass: 56 skip: 112 warn: 0 [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'selinux_options' emoji: 🔓🔑 [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'selinux_options' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points: Task: 'selinux_options' type: essential [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 0 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'selinux_options' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points: Task: 'selinux_options' type: essential [2025-07-10 11:57:18] DEBUG -- 
CNTI-CNFManager.Points.upsert_task-selinux_options: Task start time: 2025-07-10 11:57:13 UTC, end time: 2025-07-10 11:57:18 UTC [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.upsert_task-selinux_options: Task: 'selinux_options' has status: 'na' and is awarded: 0 points. Runtime: 00:00:04.555555274 [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "container_sock_mounts", "selinux_options"] for tags: ["security", "cert"] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 500, total tasks passed: 5 for tags: ["security", "cert"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges",
"linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers 
is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA 
status assigned for task: container_sock_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 600, max tasks passed: 6 for tags: ["security", "cert"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", 
"container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "container_sock_mounts", "selinux_options"] for tags: ["security", "cert"] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 500, total tasks passed: 5 for tags: ["security", "cert"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", 
"elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: 
[2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 600, max tasks passed: 6 for tags: ["security", "cert"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", 
"increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 700, total tasks passed: 7 for tags: ["essential"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", 
"elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, 
skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers 
-> failed: true, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: 
NA status assigned for task: log_output [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-07-10 11:57:18] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => 
"passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-07-10 11:57:18] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-07-10 11:57:18] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", 
"status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-07-10 11:57:18] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => 
"na", "type" => "essential", "points" => 0}], "maximum_points" => 600} [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:18] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:18] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:18] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:18] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:18] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hostport_not_used] [2025-07-10 11:57:18] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: 
["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Task.task_runner.hostport_not_used: Starting test [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" =>
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:18] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:57:18] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:57:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-07-10 11:57:18] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:57:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:18] INFO -- CNTI-hostport_not_used: hostport_not_used resource: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-07-10 11:57:18] INFO -- CNTI-hostport_not_used: resource kind: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-07-10 11:57:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: resource: {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"annotations" => {"deployment.kubernetes.io/revision" => "1", "litmuschaos.io/chaos" => "true", "meta.helm.sh/release-name" => "coredns", "meta.helm.sh/release-namespace" => "cnf-default"}, "creationTimestamp" => "2025-07-10T11:53:17Z", "generation" => 4, "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/name" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS"}, "name" => "coredns-coredns", "namespace" => "cnf-default", "resourceVersion" => "4608152", "uid" => "5e1c6b12-3b86-4260-90e2-120af170cd9b"}, "spec" => {"progressDeadlineSeconds" => 600, "replicas" => 1, "revisionHistoryLimit" => 10, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}}, "strategy" => {"rollingUpdate" => {"maxSurge" => "25%", "maxUnavailable" => 1}, "type" => "RollingUpdate"}, "template" => {"metadata" => {"annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", 
"scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}, "creationTimestamp" => nil, "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}}, "spec" => {"containers" => [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], "readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}]}], "dnsPolicy" => "Default", "restartPolicy" => "Always", "schedulerName" => "default-scheduler", "securityContext" => {}, "serviceAccount" => "default", "serviceAccountName" => "default", "terminationGracePeriodSeconds" => 30, "volumes" => [{"configMap" => {"defaultMode" => 420, "items" => [{"key" => "Corefile", "path" => "Corefile"}], "name" => "coredns-coredns"}, "name" => "config-volume"}]}}}, "status" => {"availableReplicas" => 1, "conditions" => [{"lastTransitionTime" => "2025-07-10T11:53:17Z", "lastUpdateTime" => "2025-07-10T11:53:32Z", "message" => "ReplicaSet \"coredns-coredns-64fc886fd4\" has successfully progressed.", "reason" => "NewReplicaSetAvailable", "status" => "True", 
"type" => "Progressing"}, {"lastTransitionTime" => "2025-07-10T11:53:46Z", "lastUpdateTime" => "2025-07-10T11:53:46Z", "message" => "Deployment has minimum availability.", "reason" => "MinimumReplicasAvailable", "status" => "True", "type" => "Available"}], "observedGeneration" => 4, "readyReplicas" => 1, "replicas" => 1, "updatedReplicas" => 1}} [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: containers: [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], "readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}]}] [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: single_port: {"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"} [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: DAS hostPort: [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: single_port: {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"} [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: DAS hostPort: [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:57:18] INFO -- CNTI-hostport_not_used: 
hostport_not_used resource: {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"} [2025-07-10 11:57:18] INFO -- CNTI-hostport_not_used: resource kind: {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"} [2025-07-10 11:57:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Service/coredns-coredns ✔️ 🏆PASSED: [hostport_not_used] HostPort is not used  [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: resource: {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"annotations" => {"meta.helm.sh/release-name" => "coredns", "meta.helm.sh/release-namespace" => "cnf-default"}, "creationTimestamp" => "2025-07-10T11:53:17Z", "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/name" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS"}, "name" => "coredns-coredns", "namespace" => "cnf-default", "resourceVersion" => "4607943", "uid" => "7bf35257-159c-49f1-b40b-70913ffe900e"}, "spec" => {"clusterIP" => "10.96.155.214", "clusterIPs" => ["10.96.155.214"], "internalTrafficPolicy" => "Cluster", "ipFamilies" => ["IPv4"], "ipFamilyPolicy" => "SingleStack", "ports" => [{"name" => "udp-53", "port" => 53, "protocol" => "UDP", "targetPort" => 53}, {"name" => "tcp-53", "port" => 53, "protocol" => "TCP", "targetPort" => 53}], "selector" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}, "sessionAffinity" => "None", "type" => "ClusterIP"}, "status" => {"loadBalancer" => {}}} [2025-07-10 11:57:18] DEBUG -- CNTI-hostport_not_used: containers: [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostport_not_used' tags: ["configuration", "dynamic", "workload", "cert", 
"essential"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points: Task: 'hostport_not_used' type: essential [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostport_not_used' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points: Task: 'hostport_not_used' type: essential [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.Points.upsert_task-hostport_not_used: Task start time: 2025-07-10 11:57:18 UTC, end time: 2025-07-10 11:57:18 UTC [2025-07-10 11:57:18] INFO -- CNTI-CNFManager.Points.upsert_task-hostport_not_used: Task: 'hostport_not_used' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.504104121 [2025-07-10 11:57:18] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:18] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:19] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:19] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:19] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:19] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hardcoded_ip_addresses_in_k8s_runtime_configuration] [2025-07-10 11:57:19] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task 
with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.Task.task_runner.hardcoded_ip_addresses_in_k8s_runtime_configuration: Starting test [2025-07-10 11:57:19] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4" [2025-07-10 11:57:19] DEBUG -- CNTI: Helm Path: helm [2025-07-10 11:57:19] INFO -- CNTI-KubectlClient.Delete.resource: Delete resource namespace/hardcoded-ip-test ✔️ 🏆PASSED: [hardcoded_ip_addresses_in_k8s_runtime_configuration] No hard-coded IP addresses found in the runtime K8s configuration  [2025-07-10 11:57:19] WARN -- CNTI-KubectlClient.Delete.resource.cmd: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): namespaces "hardcoded-ip-test" not found [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Points: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' type: essential [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Points: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' type: essential [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Points.upsert_task-hardcoded_ip_addresses_in_k8s_runtime_configuration: Task start time: 2025-07-10 11:57:19 UTC, end time: 2025-07-10 11:57:19 UTC [2025-07-10 11:57:19] INFO -- 
CNTI-CNFManager.Points.upsert_task-hardcoded_ip_addresses_in_k8s_runtime_configuration: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.205064551 [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:19] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:19] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:19] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:19] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:19] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [latest_tag] [2025-07-10 11:57:19] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:19] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:19] INFO -- CNTI-CNFManager.Task.task_runner.latest_tag: Starting test [2025-07-10 11:57:19] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml [2025-07-10 11:57:19] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml [2025-07-10 11:57:19] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply 
/home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml --cluster --policy-report ✔️ 🏆PASSED: [latest_tag] Container images are not using the latest tag 🏷️ Configuration results: 3 of 3 tests passed  Observability and Diagnostics Tests [2025-07-10 11:57:21] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources... ---------------------------------------------------------------------- POLICY REPORT: ---------------------------------------------------------------------- apiVersion: wgpolicyk8s.io/v1alpha2 kind: ClusterPolicyReport metadata: name: clusterpolicyreport results: - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-xv7rs namespace: cnf-testsuite uid: 3fa33ac3-e9a2-4326-92f4-9d4148aa3d0d result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: a2e57a46-acc4-40b0-b1e4-84f4be2d05ac result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-pcf6t namespace: cnf-testsuite uid: dfb07c8a-4b18-41bc-8e12-70999dea17f2 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-8hhkt namespace: kube-system uid: 6531546e-b2c4-4813-a6f3-a53244ae9d85 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-2466n namespace: kube-system uid: f48127ba-b444-4632-9666-00508ee65a9c result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-brb7q namespace: kube-system uid: 9e7e8a68-d983-4f8f-8a3c-3679813a5ef0 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: 023d64b1-d0e6-4db2-bd86-2dffdb5c1ef1 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 3ce1ba06-b68f-4f1a-8f8f-fd5ab49af294 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-7q27c namespace: kube-system uid: e2d0ae08-7b71-4ea0-aa8f-48fc013446ad result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-zvx7j namespace: kube-system uid: cb9c16b8-10d8-4a33-a30a-a389ca6b9aa4 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-r5tsk namespace: kube-system uid: c6a13750-7bb0-4aa1-8766-81efe70beccb result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v132-control-plane namespace: kube-system uid: fa0f5cce-077a-4b0f-ac67-880a94eff419 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v132-control-plane namespace: kube-system uid: 415f160f-3da9-44f6-8705-8a211e490d50 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-qwd9g namespace: kube-system uid: 1d807b5e-b228-4490-8496-542366a3b2d9 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-668d6bf9bc-xdthc namespace: kube-system uid: 01ae2e31-81b9-4c9f-931b-726c68d0b2c7 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v132-control-plane namespace: kube-system uid: 70e23eea-b017-4c4c-b38b-770fad6d9591 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: 08ab6e72-1221-4c4d-b204-cd645822c9a1 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: 59457e85-4401-4683-8b04-d543d38bb522 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-xkjck namespace: kube-system uid: fe401fcf-e036-4a80-85ce-657da7092395 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v132-control-plane namespace: kube-system uid: eab17d7d-9c97-4268-aea3-b84309c39bb0 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641
- message: Using a mutable image tag e.g.
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-dsrxz namespace: kube-system uid: 7f190298-4895-4e74-81e9-21b9e111d181 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-nvrsb namespace: kube-system uid: 4bf361e6-1e6d-4550-9837-760d66a7d259 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 5e1c6b12-3b86-4260-90e2-120af170cd9b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-64fc886fd4-8ssnr namespace: cnf-default uid: cba19cd1-0b27-4e2d-ba91-01b94b523f66 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 4abeb732-9b87-40ce-a9cd-9c3bc03197ca result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-7dc846544d-ltv2t namespace: local-path-storage uid: d9a228a5-1bbf-4508-816a-88bda52965b0 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'validate-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-644fbcd4b7-w79vl namespace: litmus uid: 1d8ff7a1-7ff2-4660-895f-93f7a4dc264f result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 4be3f335-85f6-4441-a6bc-c1f971e473e4 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1752148641 summary: error: 0 fail: 0 pass: 56 skip: 112 warn: 0 [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" =>
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'latest_tag' emoji: 🏷️ [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'latest_tag' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points: Task: 'latest_tag' type: essential [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'latest_tag' tags: ["configuration", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points: Task: 'latest_tag' type: essential [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.upsert_task-latest_tag: 
Task start time: 2025-07-10 11:57:19 UTC, end time: 2025-07-10 11:57:21 UTC [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.upsert_task-latest_tag: Task: 'latest_tag' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:01.944833987 [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "latest_tag"] for tags: ["configuration", "cert"] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 300, total tasks passed: 3 for tags: ["configuration", "cert"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", 
"zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 300, max tasks passed: 3 for tags: ["configuration", "cert"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for
tag: essential [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1000, total tasks passed: 10 for tags: ["essential"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:57:21] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: 
hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 
11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:21] 
INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => 
"failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, 
{"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", 
"type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 300} [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:21] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:21] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml 
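The `update_yml` dumps above show how the reported score follows from the per-task `items` list: each essential task is worth 100 points, while `failed` and `na` tasks contribute 0, and `na` tasks are also excluded from the maximum (hence "Max points scored: 300" for 3 scoreable tasks earlier in the log). A minimal Python sketch of that arithmetic — the `score` helper is hypothetical, and the real suite is written in Crystal, so this only mirrors the structure of the logged results:

```python
# Subset of the `items` list from the update_yml dump in the log above.
items = [
    {"name": "increase_decrease_capacity", "status": "passed", "type": "essential", "points": 100},
    {"name": "node_drain", "status": "passed", "type": "essential", "points": 100},
    {"name": "non_root_containers", "status": "failed", "type": "essential", "points": 0},
    {"name": "selinux_options", "status": "na", "type": "essential", "points": 0},
    {"name": "latest_tag", "status": "passed", "type": "essential", "points": 100},
]

def score(items):
    """Recompute totals: sum earned points; the maximum assumes 100 points
    per task whose status is not "na" (na tasks are not scoreable)."""
    total = sum(i["points"] for i in items)
    maximum = 100 * sum(1 for i in items if i["status"] != "na")
    failed = [i["name"] for i in items if i["status"] == "failed"]
    return total, maximum, failed

total, maximum, failed = score(items)
print(total, maximum, failed)  # prints: 300 400 ['non_root_containers']
```

With the full 12-item essential list from the dump (10 passed, 1 failed, 1 na), the same computation yields the 300-point "cert" subtotal and the single failed test (`non_root_containers`) reported by the suite.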
[2025-07-10 11:57:21] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:21] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:21] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:21] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [log_output] [2025-07-10 11:57:21] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Task.task_runner.log_output: Starting test [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local 
in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => 
"tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => 
"/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-07-10 11:57:21] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" =>
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:21] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:57:21] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:57:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:21] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:57:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:21] DEBUG -- CNTI-KubectlClient.Utils.logs: Dump logs of Deployment/coredns-coredns ✔️ 🏆PASSED: [log_output] Resources output logs to stdout and stderr 📶☠️ Observability and diagnostics results: 1 of 1 tests passed  Microservice Tests [2025-07-10 11:57:21] INFO -- CNTI-Log lines: [pod/coredns-coredns-64fc886fd4-8ssnr/coredns] .:53 [pod/coredns-coredns-64fc886fd4-8ssnr/coredns] [INFO] plugin/reload: Running configuration MD5 = d8c79061f144bdb41e9378f9aa781f71 [pod/coredns-coredns-64fc886fd4-8ssnr/coredns] CoreDNS-1.7.1 [pod/coredns-coredns-64fc886fd4-8ssnr/coredns] linux/amd64, go1.15.2, aa82ca6 [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-07-10 11:57:21] INFO -- 
CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'log_output' emoji: 📶☠️ [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'log_output' tags: ["observability", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points: Task: 'log_output' type: essential [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'log_output' tags: ["observability", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points: Task: 'log_output' type: essential [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.upsert_task-log_output: Task start time: 2025-07-10 11:57:21 UTC, end time: 2025-07-10 11:57:21 UTC [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.upsert_task-log_output: Task: 'log_output' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.413955918 [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: 
["log_output"] for tags: ["observability", "cert"] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["observability", "cert"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status 
assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["observability", "cert"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["log_output"] for tags: ["observability", "cert"] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["observability", "cert"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["observability", "cert"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", 
"single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1100, total tasks passed: 11 for tags: ["essential"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:57:21] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:57:21] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:21] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: 
[2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:57:21] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 
100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:57:21] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, 
"items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-07-10 11:57:21] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", 
"container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:57:21] INFO -- CNTI-Setup.install_cluster_tools: Installing cluster_tools on the cluster [2025-07-10 11:57:21] INFO -- CNTI: ClusterTools install [2025-07-10 11:57:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces [2025-07-10 11:57:22] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:53:16Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-default", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-default", "resourceVersion" => "4607934", "uid" => "a28fbc48-19a0-42c8-aefd-0afd21dd384f"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:53:02Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "4607738", "uid" => "b2fa8847-c70b-45c1-b28a-8795d685c60d"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "e9c1e555-d421-479b-99da-e15b9e4cbe23"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "142e6851-3c72-4ad7-80c0-9a06f7ec29a7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" 
=> {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "6d5c8018-e87c-4144-ad35-175aba623785"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "5", "uid" => "3623b17d-eebf-47da-aa4d-8431d7e16dcc"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"litmus\"}}\n"}, "creationTimestamp" => "2025-07-10T11:53:50Z", "labels" => {"kubernetes.io/metadata.name" => "litmus", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "litmus", "resourceVersion" => "4608101", "uid" => "ca11e5c7-f3c2-484e-b38d-99c615a016ea"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:24:34Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "323", "uid" => "5e7cc0e6-da8e-4014-a795-6c248d992a2a"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}] [2025-07-10 11:57:22] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file cluster_tools.yml [2025-07-10 11:57:22] INFO -- CNTI: ClusterTools wait_for_cluster_tools [2025-07-10 11:57:22] DEBUG -- 
CNTI-KubectlClient.Get.resource: Get resource namespaces [2025-07-10 11:57:22] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:53:16Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-default", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-default", "resourceVersion" => "4607934", "uid" => "a28fbc48-19a0-42c8-aefd-0afd21dd384f"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-07-10T11:53:02Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "4607738", "uid" => "b2fa8847-c70b-45c1-b28a-8795d685c60d"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "20", "uid" => "e9c1e555-d421-479b-99da-e15b9e4cbe23"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "27", "uid" => "142e6851-3c72-4ad7-80c0-9a06f7ec29a7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "12", "uid" => "6d5c8018-e87c-4144-ad35-175aba623785"}, "spec" => {"finalizers" => ["kubernetes"]}, 
"status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:24:29Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "5", "uid" => "3623b17d-eebf-47da-aa4d-8431d7e16dcc"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"litmus\"}}\n"}, "creationTimestamp" => "2025-07-10T11:53:50Z", "labels" => {"kubernetes.io/metadata.name" => "litmus", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "litmus", "resourceVersion" => "4608101", "uid" => "ca11e5c7-f3c2-484e-b38d-99c615a016ea"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:24:34Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "323", "uid" => "5e7cc0e6-da8e-4014-a795-6c248d992a2a"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}] [2025-07-10 11:57:22] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Daemonset/cluster-tools to install [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Daemonset/cluster-tools [2025-07-10 11:57:22] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Daemonset/cluster-tools is ready [2025-07-10 11:57:22] INFO -- CNTI-Setup.install_cluster_tools: cluster_tools has been installed on the cluster [2025-07-10 11:57:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:22] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:22] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [specialized_init_system] [2025-07-10 11:57:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:22] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.Task.task_runner.specialized_init_system: Starting test [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:57:22] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:22] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:22] DEBUG -- 
CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => 
{"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" 
=> "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:22] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:22] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:22] INFO -- CNTI-specialized_init_system: Checking resource Deployment/coredns-coredns in cnf-default [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-07-10 11:57:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:23] DEBUG -- 
CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-07-10 11:57:23] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:23] INFO -- CNTI-specialized_init_system: Pod count for resource Deployment/coredns-coredns in cnf-default: 1 [2025-07-10 11:57:23] INFO -- CNTI-specialized_init_system: Inspecting pod: {"apiVersion" => "v1", "kind" => "Pod", "metadata" => {"annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}, "creationTimestamp" => "2025-07-10T11:53:32Z", "generateName" => "coredns-coredns-64fc886fd4-", "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns", "pod-template-hash" => "64fc886fd4"}, "name" => "coredns-coredns-64fc886fd4-8ssnr", "namespace" => "cnf-default", "ownerReferences" => [{"apiVersion" => "apps/v1", "blockOwnerDeletion" => true, "controller" => true, "kind" => "ReplicaSet", "name" => "coredns-coredns-64fc886fd4", "uid" => "50ffde37-6976-4ccf-ae0b-f9ffa16501e8"}], "resourceVersion" => "4608063", "uid" => "cba19cd1-0b27-4e2d-ba91-01b94b523f66"}, "spec" => {"containers" => [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], 
"readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-zssnk", "readOnly" => true}]}], "dnsPolicy" => "Default", "enableServiceLinks" => true, "nodeName" => "v132-worker", "preemptionPolicy" => "PreemptLowerPriority", "priority" => 0, "restartPolicy" => "Always", "schedulerName" => "default-scheduler", "securityContext" => {}, "serviceAccount" => "default", "serviceAccountName" => "default", "terminationGracePeriodSeconds" => 30, "tolerations" => [{"effect" => "NoExecute", "key" => "node.kubernetes.io/not-ready", "operator" => "Exists", "tolerationSeconds" => 300}, {"effect" => "NoExecute", "key" => "node.kubernetes.io/unreachable", "operator" => "Exists", "tolerationSeconds" => 300}], "volumes" => [{"configMap" => {"defaultMode" => 420, "items" => [{"key" => "Corefile", "path" => "Corefile"}], "name" => "coredns-coredns"}, "name" => "config-volume"}, {"name" => "kube-api-access-zssnk", "projected" => {"defaultMode" => 420, "sources" => [{"serviceAccountToken" => {"expirationSeconds" => 3607, "path" => "token"}}, {"configMap" => {"items" => [{"key" => "ca.crt", "path" => "ca.crt"}], "name" => "kube-root-ca.crt"}}, {"downwardAPI" => {"items" => [{"fieldRef" => {"apiVersion" => "v1", "fieldPath" => "metadata.namespace"}, "path" => "namespace"}]}}]}}]}, "status" => {"conditions" => [{"lastProbeTime" => nil, "lastTransitionTime" => "2025-07-10T11:53:35Z", "status" => "True", "type" => "PodReadyToStartContainers"}, {"lastProbeTime" 
=> nil, "lastTransitionTime" => "2025-07-10T11:53:32Z", "status" => "True", "type" => "Initialized"}, {"lastProbeTime" => nil, "lastTransitionTime" => "2025-07-10T11:53:46Z", "status" => "True", "type" => "Ready"}, {"lastProbeTime" => nil, "lastTransitionTime" => "2025-07-10T11:53:46Z", "status" => "True", "type" => "ContainersReady"}, {"lastProbeTime" => nil, "lastTransitionTime" => "2025-07-10T11:53:32Z", "status" => "True", "type" => "PodScheduled"}], "containerStatuses" => [{"containerID" => "containerd://9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-07-10T11:53:35Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-zssnk", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}], "hostIP" => "172.24.0.8", "hostIPs" => [{"ip" => "172.24.0.8"}], "phase" => "Running", "podIP" => "10.244.1.247", "podIPs" => [{"ip" => "10.244.1.247"}], "qosClass" => "Guaranteed", "startTime" => "2025-07-10T11:53:32Z"}} [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:57:23] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-8ssnr list: v132-worker [2025-07-10 11:57:23] INFO -- CNTI: parse_container_id container_id: containerd://9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:23] INFO -- CNTI: node_pid_by_container_id container_id: 9dafdd1303f9d3 [2025-07-10 11:57:23] INFO -- CNTI: parse_container_id 
container_id: 9dafdd1303f9d3 [2025-07-10 11:57:23] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:23] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:23] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:23] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:23] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-07-10T11:57:23Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-07-10T11:57:23Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." 
[2025-07-10 11:57:23] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.155.214:53\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": 
\"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": 
\"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\"\n }\n ]\n },\n \"pid\": 2430347,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/2430319/ns/ipc\",\n \"type\": \"ipc\"\n 
},\n {\n \"path\": \"/proc/2430319/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/2430319/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n 
\"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-8ssnr\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.155.214\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"snapshotKey\": 
\"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-07-10T11:53:33.695668221Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-07-10T11:53:35.40234132Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-07-10T11:57:23Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-07-10T11:57:23Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-07-10 11:57:23] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.155.214:53" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.155.214:53" 
}, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34" } ] }, "pid": 2430347, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": 
{ "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/2430319/ns/ipc", "type": "ipc" }, { "path": "/proc/2430319/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/2430319/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ 
"nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", 
"CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-8ssnr", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_HOST=10.96.155.214", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "snapshotKey": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-07-10T11:53:33.695668221Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-07-10T11:53:35.40234132Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-07-10 11:57:23] INFO -- CNTI: node_pid_by_container_id pid: 2430347 [2025-07-10 11:57:23] INFO -- CNTI: cmdline_by_pid [2025-07-10 11:57:23] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:24] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:24] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:24] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs pod/coredns-coredns-64fc886fd4-8ssnr has container 'coredns' with /coredns as init process ✖️ 🏆FAILED: [specialized_init_system] Containers do not use specialized init systems (ভ_ভ) ރ 🚀 [2025-07-10 11:57:24] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:57:24] INFO -- CNTI: 
cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:57:24] INFO -- CNTI-InitSystems.scan: pod/coredns-coredns-64fc886fd4-8ssnr has container 'coredns' with /coredns as init process [2025-07-10 11:57:24] INFO -- CNTI-specialized_init_system: Pod scan result: [InitSystems::InitSystemInfo(@kind="pod", @namespace="cnf-default", @name="coredns-coredns-64fc886fd4-8ssnr", @container="coredns", @init_cmd="/coredns")] [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'specialized_init_system' emoji: 🚀 [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'specialized_init_system' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Points: Task: 'specialized_init_system' type: essential [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 0 points [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'specialized_init_system' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Points: Task: 'specialized_init_system' type: essential [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Points.upsert_task-specialized_init_system: Task start time: 2025-07-10 11:57:22 UTC, end time: 2025-07-10 11:57:24 UTC [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.Points.upsert_task-specialized_init_system: Task: 'specialized_init_system' has status: 'failed' and is awarded: 0 points. Runtime: 00:00:01.782301351 [2025-07-10 11:57:24] DEBUG --
CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:24] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:24] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:24] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:24] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:24] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [single_process_type] [2025-07-10 11:57:24] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:24] INFO -- CNTI-CNFManager.Task.task_runner.single_process_type: Starting test [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:24] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", 
"app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", 
"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 
8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [identical manifest list as above, omitted] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [identical manifest list as above, omitted] [2025-07-10 11:57:24] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:24] INFO -- CNTI: Constructed resource_named_tuple: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-07-10 11:57:24] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:24] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:24] INFO -- CNTI: container_statuses: [{"containerID" => "containerd://9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-07-10T11:53:35Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => 
"kube-api-access-zssnk", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-07-10 11:57:24] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:57:24] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-8ssnr list: v132-worker [2025-07-10 11:57:24] INFO -- CNTI: nodes_by_resource done [2025-07-10 11:57:24] INFO -- CNTI: before ready containerStatuses container_id 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:24] INFO -- CNTI: containerStatuses container_id 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:24] INFO -- CNTI: node_pid_by_container_id container_id: 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:24] INFO -- CNTI: parse_container_id container_id: 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:24] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:25] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:25] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:25] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:25] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-07-10T11:57:25Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-07-10T11:57:25Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock 
unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-07-10 11:57:25] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.155.214:53\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": 
\"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n 
\"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\"\n }\n ]\n },\n \"pid\": 2430347,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n 
\"path\": \"/proc/2430319/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/2430319/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/2430319/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n 
\"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n 
\"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-8ssnr\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.155.214\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"snapshotKey\": 
\"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-07-10T11:53:33.695668221Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-07-10T11:53:35.40234132Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-07-10T11:57:25Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-07-10T11:57:25Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-07-10 11:57:25] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.155.214:53" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.155.214:53" 
}, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34" } ] }, "pid": 2430347, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": 
{ "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/2430319/ns/ipc", "type": "ipc" }, { "path": "/proc/2430319/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/2430319/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ 
"nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", 
"CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-8ssnr", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_HOST=10.96.155.214", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "snapshotKey": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-07-10T11:53:33.695668221Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-07-10T11:53:35.40234132Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-07-10 11:57:25] INFO -- CNTI: node_pid_by_container_id pid: 2430347 [2025-07-10 11:57:25] INFO -- CNTI: node pid (should never be pid 1): 2430347 [2025-07-10 11:57:25] INFO -- CNTI: node name : v132-worker [2025-07-10 11:57:25] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:25] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:25] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:25] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:25] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:25] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "2430347\n", error: ""} [2025-07-10 11:57:25] INFO -- CNTI: parsed pids: ["2430347"] [2025-07-10 11:57:25] INFO -- CNTI: all_statuses_by_pids [2025-07-10 11:57:25] INFO -- CNTI: all_statuses_by_pids pid: 
2430347 [2025-07-10 11:57:25] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:25] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:25] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:25] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:25] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:26] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41200 kB\nVmRSS:\t 41200 kB\nRssAnon:\t 13000 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t3/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1710\nnonvoluntary_ctxt_switches:\t15\n", error: ""} [2025-07-10 11:57:26] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41200 kB\nVmRSS:\t 41200 kB\nRssAnon:\t 13000 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t3/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1710\nnonvoluntary_ctxt_switches:\t15\n"] [2025-07-10 11:57:26] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 2430347 [2025-07-10 11:57:26] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41200 kB\nVmRSS:\t 41200 kB\nRssAnon:\t 13000 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t3/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1710\nnonvoluntary_ctxt_switches:\t15\n"] [2025-07-10 11:57:26] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 2430347 Ngid: 0 Pid: 2430347 PPid: 2430296 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 2430347 1 NSpid: 2430347 1 NSpgid: 2430347 1 NSsid: 2430347 1 VmPeak: 748748 kB VmSize: 748748 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 41200 kB VmRSS: 41200 kB RssAnon: 13000 kB RssFile: 28200 kB RssShmem: 0 kB VmData: 108936 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 200 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 23 SigQ: 3/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 1710 nonvoluntary_ctxt_switches: 15 [2025-07-10 11:57:26] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", 
"Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430347", "Ngid" => "0", "Pid" => "2430347", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430347\t1", "NSpid" => "2430347\t1", "NSpgid" => "2430347\t1", "NSsid" => "2430347\t1", "VmPeak" => "748748 kB", "VmSize" => "748748 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "41200 kB", "VmRSS" => "41200 kB", "RssAnon" => "13000 kB", "RssFile" => "28200 kB", "RssShmem" => "0 kB", "VmData" => "108936 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "3/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1710", "nonvoluntary_ctxt_switches" => "15"} [2025-07-10 11:57:26] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:26] INFO -- CNTI: cmdline_by_pid [2025-07-10 11:57:26] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: 
Creating list of pods found on nodes [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:26] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:26] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:26] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs ✔️ 🏆PASSED: [single_process_type] Only one process type used ⚖👀 [2025-07-10 11:57:26] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:57:26] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:57:26] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-07-10 11:57:26] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430347", "Ngid" => "0", "Pid" => "2430347", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430347\t1", "NSpid" => "2430347\t1", "NSpgid" => "2430347\t1", "NSsid" => "2430347\t1", "VmPeak" => "748748 kB", "VmSize" => "748748 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "41200 kB", "VmRSS" => "41200 kB", "RssAnon" => "13000 kB", "RssFile" => "28200 kB", "RssShmem" => "0 kB", "VmData" => "108936 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "3/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", 
"CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1710", "nonvoluntary_ctxt_switches" => "15", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}] [2025-07-10 11:57:26] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:26] INFO -- CNTI-single_process_type: status name: coredns [2025-07-10 11:57:26] INFO -- CNTI-single_process_type: previous status name: initial_name [2025-07-10 11:57:26] INFO -- CNTI: container_status_result.all?(true): false [2025-07-10 11:57:26] INFO -- CNTI: pod_resp.all?(true): false [2025-07-10 11:57:26] INFO -- CNTI: Constructed resource_named_tuple: {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"} [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'single_process_type' emoji: ⚖👀 [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'single_process_type' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Points: Task: 'single_process_type' type: essential [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'single_process_type' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Points: 
Task: 'single_process_type' type: essential [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Points.upsert_task-single_process_type: Task start time: 2025-07-10 11:57:24 UTC, end time: 2025-07-10 11:57:26 UTC [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.Points.upsert_task-single_process_type: Task: 'single_process_type' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:02.180978914 [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:26] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:57:26] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:57:26] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:57:26] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:57:26] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [zombie_handled] [2025-07-10 11:57:26] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.Task.task_runner.zombie_handled: Starting test [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:26] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: 
installed_cnf_files/common_manifest.yml [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => 
"default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", 
"image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" =>
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:26] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:26] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-07-10 11:57:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => 
"coredns", "k8s-app" => "coredns"} [2025-07-10 11:57:27] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:27] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:27] INFO -- CNTI: container_statuses: [{"containerID" => "containerd://9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-07-10T11:53:35Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-zssnk", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-07-10 11:57:27] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:57:27] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-8ssnr list: v132-worker [2025-07-10 11:57:27] INFO -- CNTI: nodes_by_resource done [2025-07-10 11:57:27] INFO -- CNTI: before ready containerStatuses container_id 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:27] INFO -- CNTI: containerStatuses container_id 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:27] INFO -- CNTI: node_pid_by_container_id container_id: 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:27] INFO -- CNTI: parse_container_id container_id: 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:27] 
INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:27] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-07-10T11:57:27Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-07-10T11:57:27Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-07-10 11:57:27] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.155.214:53\"\n },\n 
{\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n 
\"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\"\n }\n ]\n },\n \"pid\": 2430347,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": 
\"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/2430319/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/2430319/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/2430319/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n 
],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-8ssnr\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.155.214\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n 
\"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"snapshotKey\": \"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-07-10T11:53:33.695668221Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": 
\"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n 
\"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-07-10T11:53:35.40234132Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-07-10T11:57:27Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-07-10T11:57:27Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-07-10 11:57:27] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.155.214:53" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": 
"COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, 
"seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34" } ] }, "pid": 2430347, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/2430319/ns/ipc", "type": "ipc" }, { "path": "/proc/2430319/ns/uts", "type": "uts" }, { "type": "mount" }, { 
"path": "/proc/2430319/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", 
"rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-8ssnr", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_HOST=10.96.155.214", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", 
"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "snapshotKey": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-07-10T11:53:33.695668221Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "logPath": 
"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-07-10T11:53:35.40234132Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-07-10 11:57:27] INFO -- CNTI: node_pid_by_container_id pid: 2430347 [2025-07-10 11:57:27] INFO -- CNTI: node pid (should never be pid 1): 2430347 
[2025-07-10 11:57:27] INFO -- CNTI: node name : v132-worker [2025-07-10 11:57:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:28] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:57:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:28] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:57:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:31] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: 
"Sleeping...\n", error: ""} [2025-07-10 11:57:31] INFO -- CNTI: container_status_result.all?(true): false [2025-07-10 11:57:31] INFO -- CNTI: pod_resp.all?(true): false [2025-07-10 11:57:31] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:57:31] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: false [2025-07-10 11:57:41] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:57:41] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:57:41] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same manifest list as logged above for kind: Deployment] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same manifest list as logged above for kind: Deployment] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same manifest list as logged above for kind: Deployment] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same manifest list as logged above for kind: Deployment] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same manifest list as logged above for kind: Deployment] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:57:41] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:41] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:57:41] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-07-10 11:57:41] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:41] INFO -- CNTI: pod_name: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:41] INFO -- CNTI: container_statuses: [{"containerID" => "containerd://9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-07-10T11:53:35Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-zssnk", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-07-10 11:57:41] INFO -- CNTI: pod_name: 
coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:57:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:57:42] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-8ssnr list: v132-worker [2025-07-10 11:57:42] INFO -- CNTI: nodes_by_resource done [2025-07-10 11:57:42] INFO -- CNTI: before ready containerStatuses container_id 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:42] INFO -- CNTI: containerStatuses container_id 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:42] INFO -- CNTI: node_pid_by_container_id container_id: 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:42] INFO -- CNTI: parse_container_id container_id: 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:42] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:42] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:42] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:42] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:42] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:42] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:42] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-07-10T11:57:42Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-07-10T11:57:42Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead." [2025-07-10 11:57:42] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.155.214:53\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": 
\"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": 
\"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\"\n }\n ]\n },\n \"pid\": 2430347,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/2430319/ns/ipc\",\n \"type\": \"ipc\"\n 
},\n {\n \"path\": \"/proc/2430319/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/2430319/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n 
\"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-8ssnr\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.155.214\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"snapshotKey\": 
\"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-07-10T11:53:33.695668221Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-07-10T11:53:35.40234132Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-07-10T11:57:42Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-07-10T11:57:42Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-07-10 11:57:42] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.155.214:53" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.155.214:53" 
}, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34" } ] }, "pid": 2430347, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": 
{ "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/2430319/ns/ipc", "type": "ipc" }, { "path": "/proc/2430319/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/2430319/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ 
"nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", 
"CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-8ssnr", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_HOST=10.96.155.214", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "snapshotKey": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-07-10T11:53:33.695668221Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/414a0b34", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-07-10T11:53:35.40234132Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-07-10 11:57:42] INFO -- CNTI: node_pid_by_container_id pid: 2430347 [2025-07-10 11:57:42] INFO -- CNTI: node pid (should never be pid 1): 2430347 [2025-07-10 11:57:42] INFO -- CNTI: node name : v132-worker [2025-07-10 11:57:42] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:42] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:42] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:42] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:42] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:42] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:42] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "2430347\n2431758\n", error: ""} [2025-07-10 11:57:42] INFO -- CNTI: parsed pids: ["2430347", "2431758"] [2025-07-10 11:57:42] INFO -- CNTI: all_statuses_by_pids [2025-07-10 11:57:42] INFO -- CNTI: 
all_statuses_by_pids pid: 2430347 [2025-07-10 11:57:42] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:42] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:42] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:43] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:43] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:43] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:43] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41436 kB\nVmRSS:\t 41436 kB\nRssAnon:\t 13236 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1817\nnonvoluntary_ctxt_switches:\t15\n", error: ""} [2025-07-10 11:57:43] INFO -- CNTI: all_statuses_by_pids pid: 2431758 [2025-07-10 11:57:43] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:43] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:43] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:43] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:43] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:43] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:43] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t2431758\nNgid:\t0\nPid:\t2431758\nPPid:\t2430347\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t2431758\t42\nNSpid:\t2431758\t42\nNSpgid:\t2431752\t36\nNSsid:\t2431752\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:57:43] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41436 kB\nVmRSS:\t 41436 kB\nRssAnon:\t 13236 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1817\nnonvoluntary_ctxt_switches:\t15\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t2431758\nNgid:\t0\nPid:\t2431758\nPPid:\t2430347\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t2431758\t42\nNSpid:\t2431758\t42\nNSpgid:\t2431752\t36\nNSsid:\t2431752\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-07-10 11:57:43] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 2430347 [2025-07-10 11:57:43] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 
\nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41436 kB\nVmRSS:\t 41436 kB\nRssAnon:\t 13236 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1817\nnonvoluntary_ctxt_switches:\t15\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t2431758\nNgid:\t0\nPid:\t2431758\nPPid:\t2430347\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t2431758\t42\nNSpid:\t2431758\t42\nNSpgid:\t2431752\t36\nNSsid:\t2431752\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread 
vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-07-10 11:57:43] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 2430347 Ngid: 0 Pid: 2430347 PPid: 2430296 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 2430347 1 NSpid: 2430347 1 NSpgid: 2430347 1 NSsid: 2430347 1 VmPeak: 748748 kB VmSize: 748748 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 41436 kB VmRSS: 41436 kB RssAnon: 13236 kB RssFile: 28200 kB RssShmem: 0 kB VmData: 108936 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 200 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 23 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 1817 nonvoluntary_ctxt_switches: 15 [2025-07-10 11:57:43] DEBUG -- 
CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430347", "Ngid" => "0", "Pid" => "2430347", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430347\t1", "NSpid" => "2430347\t1", "NSpgid" => "2430347\t1", "NSsid" => "2430347\t1", "VmPeak" => "748748 kB", "VmSize" => "748748 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "41436 kB", "VmRSS" => "41436 kB", "RssAnon" => "13236 kB", "RssFile" => "28200 kB", "RssShmem" => "0 kB", "VmData" => "108936 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1817", "nonvoluntary_ctxt_switches" => "15"} [2025-07-10 11:57:43] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:43] INFO -- CNTI: cmdline_by_pid [2025-07-10 11:57:43] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 
11:57:43] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:43] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:43] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:43] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:43] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:44] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:57:44] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-07-10 11:57:44] DEBUG -- CNTI: parse_status status_output: Name: sleep State: Z (zombie) Tgid: 2431758 Ngid: 0 Pid: 2431758 PPid: 2430347 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 0 Groups: 0 NStgid: 2431758 42 NSpid: 2431758 42 NSpgid: 2431752 36 NSsid: 2431752 36 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000001000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2 
nonvoluntary_ctxt_switches: 0 [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "2431758", "Ngid" => "0", "Pid" => "2431758", "PPid" => "2430347", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "2431758\t42", "NSpid" => "2431758\t42", "NSpgid" => "2431752\t36", "NSsid" => "2431752\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: proctree_by_pid ppid == pid && ppid != current_pid [2025-07-10 11:57:44] INFO -- CNTI: cmdline_by_pid [2025-07-10 11:57:44] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:44] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:44] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: 
cluster-tools-xv7rs [2025-07-10 11:57:44] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:44] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:44] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:57:44] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: Matched descendent cmdline [2025-07-10 11:57:44] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 2431758 [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430347\nNgid:\t0\nPid:\t2430347\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430347\t1\nNSpid:\t2430347\t1\nNSpgid:\t2430347\t1\nNSsid:\t2430347\t1\nVmPeak:\t 748748 kB\nVmSize:\t 748748 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 41436 kB\nVmRSS:\t 41436 kB\nRssAnon:\t 13236 kB\nRssFile:\t 28200 kB\nRssShmem:\t 0 kB\nVmData:\t 108936 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 200 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1817\nnonvoluntary_ctxt_switches:\t15\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t2431758\nNgid:\t0\nPid:\t2431758\nPPid:\t2430347\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t2431758\t42\nNSpid:\t2431758\t42\nNSpgid:\t2431752\t36\nNSsid:\t2431752\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-07-10 11:57:44] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 2430347 Ngid: 0 Pid: 2430347 PPid: 2430296 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 2430347 1 NSpid: 2430347 1 NSpgid: 2430347 1 NSsid: 2430347 1 VmPeak: 748748 kB VmSize: 748748 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 41436 kB 
VmRSS: 41436 kB RssAnon: 13236 kB RssFile: 28200 kB RssShmem: 0 kB VmData: 108936 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 200 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 23 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 1817 nonvoluntary_ctxt_switches: 15 [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430347", "Ngid" => "0", "Pid" => "2430347", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430347\t1", "NSpid" => "2430347\t1", "NSpgid" => "2430347\t1", "NSsid" => "2430347\t1", "VmPeak" => "748748 kB", "VmSize" => "748748 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "41436 kB", "VmRSS" => "41436 kB", "RssAnon" => "13236 kB", "RssFile" => "28200 kB", "RssShmem" => "0 kB", "VmData" => "108936 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" 
=> "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1817", "nonvoluntary_ctxt_switches" => "15"} [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:44] DEBUG -- CNTI: parse_status status_output: Name: sleep State: Z (zombie) Tgid: 2431758 Ngid: 0 Pid: 2431758 PPid: 2430347 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 0 Groups: 0 NStgid: 2431758 42 NSpid: 2431758 42 NSpgid: 2431752 36 NSsid: 2431752 36 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000001000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "2431758", "Ngid" => "0", "Pid" => "2431758", "PPid" => "2430347", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "2431758\t42", "NSpid" => "2431758\t42", "NSpgid" => "2431752\t36", "NSsid" => "2431752\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:44] INFO -- CNTI: cmdline_by_pid [2025-07-10 11:57:44] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:44] DEBUG -- 
CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:44] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:44] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:44] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs Process sleep in container 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 of pod coredns-coredns-64fc886fd4-8ssnr has a state of Z (zombie) [2025-07-10 11:57:44] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:57:44] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "2431758", "Ngid" => "0", "Pid" => "2431758", "PPid" => "2430347", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "2431758\t42", "NSpid" => "2431758\t42", "NSpgid" => "2431752\t36", "NSsid" => "2431752\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""}] [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430347", "Ngid" => "0", "Pid" => "2430347", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430347\t1", "NSpid" => "2430347\t1", "NSpgid" => "2430347\t1", "NSsid" => "2430347\t1", "VmPeak" => "748748 kB", "VmSize" => "748748 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "41436 kB", "VmRSS" => "41436 kB", "RssAnon" => "13236 kB", "RssFile" => "28200 kB", "RssShmem" => "0 kB", "VmData" => "108936 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1817", "nonvoluntary_ctxt_switches" => "15", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}, {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "2431758", "Ngid" => "0", "Pid" => "2431758", "PPid" => "2430347", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "2431758\t42", "NSpid" => "2431758\t42", "NSpgid" => "2431752\t36", "NSsid" => "2431752\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""}] [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:44] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:57:44] DEBUG -- CNTI-zombie_handled: status: 
{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430347", "Ngid" => "0", "Pid" => "2430347", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430347\t1", "NSpid" => "2430347\t1", "NSpgid" => "2430347\t1", "NSsid" => "2430347\t1", "VmPeak" => "748748 kB", "VmSize" => "748748 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "41436 kB", "VmRSS" => "41436 kB", "RssAnon" => "13236 kB", "RssFile" => "28200 kB", "RssShmem" => "0 kB", "VmData" => "108936 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "200 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1817", "nonvoluntary_ctxt_switches" => "15", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"} [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: status cmdline: /coredns-conf/etc/coredns/Corefile [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: 
pid: 2430347 [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: status name: coredns [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: state: S (sleeping) [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: (state =~ /zombie/): [2025-07-10 11:57:44] DEBUG -- CNTI-zombie_handled: status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "2431758", "Ngid" => "0", "Pid" => "2431758", "PPid" => "2430347", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "2431758\t42", "NSpid" => "2431758\t42", "NSpgid" => "2431752\t36", "NSsid" => "2431752\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""} [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: status cmdline: [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: pid: 2431758 [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: status name: sleep [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: state: Z (zombie) [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: (state =~ /zombie/): 3 
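The `parse_status` / zombie-detection sequence logged above (splitting the `/proc/<pid>/status` text into a key/value map, then matching the `State` field against `zombie`) can be sketched roughly as follows. This is an illustrative Python sketch, not the testsuite's actual Crystal implementation; the function names are assumptions:

```python
def parse_status(status_output: str) -> dict:
    """Parse /proc/<pid>/status-style text into a key/value map,
    mirroring the parsed_status hashes shown in the log above."""
    parsed = {}
    for line in status_output.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            parsed[key.strip()] = value.strip()
    return parsed


def is_zombie(parsed: dict) -> bool:
    # The logged check boils down to (state =~ /zombie/),
    # matching "Z (zombie)" but not "S (sleeping)".
    return "zombie" in parsed.get("State", "")


# The zombie `sleep` child of coredns from the log, abbreviated.
zombie_status = "Name:\tsleep\nState:\tZ (zombie)\nPid:\t2431758\nPPid:\t2430347"
```

Running `is_zombie(parse_status(zombie_status))` on the abbreviated dump flags the `sleep` process, while the parent `coredns` process in state `S (sleeping)` passes.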
[2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: zombies.all?(nil): false [2025-07-10 11:57:44] INFO -- CNTI: container_status_result.all?(true): false [2025-07-10 11:57:44] INFO -- CNTI: pod_resp.all?(true): false [2025-07-10 11:57:44] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:57:44] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: false [2025-07-10 11:57:44] INFO -- CNTI-zombie_handled: Shutting down container 9dafdd1303f9d30a3e12db4cb9f53b0f49ad9edfccfc729d35f28a7b8b8ce0c2 [2025-07-10 11:57:44] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:57:44] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:57:44] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:57:44] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:57:44] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:57:44] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:57:45] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-07-10 11:58:05] INFO -- CNTI-zombie_handled: Waiting for pod coredns-coredns-64fc886fd4-8ssnr in namespace cnf-default to become Ready... 
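The verdict roll-up visible in the log (`zombies.all?(nil): false`, then `container_status_result.all?(true): false`, then `pod_resp.all?(true): false`) aggregates per-process zombie checks up to a pod-level pass/fail: one unreaped zombie in any container's process tree fails the whole `zombie_handled` test. A minimal sketch of that aggregation, in illustrative Python rather than the testsuite's Crystal:

```python
def zombies_handled(containers: list) -> bool:
    """Roll up the zombie check the way the log above does: a container's
    process tree passes only if no process in it is in a zombie state, and
    the pod passes only if every container passes."""
    container_results = []
    for proctree in containers:  # one list of parsed status maps per container
        has_zombie = any("zombie" in proc.get("State", "") for proc in proctree)
        container_results.append(not has_zombie)
    return all(container_results)


# The coredns pod above: a sleeping parent plus an unreaped zombie `sleep`
# child, which is why zombie_handled is scored as failed with 0 points.
coredns_containers = [[
    {"Name": "coredns", "State": "S (sleeping)"},
    {"Name": "sleep", "State": "Z (zombie)"},
]]
```

With this input `zombies_handled(coredns_containers)` is false, matching the `FAILED: [zombie_handled]` outcome reported below.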
[2025-07-10 11:58:05] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: Waiting for pod/coredns-coredns-64fc886fd4-8ssnr to be available [2025-07-10 11:58:05] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: seconds elapsed while waiting: 0 [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource pod/coredns-coredns-64fc886fd4-8ssnr is ready [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.pod_status: Get status of pod/coredns-coredns-64fc886fd4-8ssnr* with field selector: [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods ✖️ 🏆FAILED: [zombie_handled] Zombie not handled ⚖👀 [2025-07-10 11:58:08] INFO -- CNTI-KubectlClient.Get.pod_status: 'Ready' pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'zombie_handled' emoji: ⚖👀 [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'zombie_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Points: Task: 'zombie_handled' type: essential [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 0 points [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'zombie_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Points: Task: 'zombie_handled' type: essential [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Points.upsert_task-zombie_handled: Task start time: 2025-07-10 11:57:26 UTC, end time: 2025-07-10 11:58:08 UTC [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.Points.upsert_task-zombie_handled: Task: 'zombie_handled' has status: 'failed' and is awarded: 0 points.Runtime: 00:00:41.944008165 [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:58:08] DEBUG -- CNTI: find command: find 
installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:58:08] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:58:08] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:58:08] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:58:08] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [sig_term_handled] [2025-07-10 11:58:08] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.Task.task_runner.sig_term_handled: Starting test [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:58:08] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, 
"namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", 
"app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => 
"HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:58:08] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:58:08] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-07-10 11:58:08] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:09] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-07-10 11:58:09] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:58:09] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => 
"coredns", "k8s-app" => "coredns"} [2025-07-10 11:58:09] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:58:09] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: Waiting for pod/coredns-coredns-64fc886fd4-8ssnr to be available [2025-07-10 11:58:09] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: seconds elapsed while waiting: 0 [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource pod/coredns-coredns-64fc886fd4-8ssnr is ready [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.pod_status: Get status of pod/coredns-coredns-64fc886fd4-8ssnr* with field selector: [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:12] INFO -- CNTI-KubectlClient.Get.pod_status: 'Ready' pods: coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-64fc886fd4-8ssnr [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-07-10 11:58:12] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-64fc886fd4-8ssnr list: v132-worker [2025-07-10 11:58:12] INFO -- CNTI: node_pid_by_container_id container_id: containerd://a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5 [2025-07-10 11:58:12] INFO -- CNTI: parse_container_id container_id: containerd://a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5 [2025-07-10 11:58:12] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:12] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:12] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:12] INFO 
-- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:12] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-07-10T11:58:12Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-07-10T11:58:12Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-07-10 11:58:12] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"1\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.155.214:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": 
\"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.155.214:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.155.214\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n 
\"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/1.log\",\n \"metadata\": {\n \"attempt\": 1,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/32f87504\"\n }\n ]\n },\n \"pid\": 2431990,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": 
\"kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/2430319/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/2430319/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/2430319/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n 
\"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/32f87504\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n 
\"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-64fc886fd4-8ssnr\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_PORT=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.155.214\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n 
\"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7\",\n \"snapshotKey\": \"a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"1\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-07-10T11:57:45.634632038Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-64fc886fd4-8ssnr\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"cba19cd1-0b27-4e2d-ba91-01b94b523f66\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/1.log\",\n \"message\": 
\"\",\n \"metadata\": {\n \"attempt\": 1,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/32f87504\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-07-10T11:57:47.19787709Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n 
\"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-07-10T11:58:12Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-07-10T11:58:12Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-07-10 11:58:12] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "1", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.155.214:53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.155.214:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.155.214" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, 
{ "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.155.214" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.155.214" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/1.log", "metadata": { "attempt": 1, "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", 
"readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/32f87504" } ] }, "pid": 2431990, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "io.kubernetes.cri.sandbox-name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-podcba19cd1_0b27_4e2d_ba91_01b94b523f66.slice:cri-containerd:a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/2430319/ns/ipc", "type": "ipc" }, { "path": "/proc/2430319/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/2430319/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ 
"nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/32f87504", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", 
"rprivate", "ro" ], "source": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-64fc886fd4-8ssnr", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.155.214:53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_PORT=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.155.214:53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.155.214", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_HOST=10.96.155.214", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.155.214", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", 
"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "fafd9c641a24bbb0e0b97dfbe356a72f77dd874da97dfcdd3943435907d872e7", "snapshotKey": "a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "1", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-07-10T11:57:45.634632038Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "a6fbd161a80aa8213c4467ab17a539d61ed269a5d7b6c2c67e4c0c5ab326e5f5", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-64fc886fd4-8ssnr", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "cba19cd1-0b27-4e2d-ba91-01b94b523f66" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-64fc886fd4-8ssnr_cba19cd1-0b27-4e2d-ba91-01b94b523f66/coredns/1.log", "message": "", "metadata": { "attempt": 1, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, 
"uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/volumes/kubernetes.io~projected/kube-api-access-zssnk", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/cba19cd1-0b27-4e2d-ba91-01b94b523f66/containers/coredns/32f87504", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-07-10T11:57:47.19787709Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-07-10 11:58:12] INFO -- CNTI: node_pid_by_container_id pid: 2431990 [2025-07-10 11:58:12] INFO -- CNTI: pids [2025-07-10 11:58:12] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:12] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:13] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:13] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:13] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in 
pod cluster-tools-xv7rs [2025-07-10 11:58:13] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "1\n174\n1757\n196\n2429956\n2429982\n2430009\n2430296\n2430319\n2430647\n2430672\n2430700\n2431990\n2432173\n311\n399\n401\n447\n454\n507\n696\n804\n829\n863\nacpi\nbootconfig\nbuddyinfo\nbus\ncgroups\ncmdline\nconsoles\ncpuinfo\ncrypto\ndevices\ndiskstats\ndma\ndriver\ndynamic_debug\nexecdomains\nfb\nfilesystems\nfs\ninterrupts\niomem\nioports\nirq\nkallsyms\nkcore\nkey-users\nkeys\nkmsg\nkpagecgroup\nkpagecount\nkpageflags\nloadavg\nlocks\nmdstat\nmeminfo\nmisc\nmodules\nmounts\nmtrr\nnet\npagetypeinfo\npartitions\npressure\nschedstat\nscsi\nself\nslabinfo\nsoftirqs\nstat\nswaps\nsys\nsysrq-trigger\nsysvipc\nthread-self\ntimer_list\ntty\nuptime\nversion\nversion_signature\nvmallocinfo\nvmstat\nzoneinfo\n", error: ""} [2025-07-10 11:58:13] INFO -- CNTI: pids ls_proc: {status: Process::Status[0], output: "1\n174\n1757\n196\n2429956\n2429982\n2430009\n2430296\n2430319\n2430647\n2430672\n2430700\n2431990\n2432173\n311\n399\n401\n447\n454\n507\n696\n804\n829\n863\nacpi\nbootconfig\nbuddyinfo\nbus\ncgroups\ncmdline\nconsoles\ncpuinfo\ncrypto\ndevices\ndiskstats\ndma\ndriver\ndynamic_debug\nexecdomains\nfb\nfilesystems\nfs\ninterrupts\niomem\nioports\nirq\nkallsyms\nkcore\nkey-users\nkeys\nkmsg\nkpagecgroup\nkpagecount\nkpageflags\nloadavg\nlocks\nmdstat\nmeminfo\nmisc\nmodules\nmounts\nmtrr\nnet\npagetypeinfo\npartitions\npressure\nschedstat\nscsi\nself\nslabinfo\nsoftirqs\nstat\nswaps\nsys\nsysrq-trigger\nsysvipc\nthread-self\ntimer_list\ntty\nuptime\nversion\nversion_signature\nvmallocinfo\nvmstat\nzoneinfo\n", error: ""} [2025-07-10 11:58:13] DEBUG -- CNTI: parse_ls ls: 1 174 1757 196 2429956 2429982 2430009 2430296 2430319 2430647 2430672 2430700 2431990 2432173 311 399 401 447 454 507 696 804 829 863 acpi bootconfig buddyinfo bus cgroups cmdline consoles cpuinfo crypto devices diskstats dma driver dynamic_debug execdomains fb filesystems fs 
interrupts iomem ioports irq kallsyms kcore key-users keys kmsg kpagecgroup kpagecount kpageflags loadavg locks mdstat meminfo misc modules mounts mtrr net pagetypeinfo partitions pressure schedstat scsi self slabinfo softirqs stat swaps sys sysrq-trigger sysvipc thread-self timer_list tty uptime version version_signature vmallocinfo vmstat zoneinfo [2025-07-10 11:58:13] DEBUG -- CNTI: parse_ls parsed: ["1", "174", "1757", "196", "2429956", "2429982", "2430009", "2430296", "2430319", "2430647", "2430672", "2430700", "2431990", "2432173", "311", "399", "401", "447", "454", "507", "696", "804", "829", "863", "acpi", "bootconfig", "buddyinfo", "bus", "cgroups", "cmdline", "consoles", "cpuinfo", "crypto", "devices", "diskstats", "dma", "driver", "dynamic_debug", "execdomains", "fb", "filesystems", "fs", "interrupts", "iomem", "ioports", "irq", "kallsyms", "kcore", "key-users", "keys", "kmsg", "kpagecgroup", "kpagecount", "kpageflags", "loadavg", "locks", "mdstat", "meminfo", "misc", "modules", "mounts", "mtrr", "net", "pagetypeinfo", "partitions", "pressure", "schedstat", "scsi", "self", "slabinfo", "softirqs", "stat", "swaps", "sys", "sysrq-trigger", "sysvipc", "thread-self", "timer_list", "tty", "uptime", "version", "version_signature", "vmallocinfo", "vmstat", "zoneinfo"] [2025-07-10 11:58:13] DEBUG -- CNTI: pids_from_ls_proc ls: ["1", "174", "1757", "196", "2429956", "2429982", "2430009", "2430296", "2430319", "2430647", "2430672", "2430700", "2431990", "2432173", "311", "399", "401", "447", "454", "507", "696", "804", "829", "863", "acpi", "bootconfig", "buddyinfo", "bus", "cgroups", "cmdline", "consoles", "cpuinfo", "crypto", "devices", "diskstats", "dma", "driver", "dynamic_debug", "execdomains", "fb", "filesystems", "fs", "interrupts", "iomem", "ioports", "irq", "kallsyms", "kcore", "key-users", "keys", "kmsg", "kpagecgroup", "kpagecount", "kpageflags", "loadavg", "locks", "mdstat", "meminfo", "misc", "modules", "mounts", "mtrr", "net", "pagetypeinfo", 
"partitions", "pressure", "schedstat", "scsi", "self", "slabinfo", "softirqs", "stat", "swaps", "sys", "sysrq-trigger", "sysvipc", "thread-self", "timer_list", "tty", "uptime", "version", "version_signature", "vmallocinfo", "vmstat", "zoneinfo"] [2025-07-10 11:58:13] DEBUG -- CNTI: pids_from_ls_proc pids: ["1", "174", "1757", "196", "2429956", "2429982", "2430009", "2430296", "2430319", "2430647", "2430672", "2430700", "2431990", "2432173", "311", "399", "401", "447", "454", "507", "696", "804", "829", "863"] [2025-07-10 11:58:13] INFO -- CNTI: all_statuses_by_pids [2025-07-10 11:58:13] INFO -- CNTI: all_statuses_by_pids pid: 1 [2025-07-10 11:58:13] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:13] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:13] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:13] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:13] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:13] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:13] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 34384 kB\nVmSize:\t 34384 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 25732 kB\nVmRSS:\t 25732 kB\nRssAnon:\t 17128 kB\nRssFile:\t 8604 kB\nRssShmem:\t 0 kB\nVmData:\t 16376 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 104 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t663760\nnonvoluntary_ctxt_switches:\t29775\n", error: ""} [2025-07-10 11:58:13] INFO -- CNTI: all_statuses_by_pids pid: 174 [2025-07-10 11:58:13] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:13] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:13] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:13] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:13] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:13] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:14] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t174\nNgid:\t564957\nPid:\t174\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t174\nNSpid:\t174\nNSpgid:\t174\nNSsid:\t174\nVmPeak:\t 430536 kB\nVmSize:\t 422496 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 291312 
kB\nVmRSS:\t 279928 kB\nRssAnon:\t 1132 kB\nRssFile:\t 6940 kB\nRssShmem:\t 271856 kB\nVmData:\t 8964 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 732 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000100000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t482297\nnonvoluntary_ctxt_switches:\t1269\n", error: ""} [2025-07-10 11:58:14] INFO -- CNTI: all_statuses_by_pids pid: 1757 [2025-07-10 11:58:14] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:14] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:14] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:14] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:14] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:14] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t1757\nNgid:\t0\nPid:\t1757\nPPid:\t863\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1757\t888\nNSpid:\t1757\t888\nNSpgid:\t863\t1\nNSsid:\t863\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 40 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t9/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:14] INFO -- CNTI: all_statuses_by_pids pid: 196 [2025-07-10 11:58:14] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:14] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:14] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:14] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:14] INFO -- 
CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:14] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t196\nNgid:\t0\nPid:\t196\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t2048\nGroups:\t0 \nNStgid:\t196\nNSpid:\t196\nNSpgid:\t196\nNSsid:\t196\nVmPeak:\t 9919136 kB\nVmSize:\t 9517532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 143424 kB\nVmRSS:\t 91648 kB\nRssAnon:\t 66124 kB\nRssFile:\t 25524 kB\nRssShmem:\t 0 kB\nVmData:\t 747944 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1316 kB\nVmSwap:\t 356 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t63\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t126\nnonvoluntary_ctxt_switches:\t2\n", error: ""} [2025-07-10 11:58:14] INFO -- CNTI: all_statuses_by_pids pid: 2429956 [2025-07-10 11:58:14] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:14] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 
11:58:15] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:15] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:15] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:15] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2429956\nNgid:\t0\nPid:\t2429956\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2429956\nNSpid:\t2429956\nNSpgid:\t2429956\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11504 kB\nVmRSS:\t 11240 kB\nRssAnon:\t 3944 kB\nRssFile:\t 7296 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t31\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:15] INFO -- CNTI: all_statuses_by_pids pid: 2429982 [2025-07-10 11:58:15] INFO -- CNTI: exec_by_node: Called with 
JSON [2025-07-10 11:58:15] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:15] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:15] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:15] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:15] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2429982\nNgid:\t0\nPid:\t2429982\nPPid:\t2429956\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2429982\nNSpid:\t2429982\nNSpgid:\t2429982\nNSsid:\t2429982\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t27\nnonvoluntary_ctxt_switches:\t9\n", error: ""} [2025-07-10 11:58:15] INFO -- CNTI: all_statuses_by_pids pid: 2430009 [2025-07-10 11:58:15] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:15] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:15] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:15] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:15] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:16] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430009\nNgid:\t0\nPid:\t2430009\nPPid:\t2429956\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430009\nNSpid:\t2430009\nNSpgid:\t2430009\nNSsid:\t2430009\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 892 kB\nVmRSS:\t 892 kB\nRssAnon:\t 88 kB\nRssFile:\t 804 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t80\nnonvoluntary_ctxt_switches:\t10\n", error: ""} [2025-07-10 11:58:16] INFO -- CNTI: all_statuses_by_pids pid: 2430296 [2025-07-10 11:58:16] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:16] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:16] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:16] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:16] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:16] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430296\nNgid:\t0\nPid:\t2430296\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430296\nNSpid:\t2430296\nNSpgid:\t2430296\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 
10784 kB\nVmRSS:\t 10392 kB\nRssAnon:\t 3148 kB\nRssFile:\t 7244 kB\nRssShmem:\t 0 kB\nVmData:\t 44856 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:16] INFO -- CNTI: all_statuses_by_pids pid: 2430319 [2025-07-10 11:58:16] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:16] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:16] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:16] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:16] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:16] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t2430319\nNgid:\t0\nPid:\t2430319\nPPid:\t2430296\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2430319\t1\nNSpid:\t2430319\t1\nNSpgid:\t2430319\t1\nNSsid:\t2430319\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t27\nnonvoluntary_ctxt_switches:\t11\n", error: ""} [2025-07-10 11:58:16] INFO -- CNTI: all_statuses_by_pids pid: 2430647 [2025-07-10 11:58:16] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:16] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:16] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:16] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs 
[2025-07-10 11:58:16] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:17] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430647\nNgid:\t0\nPid:\t2430647\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430647\nNSpid:\t2430647\nNSpgid:\t2430647\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10548 kB\nVmRSS:\t 10420 kB\nRssAnon:\t 3432 kB\nRssFile:\t 6988 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:17] INFO -- CNTI: all_statuses_by_pids pid: 2430672 [2025-07-10 11:58:17] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:17] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:17] DEBUG -- 
CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:17] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:17] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:17] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:17] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430672\nNgid:\t0\nPid:\t2430672\nPPid:\t2430647\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2430672\t1\nNSpid:\t2430672\t1\nNSpgid:\t2430672\t1\nNSsid:\t2430672\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t24\nnonvoluntary_ctxt_switches:\t9\n", error: ""} [2025-07-10 11:58:17] INFO -- CNTI: 
all_statuses_by_pids pid: 2430700 [2025-07-10 11:58:17] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:17] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:17] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:17] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:17] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:17] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:18] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430700\nNgid:\t0\nPid:\t2430700\nPPid:\t2430647\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t2430700\t1\nNSpid:\t2430700\t1\nNSpgid:\t2430700\t1\nNSsid:\t2430700\t1\nVmPeak:\t 1261932 kB\nVmSize:\t 1261932 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38044 kB\nVmRSS:\t 38044 kB\nRssAnon:\t 15048 kB\nRssFile:\t 22996 kB\nRssShmem:\t 0 kB\nVmData:\t 62660 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 188 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t699\nnonvoluntary_ctxt_switches:\t10\n", error: ""} [2025-07-10 11:58:18] INFO -- CNTI: all_statuses_by_pids pid: 2431990 [2025-07-10 11:58:18] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:18] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:18] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:18] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:18] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:18] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2431990\nNgid:\t0\nPid:\t2431990\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2431990\t1\nNSpid:\t2431990\t1\nNSpgid:\t2431990\t1\nNSsid:\t2431990\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38412 kB\nVmRSS:\t 38412 kB\nRssAnon:\t 10484 kB\nRssFile:\t 27928 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 184 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t15\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t546\nnonvoluntary_ctxt_switches:\t12\n", error: ""} [2025-07-10 11:58:18] INFO -- CNTI: all_statuses_by_pids pid: 2432173 [2025-07-10 11:58:18] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:18] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:18] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:18] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:18] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:18] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: cat: /proc/2432173/status: No such file or directory command terminated with exit code 1 [2025-07-10 11:58:18] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[1], output: "", error: "cat: /proc/2432173/status: No such file or directory\ncommand terminated with exit code 1\n"} [2025-07-10 11:58:18] INFO -- CNTI: all_statuses_by_pids pid: 311 
[2025-07-10 11:58:18] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:18] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:18] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:18] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:18] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:19] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t311\nNgid:\t568138\nPid:\t311\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t311\nNSpid:\t311\nNSpgid:\t311\nNSsid:\t311\nVmPeak:\t 8105456 kB\nVmSize:\t 8105456 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 126300 kB\nVmRSS:\t 111956 kB\nRssAnon:\t 77612 kB\nRssFile:\t 34344 kB\nRssShmem:\t 0 kB\nVmData:\t 920192 kB\nVmStk:\t 132 kB\nVmExe:\t 35224 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1240 kB\nVmSwap:\t 704 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t92\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t5659384\nnonvoluntary_ctxt_switches:\t8413\n", error: ""} [2025-07-10 11:58:19] INFO -- CNTI: all_statuses_by_pids pid: 399 [2025-07-10 11:58:19] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:19] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:19] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:19] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:19] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:19] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t399\nNgid:\t0\nPid:\t399\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t399\nNSpid:\t399\nNSpgid:\t399\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 9908 kB\nVmRSS:\t 9644 kB\nRssAnon:\t 3180 kB\nRssFile:\t 6464 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 60 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:19] INFO -- CNTI: all_statuses_by_pids pid: 401 [2025-07-10 11:58:19] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:19] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:19] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:19] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:19] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:20] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t401\nNgid:\t0\nPid:\t401\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t401\nNSpid:\t401\nNSpgid:\t401\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11008 kB\nVmRSS:\t 10656 
kB\nRssAnon:\t 3156 kB\nRssFile:\t 7500 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 280 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:20] INFO -- CNTI: all_statuses_by_pids pid: 447 [2025-07-10 11:58:20] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:20] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:20] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:20] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:20] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:20] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t447\nNgid:\t0\nPid:\t447\nPPid:\t399\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t447\t1\nNSpid:\t447\t1\nNSpgid:\t447\t1\nNSsid:\t447\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t22\nnonvoluntary_ctxt_switches:\t8\n", error: ""} [2025-07-10 11:58:20] INFO -- CNTI: all_statuses_by_pids pid: 454 [2025-07-10 11:58:20] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:20] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:20] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:20] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:20] INFO -- 
CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:20] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t454\nNgid:\t0\nPid:\t454\nPPid:\t401\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t454\t1\nNSpid:\t454\t1\nNSpgid:\t454\t1\nNSsid:\t454\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t25\nnonvoluntary_ctxt_switches:\t12\n", error: ""} [2025-07-10 11:58:20] INFO -- CNTI: all_statuses_by_pids pid: 507 [2025-07-10 11:58:20] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:20] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 
11:58:21] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:21] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:21] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:21] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t507\nNgid:\t568892\nPid:\t507\nPPid:\t399\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t507\t1\nNSpid:\t507\t1\nNSpgid:\t507\t1\nNSsid:\t507\t1\nVmPeak:\t 1300092 kB\nVmSize:\t 1300092 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 59292 kB\nVmRSS:\t 35556 kB\nRssAnon:\t 21008 kB\nRssFile:\t 14548 kB\nRssShmem:\t 0 kB\nVmData:\t 95744 kB\nVmStk:\t 132 kB\nVmExe:\t 30360 kB\nVmLib:\t 8 kB\nVmPTE:\t 316 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t39\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t156232\nnonvoluntary_ctxt_switches:\t444\n", error: ""} [2025-07-10 11:58:21] INFO -- CNTI: all_statuses_by_pids pid: 696 [2025-07-10 11:58:21] INFO -- CNTI: exec_by_node: Called with 
JSON [2025-07-10 11:58:21] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:21] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:21] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:21] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:21] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t696\nNgid:\t569898\nPid:\t696\nPPid:\t401\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t696\t1\nNSpid:\t696\t1\nNSpgid:\t696\t1\nNSsid:\t696\t1\nVmPeak:\t 1285448 kB\nVmSize:\t 1285448 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 46832 kB\nVmRSS:\t 27284 kB\nRssAnon:\t 15824 kB\nRssFile:\t 11460 kB\nRssShmem:\t 0 kB\nVmData:\t 72080 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 264 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t42\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t264029\nnonvoluntary_ctxt_switches:\t8707\n", error: ""} [2025-07-10 11:58:21] INFO -- CNTI: all_statuses_by_pids pid: 804 [2025-07-10 11:58:21] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:21] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:21] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:21] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:21] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:22] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t804\nNgid:\t0\nPid:\t804\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t804\nNSpid:\t804\nNSpgid:\t804\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10624 kB\nVmRSS:\t 10400 kB\nRssAnon:\t 3220 kB\nRssFile:\t 7180 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-07-10 11:58:22] INFO -- CNTI: all_statuses_by_pids pid: 829 [2025-07-10 11:58:22] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:22] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:22] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:22] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:22] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t829\nNgid:\t0\nPid:\t829\nPPid:\t804\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t829\t1\nNSpid:\t829\t1\nNSpgid:\t829\t1\nNSsid:\t829\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 
kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t24\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-07-10 11:58:22] INFO -- CNTI: all_statuses_by_pids pid: 863 [2025-07-10 11:58:22] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:22] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:22] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:22] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:22] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsh\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t863\nNgid:\t0\nPid:\t863\nPPid:\t804\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t863\t1\nNSpid:\t863\t1\nNSpgid:\t863\t1\nNSsid:\t863\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 1036 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t922\nnonvoluntary_ctxt_switches:\t6\n", error: ""} [2025-07-10 11:58:22] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 34384 kB\nVmSize:\t 34384 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 25732 kB\nVmRSS:\t 25732 kB\nRssAnon:\t 17128 kB\nRssFile:\t 8604 kB\nRssShmem:\t 0 kB\nVmData:\t 16376 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 104 
kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t663760\nnonvoluntary_ctxt_switches:\t29775\n", "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t174\nNgid:\t564957\nPid:\t174\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t174\nNSpid:\t174\nNSpgid:\t174\nNSsid:\t174\nVmPeak:\t 430536 kB\nVmSize:\t 422496 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 291312 kB\nVmRSS:\t 279928 kB\nRssAnon:\t 1132 kB\nRssFile:\t 6940 kB\nRssShmem:\t 271856 kB\nVmData:\t 8964 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 732 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000100000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force 
mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t482297\nnonvoluntary_ctxt_switches:\t1269\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1757\nNgid:\t0\nPid:\t1757\nPPid:\t863\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1757\t888\nNSpid:\t1757\t888\nNSpgid:\t863\t1\nNSsid:\t863\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 40 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t9/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t196\nNgid:\t0\nPid:\t196\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t2048\nGroups:\t0 \nNStgid:\t196\nNSpid:\t196\nNSpgid:\t196\nNSsid:\t196\nVmPeak:\t 9919136 kB\nVmSize:\t 9517532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 143424 kB\nVmRSS:\t 91648 kB\nRssAnon:\t 66124 kB\nRssFile:\t 25524 kB\nRssShmem:\t 0 kB\nVmData:\t 747944 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1316 kB\nVmSwap:\t 356 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t63\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t126\nnonvoluntary_ctxt_switches:\t2\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t2429956\nNgid:\t0\nPid:\t2429956\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2429956\nNSpid:\t2429956\nNSpgid:\t2429956\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11504 kB\nVmRSS:\t 11240 kB\nRssAnon:\t 3944 kB\nRssFile:\t 7296 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t31\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2429982\nNgid:\t0\nPid:\t2429982\nPPid:\t2429956\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2429982\nNSpid:\t2429982\nNSpgid:\t2429982\nNSsid:\t2429982\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t27\nnonvoluntary_ctxt_switches:\t9\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430009\nNgid:\t0\nPid:\t2430009\nPPid:\t2429956\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430009\nNSpid:\t2430009\nNSpgid:\t2430009\nNSsid:\t2430009\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 892 kB\nVmRSS:\t 892 kB\nRssAnon:\t 88 kB\nRssFile:\t 804 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t80\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430296\nNgid:\t0\nPid:\t2430296\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430296\nNSpid:\t2430296\nNSpgid:\t2430296\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10784 kB\nVmRSS:\t 10392 kB\nRssAnon:\t 3148 kB\nRssFile:\t 7244 kB\nRssShmem:\t 0 kB\nVmData:\t 44856 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", 
"Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430319\nNgid:\t0\nPid:\t2430319\nPPid:\t2430296\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2430319\t1\nNSpid:\t2430319\t1\nNSpgid:\t2430319\t1\nNSsid:\t2430319\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t27\nnonvoluntary_ctxt_switches:\t11\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430647\nNgid:\t0\nPid:\t2430647\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430647\nNSpid:\t2430647\nNSpgid:\t2430647\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10548 kB\nVmRSS:\t 10420 kB\nRssAnon:\t 3432 kB\nRssFile:\t 6988 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 
kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430672\nNgid:\t0\nPid:\t2430672\nPPid:\t2430647\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2430672\t1\nNSpid:\t2430672\t1\nNSpgid:\t2430672\t1\nNSsid:\t2430672\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t24\nnonvoluntary_ctxt_switches:\t9\n", "Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430700\nNgid:\t0\nPid:\t2430700\nPPid:\t2430647\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t2430700\t1\nNSpid:\t2430700\t1\nNSpgid:\t2430700\t1\nNSsid:\t2430700\t1\nVmPeak:\t 1261932 kB\nVmSize:\t 1261932 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38044 kB\nVmRSS:\t 38044 kB\nRssAnon:\t 15048 kB\nRssFile:\t 22996 kB\nRssShmem:\t 0 kB\nVmData:\t 62660 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 188 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread 
vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t699\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2431990\nNgid:\t0\nPid:\t2431990\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2431990\t1\nNSpid:\t2431990\t1\nNSpgid:\t2431990\t1\nNSsid:\t2431990\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38412 kB\nVmRSS:\t 38412 kB\nRssAnon:\t 10484 kB\nRssFile:\t 27928 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 184 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t15\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t546\nnonvoluntary_ctxt_switches:\t12\n", "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t311\nNgid:\t568138\nPid:\t311\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t311\nNSpid:\t311\nNSpgid:\t311\nNSsid:\t311\nVmPeak:\t 8105456 kB\nVmSize:\t 8105456 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 126300 kB\nVmRSS:\t 111956 kB\nRssAnon:\t 77612 kB\nRssFile:\t 34344 kB\nRssShmem:\t 0 kB\nVmData:\t 920192 kB\nVmStk:\t 132 kB\nVmExe:\t 35224 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1240 kB\nVmSwap:\t 704 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t92\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t5659384\nnonvoluntary_ctxt_switches:\t8413\n", 
"Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t399\nNgid:\t0\nPid:\t399\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t399\nNSpid:\t399\nNSpgid:\t399\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 9908 kB\nVmRSS:\t 9644 kB\nRssAnon:\t 3180 kB\nRssFile:\t 6464 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 60 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t401\nNgid:\t0\nPid:\t401\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t401\nNSpid:\t401\nNSpgid:\t401\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11008 kB\nVmRSS:\t 10656 kB\nRssAnon:\t 3156 kB\nRssFile:\t 7500 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 280 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t447\nNgid:\t0\nPid:\t447\nPPid:\t399\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t447\t1\nNSpid:\t447\t1\nNSpgid:\t447\t1\nNSsid:\t447\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t22\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t454\nNgid:\t0\nPid:\t454\nPPid:\t401\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t454\t1\nNSpid:\t454\t1\nNSpgid:\t454\t1\nNSsid:\t454\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t25\nnonvoluntary_ctxt_switches:\t12\n", 
"Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t507\nNgid:\t568892\nPid:\t507\nPPid:\t399\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t507\t1\nNSpid:\t507\t1\nNSpgid:\t507\t1\nNSsid:\t507\t1\nVmPeak:\t 1300092 kB\nVmSize:\t 1300092 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 59292 kB\nVmRSS:\t 35556 kB\nRssAnon:\t 21008 kB\nRssFile:\t 14548 kB\nRssShmem:\t 0 kB\nVmData:\t 95744 kB\nVmStk:\t 132 kB\nVmExe:\t 30360 kB\nVmLib:\t 8 kB\nVmPTE:\t 316 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t39\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t156232\nnonvoluntary_ctxt_switches:\t444\n", "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t696\nNgid:\t569898\nPid:\t696\nPPid:\t401\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t696\t1\nNSpid:\t696\t1\nNSpgid:\t696\t1\nNSsid:\t696\t1\nVmPeak:\t 1285448 kB\nVmSize:\t 1285448 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 46832 kB\nVmRSS:\t 27284 kB\nRssAnon:\t 15824 kB\nRssFile:\t 11460 kB\nRssShmem:\t 0 kB\nVmData:\t 72080 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 264 
kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t42\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t264029\nnonvoluntary_ctxt_switches:\t8707\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t804\nNgid:\t0\nPid:\t804\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t804\nNSpid:\t804\nNSpgid:\t804\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10624 kB\nVmRSS:\t 10400 kB\nRssAnon:\t 3220 kB\nRssFile:\t 7180 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t829\nNgid:\t0\nPid:\t829\nPPid:\t804\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t829\t1\nNSpid:\t829\t1\nNSpgid:\t829\t1\nNSsid:\t829\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t24\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsh\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t863\nNgid:\t0\nPid:\t863\nPPid:\t804\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t863\t1\nNSpid:\t863\t1\nNSpgid:\t863\t1\nNSsid:\t863\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 1036 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t922\nnonvoluntary_ctxt_switches:\t6\n"] [2025-07-10 11:58:22] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 2431990 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 34384 kB\nVmSize:\t 34384 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 25732 kB\nVmRSS:\t 25732 kB\nRssAnon:\t 17128 kB\nRssFile:\t 8604 kB\nRssShmem:\t 0 
kB\nVmData:\t 16376 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 104 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t663760\nnonvoluntary_ctxt_switches:\t29775\n", "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t174\nNgid:\t564957\nPid:\t174\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t174\nNSpid:\t174\nNSpgid:\t174\nNSsid:\t174\nVmPeak:\t 430536 kB\nVmSize:\t 422496 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 291312 kB\nVmRSS:\t 279928 kB\nRssAnon:\t 1132 kB\nRssFile:\t 6940 kB\nRssShmem:\t 271856 kB\nVmData:\t 8964 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 732 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000100000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t482297\nnonvoluntary_ctxt_switches:\t1269\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1757\nNgid:\t0\nPid:\t1757\nPPid:\t863\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1757\t888\nNSpid:\t1757\t888\nNSpgid:\t863\t1\nNSsid:\t863\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 40 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t9/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t196\nNgid:\t0\nPid:\t196\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t2048\nGroups:\t0 \nNStgid:\t196\nNSpid:\t196\nNSpgid:\t196\nNSsid:\t196\nVmPeak:\t 9919136 kB\nVmSize:\t 9517532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 143424 kB\nVmRSS:\t 91648 kB\nRssAnon:\t 66124 kB\nRssFile:\t 25524 kB\nRssShmem:\t 0 kB\nVmData:\t 747944 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1316 kB\nVmSwap:\t 356 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t63\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t126\nnonvoluntary_ctxt_switches:\t2\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t2429956\nNgid:\t0\nPid:\t2429956\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2429956\nNSpid:\t2429956\nNSpgid:\t2429956\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11504 kB\nVmRSS:\t 11240 kB\nRssAnon:\t 3944 kB\nRssFile:\t 7296 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t31\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2429982\nNgid:\t0\nPid:\t2429982\nPPid:\t2429956\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2429982\nNSpid:\t2429982\nNSpgid:\t2429982\nNSsid:\t2429982\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t27\nnonvoluntary_ctxt_switches:\t9\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430009\nNgid:\t0\nPid:\t2430009\nPPid:\t2429956\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430009\nNSpid:\t2430009\nNSpgid:\t2430009\nNSsid:\t2430009\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 892 kB\nVmRSS:\t 892 kB\nRssAnon:\t 88 kB\nRssFile:\t 804 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t80\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430296\nNgid:\t0\nPid:\t2430296\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430296\nNSpid:\t2430296\nNSpgid:\t2430296\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10784 kB\nVmRSS:\t 10392 kB\nRssAnon:\t 3148 kB\nRssFile:\t 7244 kB\nRssShmem:\t 0 kB\nVmData:\t 44856 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", 
"Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430319\nNgid:\t0\nPid:\t2430319\nPPid:\t2430296\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2430319\t1\nNSpid:\t2430319\t1\nNSpgid:\t2430319\t1\nNSsid:\t2430319\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t27\nnonvoluntary_ctxt_switches:\t11\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430647\nNgid:\t0\nPid:\t2430647\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2430647\nNSpid:\t2430647\nNSpgid:\t2430647\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10548 kB\nVmRSS:\t 10420 kB\nRssAnon:\t 3432 kB\nRssFile:\t 6988 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 
kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430672\nNgid:\t0\nPid:\t2430672\nPPid:\t2430647\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t2430672\t1\nNSpid:\t2430672\t1\nNSpgid:\t2430672\t1\nNSsid:\t2430672\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t24\nnonvoluntary_ctxt_switches:\t9\n", "Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2430700\nNgid:\t0\nPid:\t2430700\nPPid:\t2430647\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t2430700\t1\nNSpid:\t2430700\t1\nNSpgid:\t2430700\t1\nNSsid:\t2430700\t1\nVmPeak:\t 1261932 kB\nVmSize:\t 1261932 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38044 kB\nVmRSS:\t 38044 kB\nRssAnon:\t 15048 kB\nRssFile:\t 22996 kB\nRssShmem:\t 0 kB\nVmData:\t 62660 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 188 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread 
vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t699\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t2431990\nNgid:\t0\nPid:\t2431990\nPPid:\t2430296\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t2431990\t1\nNSpid:\t2431990\t1\nNSpgid:\t2431990\t1\nNSsid:\t2431990\t1\nVmPeak:\t 747724 kB\nVmSize:\t 747724 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38412 kB\nVmRSS:\t 38412 kB\nRssAnon:\t 10484 kB\nRssFile:\t 27928 kB\nRssShmem:\t 0 kB\nVmData:\t 107912 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 184 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t15\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t546\nnonvoluntary_ctxt_switches:\t12\n", "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t311\nNgid:\t568138\nPid:\t311\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t311\nNSpid:\t311\nNSpgid:\t311\nNSsid:\t311\nVmPeak:\t 8105456 kB\nVmSize:\t 8105456 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 126300 kB\nVmRSS:\t 111956 kB\nRssAnon:\t 77612 kB\nRssFile:\t 34344 kB\nRssShmem:\t 0 kB\nVmData:\t 920192 kB\nVmStk:\t 132 kB\nVmExe:\t 35224 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1240 kB\nVmSwap:\t 704 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t92\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t5659384\nnonvoluntary_ctxt_switches:\t8413\n", 
"Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t507\nNgid:\t568892\nPid:\t507\nPPid:\t399\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t507\t1\nNSpid:\t507\t1\nNSpgid:\t507\t1\nNSsid:\t507\t1\nVmPeak:\t 1300092 kB\nVmSize:\t 1300092 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 59292 kB\nVmRSS:\t 35556 kB\nRssAnon:\t 21008 kB\nRssFile:\t 14548 kB\nRssShmem:\t 0 kB\nVmData:\t 95744 kB\nVmStk:\t 132 kB\nVmExe:\t 30360 kB\nVmLib:\t 8 kB\nVmPTE:\t 316 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t39\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t156232\nnonvoluntary_ctxt_switches:\t444\n", "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t696\nNgid:\t569898\nPid:\t696\nPPid:\t401\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t696\t1\nNSpid:\t696\t1\nNSpgid:\t696\t1\nNSsid:\t696\t1\nVmPeak:\t 1285448 kB\nVmSize:\t 1285448 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 46832 kB\nVmRSS:\t 27284 kB\nRssAnon:\t 15824 kB\nRssFile:\t 11460 kB\nRssShmem:\t 0 kB\nVmData:\t 72080 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 264 
kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t42\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t264029\nnonvoluntary_ctxt_switches:\t8707\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t804\nNgid:\t0\nPid:\t804\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t804\nNSpid:\t804\nNSpgid:\t804\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10624 kB\nVmRSS:\t 10400 kB\nRssAnon:\t 3220 kB\nRssFile:\t 7180 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 4 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t13\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t829\nNgid:\t0\nPid:\t829\nPPid:\t804\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t829\t1\nNSpid:\t829\t1\nNSpgid:\t829\t1\nNSsid:\t829\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t24\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsh\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t863\nNgid:\t0\nPid:\t863\nPPid:\t804\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t863\t1\nNSpid:\t863\t1\nNSpgid:\t863\t1\nNSsid:\t863\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 1036 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t922\nnonvoluntary_ctxt_switches:\t6\n"] [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: systemd Umask: 0000 State: S (sleeping) Tgid: 1 Ngid: 0 Pid: 1 PPid: 0 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 NStgid: 1 NSpid: 1 NSpgid: 1 NSsid: 1 VmPeak: 34384 kB VmSize: 34384 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 25732 kB VmRSS: 25732 kB RssAnon: 17128 kB RssFile: 8604 kB RssShmem: 0 kB VmData: 16376 kB VmStk: 132 kB VmExe: 40 kB VmLib: 10688 kB VmPTE: 104 kB VmSwap: 4 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 
0000000000000000 ShdPnd: 0000000000000000 SigBlk: 7fe3c0fe28014a03 SigIgn: 0000000000001000 SigCgt: 00000000000004ec CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 663760 nonvoluntary_ctxt_switches: 29775 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "systemd", "Umask" => "0000", "State" => "S (sleeping)", "Tgid" => "1", "Ngid" => "0", "Pid" => "1", "PPid" => "0", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "256", "Groups" => "0", "NStgid" => "1", "NSpid" => "1", "NSpgid" => "1", "NSsid" => "1", "VmPeak" => "34384 kB", "VmSize" => "34384 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "25732 kB", "VmRSS" => "25732 kB", "RssAnon" => "17128 kB", "RssFile" => "8604 kB", "RssShmem" => "0 kB", "VmData" => "16376 kB", "VmStk" => "132 kB", "VmExe" => "40 kB", "VmLib" => "10688 kB", "VmPTE" => "104 kB", "VmSwap" => "4 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "7fe3c0fe28014a03", "SigIgn" => "0000000000001000", "SigCgt" => "00000000000004ec", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", 
"Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "663760", "nonvoluntary_ctxt_switches" => "29775"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: systemd-journal Umask: 0022 State: S (sleeping) Tgid: 174 Ngid: 564957 Pid: 174 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 174 NSpid: 174 NSpgid: 174 NSsid: 174 VmPeak: 430536 kB VmSize: 422496 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 291312 kB VmRSS: 279928 kB RssAnon: 1132 kB RssFile: 6940 kB RssShmem: 271856 kB VmData: 8964 kB VmStk: 132 kB VmExe: 92 kB VmLib: 9736 kB VmPTE: 732 kB VmSwap: 4 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000400004a02 SigIgn: 0000000000001000 SigCgt: 0000000100000040 CapInh: 0000000000000000 CapPrm: 00000025402800cf CapEff: 00000025402800cf CapBnd: 00000025402800cf CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 20 Speculation_Store_Bypass: thread force mitigated SpeculationIndirectBranch: conditional force disabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 482297 nonvoluntary_ctxt_switches: 1269 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "systemd-journal", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "174", "Ngid" => "564957", "Pid" => "174", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "174", "NSpid" => "174", "NSpgid" => "174", "NSsid" => "174", "VmPeak" => "430536 kB", "VmSize" => "422496 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "291312 kB", "VmRSS" => "279928 kB", "RssAnon" => "1132 kB", "RssFile" => "6940 kB", "RssShmem" => "271856 kB", "VmData" => "8964 kB", "VmStk" => "132 kB", "VmExe" => "92 kB", "VmLib" => "9736 kB", "VmPTE" => "732 kB", "VmSwap" => "4 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000400004a02", "SigIgn" => "0000000000001000", "SigCgt" => "0000000100000040", "CapInh" => "0000000000000000", "CapPrm" => "00000025402800cf", "CapEff" => "00000025402800cf", "CapBnd" => "00000025402800cf", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "20", "Speculation_Store_Bypass" => "thread force mitigated", "SpeculationIndirectBranch" => "conditional force disabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "482297", "nonvoluntary_ctxt_switches" => "1269"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: sleep Umask: 0022 State: S (sleeping) Tgid: 1757 Ngid: 0 Pid: 1757 PPid: 863 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 1 2 3 4 6 10 11 20 26 27 NStgid: 1757 888 NSpid: 1757 888 NSpgid: 863 1 NSsid: 863 1 VmPeak: 3552 kB VmSize: 1532 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 20 kB VmStk: 132 kB VmExe: 788 kB VmLib: 556 kB VmPTE: 40 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 9/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 1 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => 
"1757", "Ngid" => "0", "Pid" => "1757", "PPid" => "863", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0 1 2 3 4 6 10 11 20 26 27", "NStgid" => "1757\t888", "NSpid" => "1757\t888", "NSpgid" => "863\t1", "NSsid" => "863\t1", "VmPeak" => "3552 kB", "VmSize" => "1532 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "20 kB", "VmStk" => "132 kB", "VmExe" => "788 kB", "VmLib" => "556 kB", "VmPTE" => "40 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "9/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: containerd Umask: 0022 State: S (sleeping) Tgid: 196 Ngid: 0 Pid: 196 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 2048 Groups: 0 NStgid: 196 NSpid: 196 NSpgid: 196 NSsid: 196 VmPeak: 9919136 
kB VmSize: 9517532 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 143424 kB VmRSS: 91648 kB RssAnon: 66124 kB RssFile: 25524 kB RssShmem: 0 kB VmData: 747944 kB VmStk: 132 kB VmExe: 18236 kB VmLib: 1524 kB VmPTE: 1316 kB VmSwap: 356 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 63 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 126 nonvoluntary_ctxt_switches: 2 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "196", "Ngid" => "0", "Pid" => "196", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "2048", "Groups" => "0", "NStgid" => "196", "NSpid" => "196", "NSpgid" => "196", "NSsid" => "196", "VmPeak" => "9919136 kB", "VmSize" => "9517532 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "143424 kB", "VmRSS" => "91648 kB", "RssAnon" => "66124 kB", "RssFile" => "25524 kB", "RssShmem" => "0 kB", "VmData" => "747944 kB", "VmStk" => "132 kB", "VmExe" => "18236 kB", "VmLib" => "1524 kB", "VmPTE" => "1316 kB", "VmSwap" => "356 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "63", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", 
"ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "126", "nonvoluntary_ctxt_switches" => "2"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 2429956 Ngid: 0 Pid: 2429956 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 2429956 NSpid: 2429956 NSpgid: 2429956 NSsid: 196 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 11504 kB VmRSS: 11240 kB RssAnon: 3944 kB RssFile: 7296 kB RssShmem: 0 kB VmData: 45112 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: 
ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 31 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2429956", "Ngid" => "0", "Pid" => "2429956", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2429956", "NSpid" => "2429956", "NSpgid" => "2429956", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11504 kB", "VmRSS" => "11240 kB", "RssAnon" => "3944 kB", "RssFile" => "7296 kB", "RssShmem" => "0 kB", "VmData" => "45112 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "31", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 2429982 Ngid: 0 Pid: 2429982 PPid: 2429956 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 2429982 NSpid: 2429982 NSpgid: 2429982 NSsid: 2429982 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 27 nonvoluntary_ctxt_switches: 9 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S 
(sleeping)", "Tgid" => "2429982", "Ngid" => "0", "Pid" => "2429982", "PPid" => "2429956", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "2429982", "NSpid" => "2429982", "NSpgid" => "2429982", "NSsid" => "2429982", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "27", "nonvoluntary_ctxt_switches" => "9"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: sleep Umask: 0022 State: S (sleeping) Tgid: 2430009 Ngid: 0 Pid: 2430009 PPid: 2429956 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 
2430009 NSpid: 2430009 NSpgid: 2430009 NSsid: 2430009 VmPeak: 2488 kB VmSize: 2488 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 892 kB VmRSS: 892 kB RssAnon: 88 kB RssFile: 804 kB RssShmem: 0 kB VmData: 224 kB VmStk: 132 kB VmExe: 20 kB VmLib: 1524 kB VmPTE: 44 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 80 nonvoluntary_ctxt_switches: 10 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430009", "Ngid" => "0", "Pid" => "2430009", "PPid" => "2429956", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430009", "NSpid" => "2430009", "NSpgid" => "2430009", "NSsid" => "2430009", "VmPeak" => "2488 kB", "VmSize" => "2488 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "892 kB", "VmRSS" => "892 kB", "RssAnon" => "88 kB", "RssFile" => "804 kB", "RssShmem" => "0 kB", "VmData" => "224 kB", "VmStk" => "132 kB", "VmExe" => "20 kB", "VmLib" => "1524 kB", "VmPTE" => "44 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => 
"4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "80", "nonvoluntary_ctxt_switches" => "10"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 2430296 Ngid: 0 Pid: 2430296 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 2430296 NSpid: 2430296 NSpgid: 2430296 NSsid: 196 VmPeak: 1233548 kB VmSize: 1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10784 kB VmRSS: 10392 kB RssAnon: 3148 kB RssFile: 7244 kB RssShmem: 0 kB VmData: 44856 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 112 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: 
conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 10 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430296", "Ngid" => "0", "Pid" => "2430296", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430296", "NSpid" => "2430296", "NSpgid" => "2430296", "NSsid" => "196", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10784 kB", "VmRSS" => "10392 kB", "RssAnon" => "3148 kB", "RssFile" => "7244 kB", "RssShmem" => "0 kB", "VmData" => "44856 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "112 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "10", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 2430319 Ngid: 0 Pid: 2430319 PPid: 2430296 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 2430319 1 NSpid: 2430319 1 NSpgid: 2430319 1 NSsid: 2430319 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 27 nonvoluntary_ctxt_switches: 11 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => 
"S (sleeping)", "Tgid" => "2430319", "Ngid" => "0", "Pid" => "2430319", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "2430319\t1", "NSpid" => "2430319\t1", "NSpgid" => "2430319\t1", "NSsid" => "2430319\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "27", "nonvoluntary_ctxt_switches" => "11"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 2430647 Ngid: 0 Pid: 2430647 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 
Groups: 0 NStgid: 2430647 NSpid: 2430647 NSpgid: 2430647 NSsid: 196 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10548 kB VmRSS: 10420 kB RssAnon: 3432 kB RssFile: 6988 kB RssShmem: 0 kB VmData: 41016 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 9 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430647", "Ngid" => "0", "Pid" => "2430647", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2430647", "NSpid" => "2430647", "NSpgid" => "2430647", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10548 kB", "VmRSS" => "10420 kB", "RssAnon" => "3432 kB", "RssFile" => "6988 kB", "RssShmem" => "0 kB", "VmData" => "41016 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" 
=> "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "9", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 2430672 Ngid: 0 Pid: 2430672 PPid: 2430647 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 2430672 1 NSpid: 2430672 1 NSpgid: 2430672 1 NSsid: 2430672 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 
Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 24 nonvoluntary_ctxt_switches: 9 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430672", "Ngid" => "0", "Pid" => "2430672", "PPid" => "2430647", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "2430672\t1", "NSpid" => "2430672\t1", "NSpgid" => "2430672\t1", "NSsid" => "2430672\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "24", "nonvoluntary_ctxt_switches" => "9"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: chaos-operator Umask: 0022 State: S (sleeping) Tgid: 2430700 Ngid: 0 Pid: 2430700 PPid: 2430647 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 1000 NStgid: 2430700 1 NSpid: 2430700 1 NSpgid: 2430700 1 NSsid: 2430700 1 VmPeak: 1261932 kB VmSize: 1261932 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 38044 kB VmRSS: 38044 kB RssAnon: 15048 kB RssFile: 22996 kB RssShmem: 0 kB VmData: 62660 kB VmStk: 132 kB VmExe: 15232 kB VmLib: 8 kB VmPTE: 188 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 34 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 699 nonvoluntary_ctxt_switches: 10 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => 
"chaos-operator", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2430700", "Ngid" => "0", "Pid" => "2430700", "PPid" => "2430647", "TracerPid" => "0", "Uid" => "1000\t1000\t1000\t1000", "Gid" => "1000\t1000\t1000\t1000", "FDSize" => "64", "Groups" => "1000", "NStgid" => "2430700\t1", "NSpid" => "2430700\t1", "NSpgid" => "2430700\t1", "NSsid" => "2430700\t1", "VmPeak" => "1261932 kB", "VmSize" => "1261932 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "38044 kB", "VmRSS" => "38044 kB", "RssAnon" => "15048 kB", "RssFile" => "22996 kB", "RssShmem" => "0 kB", "VmData" => "62660 kB", "VmStk" => "132 kB", "VmExe" => "15232 kB", "VmLib" => "8 kB", "VmPTE" => "188 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "34", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "699", "nonvoluntary_ctxt_switches" => "10"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 2431990 Ngid: 0 Pid: 2431990 
PPid: 2430296 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 2431990 1 NSpid: 2431990 1 NSpgid: 2431990 1 NSsid: 2431990 1 VmPeak: 747724 kB VmSize: 747724 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 38412 kB VmRSS: 38412 kB RssAnon: 10484 kB RssFile: 27928 kB RssShmem: 0 kB VmData: 107912 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 184 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 15 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 546 nonvoluntary_ctxt_switches: 12 [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2431990", "Ngid" => "0", "Pid" => "2431990", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "2431990\t1", "NSpid" => "2431990\t1", "NSpgid" => "2431990\t1", "NSsid" => "2431990\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "38412 kB", "VmRSS" => "38412 kB", "RssAnon" => "10484 kB", "RssFile" => "27928 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", 
"VmPTE" => "184 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "15", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "546", "nonvoluntary_ctxt_switches" => "12"} [2025-07-10 11:58:22] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:22] INFO -- CNTI: cmdline_by_pid [2025-07-10 11:58:22] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:23] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:23] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:23] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:23] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:58:23] INFO -- CNTI: cmdline_by_node cmdline: 
{status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: kubelet Umask: 0022 State: S (sleeping) Tgid: 311 Ngid: 568138 Pid: 311 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 NStgid: 311 NSpid: 311 NSpgid: 311 NSsid: 311 VmPeak: 8105456 kB VmSize: 8105456 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 126300 kB VmRSS: 111956 kB RssAnon: 77612 kB RssFile: 34344 kB RssShmem: 0 kB VmData: 920192 kB VmStk: 132 kB VmExe: 35224 kB VmLib: 1560 kB VmPTE: 1240 kB VmSwap: 704 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 92 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 5659384 nonvoluntary_ctxt_switches: 8413 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kubelet", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "311", "Ngid" => "568138", "Pid" => "311", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "256", "Groups" => "0", "NStgid" => "311", "NSpid" => "311", "NSpgid" => "311", "NSsid" => "311", "VmPeak" => 
"8105456 kB", "VmSize" => "8105456 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "126300 kB", "VmRSS" => "111956 kB", "RssAnon" => "77612 kB", "RssFile" => "34344 kB", "RssShmem" => "0 kB", "VmData" => "920192 kB", "VmStk" => "132 kB", "VmExe" => "35224 kB", "VmLib" => "1560 kB", "VmPTE" => "1240 kB", "VmSwap" => "704 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "92", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "5659384", "nonvoluntary_ctxt_switches" => "8413"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 399 Ngid: 0 Pid: 399 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 399 NSpid: 399 NSpgid: 399 NSsid: 196 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 9908 kB VmRSS: 9644 kB RssAnon: 3180 kB RssFile: 6464 kB RssShmem: 0 kB VmData: 45112 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 116 kB VmSwap: 60 kB HugetlbPages: 0 kB 
CoreDumping: 0 THP_enabled: 1 Threads: 13 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 9 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "399", "Ngid" => "0", "Pid" => "399", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "399", "NSpid" => "399", "NSpgid" => "399", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "9908 kB", "VmRSS" => "9644 kB", "RssAnon" => "3180 kB", "RssFile" => "6464 kB", "RssShmem" => "0 kB", "VmData" => "45112 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "116 kB", "VmSwap" => "60 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "13", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", 
"CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "9", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 401 Ngid: 0 Pid: 401 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 401 NSpid: 401 NSpgid: 401 NSsid: 196 VmPeak: 1233548 kB VmSize: 1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 11008 kB VmRSS: 10656 kB RssAnon: 3156 kB RssFile: 7500 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 104 kB VmSwap: 280 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 13 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 7 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "401", "Ngid" => "0", "Pid" => "401", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "401", "NSpid" => "401", "NSpgid" => "401", "NSsid" => "196", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11008 kB", "VmRSS" => "10656 kB", "RssAnon" => "3156 kB", "RssFile" => "7500 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "104 kB", "VmSwap" => "280 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "13", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "7", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 447 Ngid: 0 Pid: 447 PPid: 399 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 447 1 NSpid: 447 1 NSpgid: 447 1 NSsid: 447 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 22 nonvoluntary_ctxt_switches: 8 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => 
"447", "Ngid" => "0", "Pid" => "447", "PPid" => "399", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "447\t1", "NSpid" => "447\t1", "NSpgid" => "447\t1", "NSsid" => "447\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "22", "nonvoluntary_ctxt_switches" => "8"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 454 Ngid: 0 Pid: 454 PPid: 401 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 454 1 NSpid: 454 1 
NSpgid: 454 1 NSsid: 454 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 25 nonvoluntary_ctxt_switches: 12 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "454", "Ngid" => "0", "Pid" => "454", "PPid" => "401", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "454\t1", "NSpid" => "454\t1", "NSpgid" => "454\t1", "NSsid" => "454\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => 
"0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "25", "nonvoluntary_ctxt_switches" => "12"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: kube-proxy Umask: 0022 State: S (sleeping) Tgid: 507 Ngid: 568892 Pid: 507 PPid: 399 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 507 1 NSpid: 507 1 NSpgid: 507 1 NSsid: 507 1 VmPeak: 1300092 kB VmSize: 1300092 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 59292 kB VmRSS: 35556 kB RssAnon: 21008 kB RssFile: 14548 kB RssShmem: 0 kB VmData: 95744 kB VmStk: 132 kB VmExe: 30360 kB VmLib: 8 kB VmPTE: 316 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 39 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: 
ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 156232 nonvoluntary_ctxt_switches: 444 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kube-proxy", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "507", "Ngid" => "568892", "Pid" => "507", "PPid" => "399", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "507\t1", "NSpid" => "507\t1", "NSpgid" => "507\t1", "NSsid" => "507\t1", "VmPeak" => "1300092 kB", "VmSize" => "1300092 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "59292 kB", "VmRSS" => "35556 kB", "RssAnon" => "21008 kB", "RssFile" => "14548 kB", "RssShmem" => "0 kB", "VmData" => "95744 kB", "VmStk" => "132 kB", "VmExe" => "30360 kB", "VmLib" => "8 kB", "VmPTE" => "316 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "39", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "156232", "nonvoluntary_ctxt_switches" => "444"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: kindnetd Umask: 0022 State: S (sleeping) Tgid: 696 Ngid: 569898 Pid: 696 PPid: 401 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 696 1 NSpid: 696 1 NSpgid: 696 1 NSsid: 696 1 VmPeak: 1285448 kB VmSize: 1285448 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 46832 kB VmRSS: 27284 kB RssAnon: 15824 kB RssFile: 11460 kB RssShmem: 0 kB VmData: 72080 kB VmStk: 132 kB VmExe: 25108 kB VmLib: 8 kB VmPTE: 264 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 42 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80435fb CapEff: 00000000a80435fb CapBnd: 00000000a80435fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 264029 nonvoluntary_ctxt_switches: 8707 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kindnetd", "Umask" => "0022", "State" => "S 
(sleeping)", "Tgid" => "696", "Ngid" => "569898", "Pid" => "696", "PPid" => "401", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "696\t1", "NSpid" => "696\t1", "NSpgid" => "696\t1", "NSsid" => "696\t1", "VmPeak" => "1285448 kB", "VmSize" => "1285448 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "46832 kB", "VmRSS" => "27284 kB", "RssAnon" => "15824 kB", "RssFile" => "11460 kB", "RssShmem" => "0 kB", "VmData" => "72080 kB", "VmStk" => "132 kB", "VmExe" => "25108 kB", "VmLib" => "8 kB", "VmPTE" => "264 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "42", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80435fb", "CapEff" => "00000000a80435fb", "CapBnd" => "00000000a80435fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "264029", "nonvoluntary_ctxt_switches" => "8707"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 804 Ngid: 0 Pid: 804 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 804 NSpid: 804 NSpgid: 
804 NSsid: 196 VmPeak: 1233548 kB VmSize: 1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10624 kB VmRSS: 10400 kB RssAnon: 3220 kB RssFile: 7180 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 104 kB VmSwap: 4 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 13 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 10 nonvoluntary_ctxt_switches: 0 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "804", "Ngid" => "0", "Pid" => "804", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "804", "NSpid" => "804", "NSpgid" => "804", "NSsid" => "196", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10624 kB", "VmRSS" => "10400 kB", "RssAnon" => "3220 kB", "RssFile" => "7180 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "104 kB", "VmSwap" => "4 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "13", "SigQ" => "4/256660", "SigPnd" => 
"0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "10", "nonvoluntary_ctxt_switches" => "0"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 829 Ngid: 0 Pid: 829 PPid: 804 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 829 1 NSpid: 829 1 NSpgid: 829 1 NSsid: 829 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: 
ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 24 nonvoluntary_ctxt_switches: 7 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "829", "Ngid" => "0", "Pid" => "829", "PPid" => "804", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "829\t1", "NSpid" => "829\t1", "NSpgid" => "829\t1", "NSsid" => "829\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "24", "nonvoluntary_ctxt_switches" => "7"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI: parse_status status_output: Name: sh Umask: 0022 State: S (sleeping) Tgid: 863 Ngid: 0 Pid: 863 PPid: 804 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 1 2 3 4 6 10 11 20 26 27 NStgid: 863 1 NSpid: 863 1 NSpgid: 863 1 NSsid: 863 1 VmPeak: 3552 kB VmSize: 1564 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 1036 kB VmRSS: 84 kB RssAnon: 80 kB RssFile: 4 kB RssShmem: 0 kB VmData: 52 kB VmStk: 132 kB VmExe: 788 kB VmLib: 556 kB VmPTE: 44 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000004 SigCgt: 0000000000010002 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 922 nonvoluntary_ctxt_switches: 6 [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sh", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "863", "Ngid" 
=> "0", "Pid" => "863", "PPid" => "804", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0 1 2 3 4 6 10 11 20 26 27", "NStgid" => "863\t1", "NSpid" => "863\t1", "NSpgid" => "863\t1", "NSsid" => "863\t1", "VmPeak" => "3552 kB", "VmSize" => "1564 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "1036 kB", "VmRSS" => "84 kB", "RssAnon" => "80 kB", "RssFile" => "4 kB", "RssShmem" => "0 kB", "VmData" => "52 kB", "VmStk" => "132 kB", "VmExe" => "788 kB", "VmLib" => "556 kB", "VmPTE" => "44 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000004", "SigCgt" => "0000000000010002", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "922", "nonvoluntary_ctxt_switches" => "6"} [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "2431990", "Ngid" => "0", "Pid" => "2431990", "PPid" => "2430296", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => 
"64", "Groups" => "0", "NStgid" => "2431990\t1", "NSpid" => "2431990\t1", "NSpgid" => "2431990\t1", "NSsid" => "2431990\t1", "VmPeak" => "747724 kB", "VmSize" => "747724 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "38412 kB", "VmRSS" => "38412 kB", "RssAnon" => "10484 kB", "RssFile" => "27928 kB", "RssShmem" => "0 kB", "VmData" => "107912 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "184 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "15", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "546", "nonvoluntary_ctxt_switches" => "12", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}] [2025-07-10 11:58:23] DEBUG -- CNTI-proctree_by_pid: [2025-07-10 11:58:23] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:23] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:23] DEBUG -- 
CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:23] INFO -- CNTI-KubectlClient.Utils.exec_bg: Exec background command in pod cluster-tools-xv7rs [2025-07-10 11:58:23] DEBUG -- CNTI: ClusterTools exec: {process: #), @wait_count=2, @channel=#>, output: "", error: ""} [2025-07-10 11:58:24] DEBUG -- CNTI: Time left: 9 seconds [2025-07-10 11:58:24] INFO -- CNTI-sig_term_handled: Attached strace to PIDs: 2431990 [2025-07-10 11:58:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-07-10 11:58:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-07-10 11:58:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-07-10 11:58:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-xv7rs [2025-07-10 11:58:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-xv7rs [2025-07-10 11:58:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-xv7rs [2025-07-10 11:58:32] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} ✔️ 🏆PASSED: [sig_term_handled] Sig Term handled ⚖👀 Microservice results: 2 of 4 tests passed  Reliability, Resilience, and Availability Tests [2025-07-10 11:58:35] INFO -- CNTI-sig_term_handled: PID 2431990 => SIGTERM captured? 
true [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'sig_term_handled' emoji: ⚖👀 [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'sig_term_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points: Task: 'sig_term_handled' type: essential [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'sig_term_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points: Task: 'sig_term_handled' type: essential [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.upsert_task-sig_term_handled: Task start time: 2025-07-10 11:58:08 UTC, end time: 2025-07-10 11:58:35 UTC [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.upsert_task-sig_term_handled: Task: 'sig_term_handled' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:27.306886240 [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits",
"hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled"] for tags: ["microservice", "cert"] [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["microservice", "cert"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:35] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 
11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 400, max tasks passed: 4 for tags: ["microservice", "cert"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled"] for tags: ["microservice", "cert"] [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["microservice", "cert"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", 
"node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:35] INFO -- 
CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 400, max tasks passed: 4 for tags: ["microservice", "cert"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1300, total tasks passed: 13 for tags: ["essential"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> 
failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: 
false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:58:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: 
false, bonus: [2025-07-10 11:58:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for 
task: container_sock_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 
100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", 
"status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" 
=> 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 400} [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:58:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-07-10 11:58:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 
11:58:36] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:58:36] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:58:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [liveness] [2025-07-10 11:58:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Task.task_runner.liveness: Starting test [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list elided; same as the dump for kind: Service above ...]
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list elided ...]
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list elided ...]
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list elided ...]
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [liveness] Helm liveness probe found ⎈🧫 [2025-07-10 11:58:36] INFO -- CNTI-liveness: Resource Deployment/coredns-coredns passed liveness?: true [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: true [2025-07-10 11:58:36] INFO -- CNTI-liveness: Workload resource task response: true [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'liveness' emoji: ⎈🧫 [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'liveness' tags: ["resilience", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points: Task: 'liveness' type: essential [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'liveness' tags: ["resilience", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points: Task: 'liveness' type: essential [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.upsert_task-liveness: Task start time: 2025-07-10 11:58:36 UTC, end time: 2025-07-10 11:58:36 UTC [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.upsert_task-liveness: Task: 'liveness' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.261257526 [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:58:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name 
"cnf-testsuite.yml" [2025-07-10 11:58:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-07-10 11:58:36] INFO -- CNTI: check_cnf_config args: # [2025-07-10 11:58:36] INFO -- CNTI: check_cnf_config cnf: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-07-10 11:58:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [readiness] [2025-07-10 11:58:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Task.task_runner.readiness: Starting test [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => 
{"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" 
=> 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, 
"successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" =>
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-07-10 11:58:36] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-07-10 11:58:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [readiness] Helm readiness probe found ⎈🧫 Reliability, resilience, and availability results: 2 of 2 tests passed  RESULTS SUMMARY  - 15 of 18 total tests passed  - 15 of 18 essential tests passed Results have been saved to results/cnf-testsuite-results-20250710-115332-086.yml [2025-07-10 11:58:36] DEBUG -- CNTI-readiness: coredns [2025-07-10 11:58:36] INFO -- CNTI-readiness: Resource Deployment/coredns-coredns passed liveness?: true [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: 
true, test passed: true [2025-07-10 11:58:36] INFO -- CNTI-readiness: Workload resource task response: true [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'readiness' emoji: ⎈🧫 [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'readiness' tags: ["resilience", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points: Task: 'readiness' type: essential [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'readiness' tags: ["resilience", "dynamic", "workload", "cert", "essential"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points: Task: 'readiness' type: essential [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.upsert_task-readiness: Task start time: 2025-07-10 11:58:36 UTC, end time: 2025-07-10 11:58:36 UTC [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.upsert_task-readiness: Task: 'readiness' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.253394492 [2025-07-10 11:58:36] DEBUG -- CNTI: resilience [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] DEBUG -- 
CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["liveness", "readiness"] for tags: ["resilience", "cert"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["resilience", "cert"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA 
status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 200, max tasks passed: 2 for tags: ["resilience", "cert"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["liveness", "readiness"] for tags: ["resilience", "cert"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total 
points scored: 200, total tasks passed: 2 for tags: ["resilience", "cert"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG 
-- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 200, max tasks passed: 2 for tags: ["resilience", "cert"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["essential"] [2025-07-10 11:58:36] DEBUG -- 
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:36] INFO 
-- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 
11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => 
"selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => 
"essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", 
"status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" 
=> 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-07-10 11:58:36] DEBUG -- CNTI: cert [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", 
"selinux_options", "latest_tag"] for tags: ["cert"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["cert"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:36] INFO -- 
CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:36] INFO -- 
CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-07-10 11:58:36] INFO -- 
CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: 
cpu_limits is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-07-10 11:58:36] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["cert"] [2025-07-10 11:58:36] DEBUG -- 
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", 
"selinux_options", "latest_tag"] for tags: ["essential"] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["essential"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-07-10 11:58:36] 
INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-07-10 11:58:36] INFO -- 
CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-07-10 11:58:36] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-07-10 11:58:36] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"]
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18"}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18"}
[2025-07-10 11:58:36] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18", "essential_passed" => "15 of 18"}
[2025-07-10 11:58:36] INFO -- CNTI: results yaml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18", "essential_passed" => "15 of 18"}
2025-07-10 11:58:37,040 - functest_kubernetes.cnf_conformance.conformance - WARNING - non_root_containers failed
2025-07-10 11:58:37,040 - functest_kubernetes.cnf_conformance.conformance - WARNING
- specialized_init_system failed
2025-07-10 11:58:37,040 - functest_kubernetes.cnf_conformance.conformance - WARNING - zombie_handled failed
2025-07-10 11:58:37,043 - functest_kubernetes.cnf_conformance.conformance - INFO -
+-------------------------------------------------------------+----------------+
| NAME                                                        | STATUS         |
+-------------------------------------------------------------+----------------+
| increase_decrease_capacity                                  | passed         |
| node_drain                                                  | passed         |
| privileged_containers                                       | passed         |
| non_root_containers                                         | failed         |
| cpu_limits                                                  | passed         |
| memory_limits                                               | passed         |
| hostpath_mounts                                             | passed         |
| container_sock_mounts                                       | passed         |
| selinux_options                                             | na             |
| hostport_not_used                                           | passed         |
| hardcoded_ip_addresses_in_k8s_runtime_configuration         | passed         |
| latest_tag                                                  | passed         |
| log_output                                                  | passed         |
| specialized_init_system                                     | failed         |
| single_process_type                                         | passed         |
| zombie_handled                                              | failed         |
| sig_term_handled                                            | passed         |
| liveness                                                    | passed         |
| readiness                                                   | passed         |
+-------------------------------------------------------------+----------------+
2025-07-10 11:58:37,165 - xtesting.ci.run_tests - INFO - Test result:
+-----------------------+------------------+------------------+----------------+
| TEST CASE             | PROJECT          | DURATION         | RESULT         |
+-----------------------+------------------+------------------+----------------+
| cnf_testsuite         | functest         | 05:35            | PASS           |
+-----------------------+------------------+------------------+----------------+
2025-07-10 11:58:37,499 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cnf_uninstall cnf-config=example-cnfs/coredns/cnf-testsuite.yml
Successfully uninstalled helm deployment "coredns".
All CNF deployments were uninstalled, some time might be needed for all resources to be down.
2025-07-10 11:58:50,431 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite uninstall_all cnf-config=example-cnfs/coredns/cnf-testsuite.yml
CNF uninstallation skipped.
No CNF config found in installed_cnf_files directory.
Uninstalling testsuite helper tools.
Testsuite helper tools uninstalled.
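The scoring arithmetic in the log above (points: 1500, maximum_points: 1800, essential_passed: "15 of 18") follows directly from the "items" list: each scored essential task is worth 100 points and "na" tasks are excluded from the maximum. A minimal sketch reproducing that summary from the logged statuses (an illustration only, not code from cnf-testsuite):

```python
# Task statuses as reported in the "results yaml" dump above.
items = [
    ("increase_decrease_capacity", "passed"), ("node_drain", "passed"),
    ("privileged_containers", "passed"), ("non_root_containers", "failed"),
    ("cpu_limits", "passed"), ("memory_limits", "passed"),
    ("hostpath_mounts", "passed"), ("container_sock_mounts", "passed"),
    ("selinux_options", "na"), ("hostport_not_used", "passed"),
    ("hardcoded_ip_addresses_in_k8s_runtime_configuration", "passed"),
    ("latest_tag", "passed"), ("log_output", "passed"),
    ("specialized_init_system", "failed"), ("single_process_type", "passed"),
    ("zombie_handled", "failed"), ("sig_term_handled", "passed"),
    ("liveness", "passed"), ("readiness", "passed"),
]

POINTS_PER_TASK = 100  # each essential task is worth 100 points in the log

# "na" tasks do not count toward the maximum (19 items -> 18 scored).
scored = [(name, status) for name, status in items if status != "na"]
points = sum(POINTS_PER_TASK for _, s in scored if s == "passed")
maximum = POINTS_PER_TASK * len(scored)
passed = sum(1 for _, s in scored if s == "passed")

print(f"points: {points} / {maximum}")                  # points: 1500 / 1800
print(f"essential_passed: {passed} of {len(scored)}")   # essential_passed: 15 of 18
```

With three tasks failed and one marked "na", this matches the logged totals and the 15-of-18 pass summary that xtesting reports as an overall PASS.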