2025-06-12 23:11:01,809 - xtesting.ci.run_tests - INFO - Deployment description:
+-------------------------+------------------------------------------------------------+
| ENV VAR                 | VALUE                                                      |
+-------------------------+------------------------------------------------------------+
| CI_LOOP                 | daily                                                      |
| DEBUG                   | false                                                      |
| DEPLOY_SCENARIO         | k8-nosdn-nofeature-noha                                    |
| INSTALLER_TYPE          | unknown                                                    |
| BUILD_TAG               | 0LLBVVTQKB4S                                               |
| NODE_NAME               | v1.31                                                      |
| TEST_DB_URL             | http://testresults.opnfv.org/test/api/v1/results           |
| TEST_DB_EXT_URL         | http://testresults.opnfv.org/test/api/v1/results           |
| S3_ENDPOINT_URL         | https://storage.googleapis.com                             |
| S3_DST_URL              | s3://artifacts.opnfv.org/functest-                         |
|                         | kubernetes/0LLBVVTQKB4S/functest-kubernetes-opnfv-         |
|                         | functest-kubernetes-cnf-v1.31-cnf_testsuite-run-37         |
| HTTP_DST_URL            | http://artifacts.opnfv.org/functest-                       |
|                         | kubernetes/0LLBVVTQKB4S/functest-kubernetes-opnfv-         |
|                         | functest-kubernetes-cnf-v1.31-cnf_testsuite-run-37         |
+-------------------------+------------------------------------------------------------+
2025-06-12 23:11:01,824 - xtesting.ci.run_tests - INFO - Loading test case 'cnf_testsuite'...
2025-06-12 23:11:02,181 - xtesting.ci.run_tests - INFO - Running test case 'cnf_testsuite'...
2025-06-12 23:11:13,418 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite setup -l debug
CNF TestSuite version: v1.4.4
Successfully created directories for cnf-testsuite
[2025-06-12 23:11:02] INFO -- CNTI: VERSION: v1.4.4
[2025-06-12 23:11:02] INFO -- CNTI-Setup.cnf_directory_setup: Creating directories for CNTI testsuite
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_local_install
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-06-12 23:11:02] INFO -- CNTI: Globally installed helm satisfies required version. Skipping local helm install.
Global helm found.
Version: v3.17.0
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_v2?:
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-06-12 23:11:02] DEBUG -- CNTI-Helm.helm_local_response.cmd: command: /home/xtesting/.cnf-testsuite/tools/helm/linux-amd64/helm version
No Local helm version found
[2025-06-12 23:11:02] WARN -- CNTI-Helm.helm_local_response.cmd: stderr: sh: line 0: /home/xtesting/.cnf-testsuite/tools/helm/linux-amd64/helm: not found
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_v2?:
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_v3?:
[2025-06-12 23:11:02] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-06-12 23:11:02] DEBUG -- CNTI-Helm.helm_gives_k8s_warning?.cmd: command: helm list
Global kubectl found.
Version: 1.31
No Local kubectl version found
Global git found.
Version: 2.45.3
No Local git version found
All prerequisites found.
KUBECONFIG is set as /home/xtesting/.kube/config.
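The prerequisite check above ("Globally installed helm satisfies required version. Skipping local helm install.") reduces to a version gate: prefer the global binary when it meets a minimum, otherwise fall back to a local copy under ~/.cnf-testsuite/tools. A minimal sketch of that gate in shell; the v3.8.0 minimum below is an assumption for illustration, the actual required version is not shown in the log:

```shell
# version_ge A B: true when dotted version A >= B (sort -V does the
# numeric-aware comparison; both arguments may carry a leading "v").
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Gate as in the log: global helm is v3.17.0; minimum is an assumed v3.8.0.
if version_ge v3.17.0 v3.8.0; then
    echo "global helm ok, skipping local install"
else
    echo "installing local helm"
fi
```

The same pattern covers the kubectl and git checks that follow in the log.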
[2025-06-12 23:11:02] INFO -- CNTI-Setup.create_namespace: Creating namespace for CNTI testsuite
[2025-06-12 23:11:02] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-06-12 23:11:02] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: cnf-testsuite
Created cnf-testsuite namespace on the Kubernetes cluster
[2025-06-12 23:11:03] INFO -- CNTI-Setup.create_namespace: cnf-testsuite namespace created
[2025-06-12 23:11:03] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-testsuite with pod-security.kubernetes.io/enforce=privileged
[2025-06-12 23:11:03] INFO -- CNTI-Setup.configuration_file_setup: Creating configuration file
[2025-06-12 23:11:03] DEBUG -- CNTI: install_apisnoop
[2025-06-12 23:11:03] INFO -- CNTI: GitClient.clone command: https://github.com/cncf/apisnoop /home/xtesting/.cnf-testsuite/tools/apisnoop
[2025-06-12 23:11:10] INFO -- CNTI: GitClient.clone output:
[2025-06-12 23:11:10] INFO -- CNTI: GitClient.clone stderr: Cloning into '/home/xtesting/.cnf-testsuite/tools/apisnoop'...
[2025-06-12 23:11:10] INFO -- CNTI: url: https://github.com/vmware-tanzu/sonobuoy/releases/download/v0.56.14/sonobuoy_0.56.14_linux_amd64.tar.gz
[2025-06-12 23:11:10] INFO -- CNTI: write_file: /home/xtesting/.cnf-testsuite/tools/sonobuoy/sonobuoy.tar.gz
[2025-06-12 23:11:10] DEBUG -- CNTI-http.client: Performing request
[2025-06-12 23:11:11] DEBUG -- CNTI-http.client: Performing request
[2025-06-12 23:11:12] DEBUG -- CNTI: Sonobuoy Version: v0.56.14
MinimumKubeVersion: 1.17.0
MaximumKubeVersion: 1.99.99
GitSHA: bd5465d6b2b2b92b517f4c6074008d22338ff509
GoVersion: go1.19.4
Platform: linux/amd64
API Version check skipped due to missing `--kubeconfig` or other error
[2025-06-12 23:11:12] INFO -- CNTI: install_kind
[2025-06-12 23:11:12] INFO -- CNTI: write_file: /home/xtesting/.cnf-testsuite/tools/kind/kind
[2025-06-12 23:11:12] INFO -- CNTI: install kind
[2025-06-12 23:11:12] INFO -- CNTI: url: https://github.com/kubernetes-sigs/kind/releases/download/v0.27.0/kind-linux-amd64
[2025-06-12 23:11:12] DEBUG -- CNTI-http.client: Performing request
[2025-06-12 23:11:12] DEBUG -- CNTI-http.client: Performing request
Dependency installation complete
2025-06-12 23:12:05,817 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cnf_install cnf-config=example-cnfs/coredns/cnf-testsuite.yml -l debug
Successfully created directories for cnf-testsuite
[2025-06-12 23:11:13] INFO -- CNTI-Setup.cnf_directory_setup: Creating directories for CNTI testsuite
[2025-06-12 23:11:13] DEBUG -- CNTI: helm_local_install
KUBECONFIG is set as /home/xtesting/.kube/config.
[2025-06-12 23:11:13] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-06-12 23:11:13] INFO -- CNTI: Globally installed helm satisfies required version. Skipping local helm install.
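The install_kind step above downloads a pinned release binary into the suite's tools directory. A sketch of the same step as plain shell; the version and URL are taken from the log, while the curl flags and the helper function name are assumptions:

```shell
# Compose the pinned kind release URL seen in the log for a given version tag.
kind_url() {
    echo "https://github.com/kubernetes-sigs/kind/releases/download/$1/kind-linux-amd64"
}

# Against a real host the setup would then fetch and mark it executable, e.g.:
#   mkdir -p "$HOME/.cnf-testsuite/tools/kind"
#   curl -fsSL -o "$HOME/.cnf-testsuite/tools/kind/kind" "$(kind_url v0.27.0)"
#   chmod +x "$HOME/.cnf-testsuite/tools/kind/kind"
kind_url v0.27.0
```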
[2025-06-12 23:11:13] INFO -- CNTI-Setup.create_namespace: Creating namespace for CNTI testsuite
[2025-06-12 23:11:13] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes
[2025-06-12 23:11:13] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: cnf-testsuite
cnf-testsuite namespace already exists on the Kubernetes cluster
[2025-06-12 23:11:13] WARN -- CNTI-KubectlClient.Apply.namespace.cmd: stderr: Error from server (AlreadyExists): namespaces "cnf-testsuite" already exists
[2025-06-12 23:11:13] INFO -- CNTI-Setup.create_namespace: cnf-testsuite namespace already exists, not creating
[2025-06-12 23:11:13] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-testsuite with pod-security.kubernetes.io/enforce=privileged
[2025-06-12 23:11:13] INFO -- CNTI-Setup.cnf_install: Installing CNF to cluster
[2025-06-12 23:11:13] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:11:13] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:11:13] DEBUG -- CNTI: find output:
[2025-06-12 23:11:13] WARN -- CNTI: find stderr: find: installed_cnf_files/*: No such file or directory
[2025-06-12 23:11:13] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: []
[2025-06-12 23:11:13] INFO -- CNTI: ClusterTools install
[2025-06-12 23:11:13] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces
[2025-06-12 23:11:13] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:11:03Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "420438", "uid" => "9b5c345e-7ef3-4138-b73e-f56b4a29c1f7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "18", "uid" => "6540a096-e272-41d8-a161-386e574f329f"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:02:19Z", "deletionTimestamp" => "2025-06-12T23:10:59Z", "generateName" => "ims-", "labels" => {"kubernetes.io/metadata.name" => "ims-hffr5", "pod-security.kubernetes.io/enforce" => "baseline"}, "name" => "ims-hffr5", "resourceVersion" => "420589", "uid" => "01557c7e-ba87-4ad5-8344-e0e5f6f1a467"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"conditions" => [{"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All resources successfully discovered", "reason" => "ResourcesDiscovered", "status" => "False", "type" => "NamespaceDeletionDiscoveryFailure"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All legacy kube types successfully parsed", "reason" => "ParsedGroupVersions", "status" => "False", "type" => "NamespaceDeletionGroupVersionParsingFailure"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All content successfully deleted, may be waiting on finalization", "reason" => "ContentDeleted", "status" => "False", "type" => "NamespaceDeletionContentFailure"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "Some resources are remaining: pods. has 6 resource instances", "reason" => "SomeResourcesRemain", "status" => "True", "type" => "NamespaceContentRemaining"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All content-preserving finalizers finished", "reason" => "ContentHasNoFinalizers", "status" => "False", "type" => "NamespaceFinalizersRemaining"}], "phase" => "Terminating"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "25", "uid" => "3bf69b14-e04e-47c2-b401-01ac67e2b525"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "11", "uid" => "bf9dde1e-d213-4b9b-a76e-2331e0268f98"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "aca03ac4-602a-479e-9465-c3fc642d9935"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:23:51Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "281", "uid" => "56adfc2f-0846-4aa8-b7ec-112037d8ba61"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}]
[2025-06-12 23:11:13] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file cluster_tools.yml
[2025-06-12 23:11:14] WARN -- CNTI-KubectlClient.Apply.file.cmd: stderr: Warning: would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true), privileged (container "cluster-tools" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "cluster-tools" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "cluster-tools" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "proc", "systemd", "hostfs" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "cluster-tools" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "cluster-tools" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-06-12 23:11:14] INFO -- CNTI: ClusterTools wait_for_cluster_tools
[2025-06-12 23:11:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces
[2025-06-12 23:11:14] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:11:03Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "420438", "uid" => "9b5c345e-7ef3-4138-b73e-f56b4a29c1f7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "18", "uid" => "6540a096-e272-41d8-a161-386e574f329f"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:02:19Z", "deletionTimestamp" => "2025-06-12T23:10:59Z", "generateName" => "ims-", "labels" => {"kubernetes.io/metadata.name" => "ims-hffr5", "pod-security.kubernetes.io/enforce" => "baseline"}, "name" => "ims-hffr5", "resourceVersion" => "420589", "uid" => "01557c7e-ba87-4ad5-8344-e0e5f6f1a467"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"conditions" => [{"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All resources successfully discovered", "reason" => "ResourcesDiscovered", "status" => "False", "type" => "NamespaceDeletionDiscoveryFailure"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All legacy kube types successfully parsed", "reason" => "ParsedGroupVersions", "status" => "False", "type" => "NamespaceDeletionGroupVersionParsingFailure"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All content successfully deleted, may be waiting on finalization", "reason" => "ContentDeleted", "status" => "False", "type" => "NamespaceDeletionContentFailure"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "Some resources are remaining: pods. has 6 resource instances", "reason" => "SomeResourcesRemain", "status" => "True", "type" => "NamespaceContentRemaining"}, {"lastTransitionTime" => "2025-06-12T23:11:04Z", "message" => "All content-preserving finalizers finished", "reason" => "ContentHasNoFinalizers", "status" => "False", "type" => "NamespaceFinalizersRemaining"}], "phase" => "Terminating"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "25", "uid" => "3bf69b14-e04e-47c2-b401-01ac67e2b525"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "11", "uid" => "bf9dde1e-d213-4b9b-a76e-2331e0268f98"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "aca03ac4-602a-479e-9465-c3fc642d9935"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:23:51Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "281", "uid" => "56adfc2f-0846-4aa8-b7ec-112037d8ba61"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}]
[2025-06-12 23:11:14] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Daemonset/cluster-tools to install
[2025-06-12 23:11:14] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:14] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:14] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:14] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 0
[2025-06-12 23:11:15] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:15] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:15] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:16] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:16] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:16] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:17] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:17] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:17] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:18] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:18] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:18] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:19] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:19] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:19] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:20] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:20] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:20] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:22] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:22] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:23] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:23] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:23] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:24] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:24] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:24] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:25] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:25] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:25] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 10
[2025-06-12 23:11:26] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:26] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:27] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:27] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:28] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:28] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:29] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:29] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:30] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:30] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:32] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:32] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:33] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:33] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:33] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:34] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:34] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:34] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:35] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:35] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:36] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:36] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:36] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 20
[2025-06-12 23:11:37] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:37] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:37] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:38] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:38] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:39] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:39] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:39] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:40] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:40] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:42] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:42] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:42] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
[2025-06-12 23:11:43] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready
[2025-06-12 23:11:43] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools
[2025-06-12 23:11:43] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Daemonset/cluster-tools
ClusterTools installed
CNF installation start.
Installing deployment "coredns".
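The block above is the suite's readiness poll: a resource_ready? check roughly once per second, with the elapsed time logged every 10 seconds. A minimal shell sketch of that loop, using a stand-in check command; against a live cluster the check would be something like `kubectl rollout status daemonset/cluster-tools -n cnf-testsuite` (names from the log, the loop shape itself is an assumption):

```shell
# wait_ready CHECK [TIMEOUT]: rerun CHECK once per second until it succeeds,
# echoing elapsed time every 10 s as the suite's log does; fail on timeout.
wait_ready() {
    check=$1
    timeout=${2:-30}
    elapsed=0
    until $check; do
        if [ "$elapsed" -ge "$timeout" ]; then
            return 1
        fi
        if [ $((elapsed % 10)) -eq 0 ]; then
            echo "seconds elapsed while waiting: $elapsed"
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    echo "ready after ${elapsed}s"
}

# Demo with a check that succeeds immediately:
wait_ready true 10
```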
[2025-06-12 23:11:43] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Daemonset/cluster-tools is ready
[2025-06-12 23:11:43] DEBUG -- CNTI-CNFInstall.parsed_cli_args: Parsed args: {config_path: "example-cnfs/coredns/cnf-testsuite.yml", timeout: 1800, skip_wait_for_install: false}
[2025-06-12 23:11:43] INFO -- CNTI-Helm.helm_repo_add: Adding helm repository: stable
[2025-06-12 23:11:43] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-06-12 23:11:43] DEBUG -- CNTI-Helm.helm_repo_add.cmd: command: helm repo add stable https://cncf.gitlab.io/stable
[2025-06-12 23:11:43] INFO -- CNTI-Helm.pull: Pulling helm chart: stable/coredns
[2025-06-12 23:11:43] DEBUG -- CNTI-Helm.pull.cmd: command: helm pull stable/coredns --untar --destination installed_cnf_files/deployments/coredns
[2025-06-12 23:11:44] INFO -- CNTI-CNFManager.ensure_namespace_exists!: Ensure that namespace: cnf-default exists on the cluster for the CNF install
[2025-06-12 23:11:44] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: cnf-default
[2025-06-12 23:11:44] INFO -- CNTI-KubectlClient.Utils.label: Label namespace/cnf-default with pod-security.kubernetes.io/enforce=privileged
[2025-06-12 23:11:44] INFO -- CNTI-Helm.install: Installing helm chart: installed_cnf_files/deployments/coredns/coredns
[2025-06-12 23:11:44] DEBUG -- CNTI-Helm.install: Values:
[2025-06-12 23:11:44] DEBUG -- CNTI-Helm.install.cmd: command: helm install coredns installed_cnf_files/deployments/coredns/coredns -n cnf-default
[2025-06-12 23:11:45] WARN -- CNTI-Helm.install.cmd: stderr: W0612 23:11:45.388259 920 warnings.go:70] spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
W0612 23:11:45.388321 920 warnings.go:70] would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "coredns" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "coredns" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "coredns" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "coredns" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-06-12 23:11:45] INFO -- CNTI-Helm.generate_manifest: Generating manifest from installed CNF: coredns
[2025-06-12 23:11:45] DEBUG -- CNTI-Helm.cmd: command: helm get manifest coredns --namespace cnf-default
[2025-06-12 23:11:45] INFO -- CNTI-Helm.generate_manifest: Manifest was generated successfully
[2025-06-12 23:11:45] INFO -- CNTI-CNFInstall.add_namespace_to_resources: Updating metadata.namespace field for resources in generated manifest
Waiting for resource for "coredns" deployment (1/1): [Deployment] coredns-coredns
[2025-06-12 23:11:45] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: ConfigMap, name: coredns-coredns}
[2025-06-12 23:11:45] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: Service, name: coredns-coredns}
[2025-06-12 23:11:45] DEBUG -- CNTI-CNFInstall.add_namespace_to_resources: Added cnf-default namespace for resource: {kind: Deployment, name: coredns-coredns}
[2025-06-12 23:11:45] DEBUG -- CNTI-CNFInstall.add_manifest_to_file: coredns manifest was appended into installed_cnf_files/deployments/coredns/deployment_manifest.yml file
[2025-06-12 23:11:45] DEBUG -- CNTI-CNFInstall.add_manifest_to_file: coredns manifest was appended into installed_cnf_files/common_manifest.yml file
[2025-06-12 23:11:45] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "ConfigMap", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "ClusterRole", name: "coredns-coredns", namespace: "default"}, {kind: "ClusterRoleBinding", name: "coredns-coredns", namespace: "default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}]
[2025-06-12 23:11:45] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Deployment/coredns-coredns to install
[2025-06-12 23:11:45] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:45] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:45] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:45] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 0
[2025-06-12 23:11:46] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:46] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:46] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:47] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:47] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:47] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:48] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:48] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:48] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:50] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:50] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:50] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:51] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:51] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:51] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:52] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:52] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:52] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:53] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:53] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:53] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:54] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:54] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:54] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:55] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:55] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:55] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:56] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:56] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:56] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:56] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: seconds elapsed while waiting: 10
[2025-06-12 23:11:57] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:57] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:57] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:11:59] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:11:59] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:11:59] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:12:00] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:12:00] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:12:00] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:12:01] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:12:01] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns
[2025-06-12 23:12:01] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:12:02] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready
[2025-06-12 23:12:02] DEBUG --
CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns [2025-06-12 23:12:02] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:12:03] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready [2025-06-12 23:12:03] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns [2025-06-12 23:12:03] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:12:04] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready [2025-06-12 23:12:04] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns [2025-06-12 23:12:04] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:12:05] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Deployment/coredns-coredns is ready [2025-06-12 23:12:05] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Deployment/coredns-coredns [2025-06-12 23:12:05] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns All "coredns" deployment resources are up. CNF installation complete. 
[2025-06-12 23:12:05] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Deployment/coredns-coredns is ready
[2025-06-12 23:12:05] INFO -- CNTI-Setup.cnf_install: CNF installed successfully
2025-06-12 23:16:49,678 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cert -l debug
CNF TestSuite version: v1.4.4
Compatibility, Installability & Upgradability Tests
[2025-06-12 23:12:05] INFO -- CNTI: VERSION: v1.4.4
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.Points.Results.file: Results file created: results/cnf-testsuite-results-20250612-231205-834.yml
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:12:05] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:12:05] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:12:05] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:12:05] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:12:05] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:12:05] INFO --
CNTI: check_cnf_config cnf:
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:12:05] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [increase_decrease_capacity]
[2025-06-12 23:12:05] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:12:05] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:12:05] INFO -- CNTI-CNFManager.Task.task_runner.increase_decrease_capacity: Starting test
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-06-12 23:12:05] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" =>
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... five manifests identical to the list above (ConfigMap, ClusterRole, ClusterRoleBinding, Service, Deployment), elided ...]
[2025-06-12 23:12:05] DEBUG -- CNTI-Helm.all_workload_resources: [... Deployment/coredns-coredns and Service/coredns-coredns manifests, identical to those above, elided ...]
[2025-06-12 23:12:05] INFO -- CNTI-change_capacity:resource: Deployment/coredns-coredns; namespace: cnf-default
[2025-06-12 23:12:05] INFO -- CNTI-change_capacity:capacity: Base replicas: 1; Target replicas: 3
[2025-06-12 23:12:05] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 1 replicas
[2025-06-12 23:12:05] DEBUG -- CNTI: target_replica_count: 1
[2025-06-12 23:12:06] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:06] DEBUG -- CNTI: Deployment initialized to 1
[2025-06-12 23:12:06] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 3 replicas
[2025-06-12 23:12:06] WARN -- CNTI-KubectlClient.Utils.scale.cmd: stderr: Warning:
spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
[2025-06-12 23:12:06] DEBUG -- CNTI: target_replica_count: 3
[2025-06-12 23:12:06] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:08] DEBUG -- CNTI: Time left: 58 seconds
[2025-06-12 23:12:08] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:10] DEBUG -- CNTI: Time left: 56 seconds
[2025-06-12 23:12:10] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:12] DEBUG -- CNTI: Time left: 54 seconds
[2025-06-12 23:12:12] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:14] DEBUG -- CNTI: Time left: 52 seconds
[2025-06-12 23:12:14] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:16] DEBUG -- CNTI: Time left: 50 seconds
[2025-06-12 23:12:16] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:19] DEBUG -- CNTI: Time left: 48 seconds
[2025-06-12 23:12:19] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:21] DEBUG -- CNTI: Time left: 46 seconds
[2025-06-12 23:12:21] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:23] DEBUG -- CNTI: Time left: 44 seconds
[2025-06-12 23:12:23] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:25] DEBUG -- CNTI: Time left: 41 seconds
[2025-06-12 23:12:25] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:27] DEBUG -- CNTI: Time left: 39 seconds
[2025-06-12 23:12:27] DEBUG -- CNTI: current_replicas before get Deployment: 1
[2025-06-12 23:12:29] DEBUG -- CNTI: Time left: 58 seconds
[2025-06-12 23:12:29] DEBUG -- CNTI: current_replicas before get Deployment: 3
[2025-06-12 23:12:29] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-06-12 23:12:29] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from
manifest: installed_cnf_files/common_manifest.yml
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... five manifests identical to the list above, elided ...]
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... five manifests identical to the list above, elided ...]
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... five manifests identical to the list above, elided ...]
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... five manifests identical to the list above, elided ...]
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:29] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:12:29] INFO -- CNTI-change_capacity:resource: Deployment/coredns-coredns; namespace: cnf-default [2025-06-12 23:12:29] INFO -- CNTI-change_capacity:capacity: Base replicas: 3; Target replicas: 1 [2025-06-12 23:12:29] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 3 replicas [2025-06-12 23:12:29] DEBUG -- CNTI: target_replica_count: 3 [2025-06-12 23:12:29] DEBUG -- CNTI: current_replicas before get Deployment: 3 [2025-06-12 23:12:30] DEBUG -- CNTI: Deployment initialized to 3 [2025-06-12 23:12:30] INFO -- CNTI-KubectlClient.Utils.scale: Scale Deployment/coredns-coredns to 1 replicas [2025-06-12 23:12:30] WARN -- CNTI-KubectlClient.Utils.scale.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead [2025-06-12 23:12:30] DEBUG -- CNTI: target_replica_count: 1 [2025-06-12 23:12:30] DEBUG -- CNTI: current_replicas before get Deployment: 1 ✔️ 🏆PASSED: [increase_decrease_capacity] Replicas increased to 3 and decreased to 1 📦📈📉 Compatibility, installability, and upgradeability results: 1 of 1 tests passed  State Tests [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'increase_decrease_capacity' emoji: 📦📈📉 [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'increase_decrease_capacity' tags: ["compatibility", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points: Task: 'increase_decrease_capacity' type: essential [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 
'increase_decrease_capacity' tags: ["compatibility", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points: Task: 'increase_decrease_capacity' type: essential [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.upsert_task-increase_decrease_capacity: Task start time: 2025-06-12 23:12:05 UTC, end time: 2025-06-12 23:12:30 UTC [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.upsert_task-increase_decrease_capacity: Task: 'increase_decrease_capacity' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:24.510299235 [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["increase_decrease_capacity"] for tags: ["compatibility", "cert"] [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["compatibility", "cert"] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for 
tag: compatibility [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["compatibility", "cert"] [2025-06-12 23:12:30] DEBUG -- 
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["increase_decrease_capacity"] for tags: ["compatibility", "cert"] [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["compatibility", "cert"] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["cni_compatible", "increase_decrease_capacity", "rolling_update", "rolling_downgrade", "rolling_version_change", "rollback", "deprecated_k8s_features", "helm_deploy", "helm_chart_valid", "helm_chart_published"] for tag: compatibility [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped 
tests: [] [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["compatibility", "cert"] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", 
"sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["essential"] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, 
skipped: NA: false, bonus: [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: 
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 100 points
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-06-12 23:12:30] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1900, max tasks passed: 19 for tags: ["essential"]
[2025-06-12 23:12:30] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => nil, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-06-12 23:12:30] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-06-12 23:12:30] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-06-12 23:12:30] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100}
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state
[2025-06-12 23:12:30] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:12:30] INFO -- CNTI: install litmus
[2025-06-12 23:12:30] INFO -- CNTI-KubectlClient.Apply.namespace: Create a namespace: litmus
[2025-06-12 23:12:30] INFO -- CNTI-Label.namespace: command: kubectl label namespace litmus pod-security.kubernetes.io/enforce=privileged
[2025-06-12 23:12:30] DEBUG -- CNTI-Label.namespace: output: namespace/litmus labeled
[2025-06-12 23:12:30] INFO -- CNTI: install litmus operator
[2025-06-12 23:12:30] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file https://litmuschaos.github.io/litmus/litmus-operator-v3.6.0.yaml
[2025-06-12 23:12:31] WARN -- CNTI-KubectlClient.Apply.file.cmd: stderr: Warning: resource namespaces/litmus is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "chaos-operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "chaos-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "chaos-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "chaos-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
[2025-06-12 23:12:31] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:12:31] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:12:31] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:12:31] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:12:31] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:12:31] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:12:31] INFO -- CNTI: check_cnf_config cnf:
[2025-06-12 23:12:31] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:12:31] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [node_drain]
[2025-06-12 23:12:31] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:12:31] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:12:31] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:12:31] INFO -- CNTI-CNFManager.Task.task_runner.node_drain: Starting test
[2025-06-12 23:12:31] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test
[2025-06-12 23:12:31] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-06-12 23:12:31] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list as above, elided ...] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list as above, elided ...] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical manifest list as above, elided ...] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:12:31] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:12:31] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:12:31] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:12:31] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:12:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-06-12 23:12:31] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:12:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:12:31] INFO -- CNTI: Current Resource Name: Deployment/coredns-coredns Namespace: cnf-default [2025-06-12 23:12:31] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-06-12 23:12:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:12:32] DEBUG -- CNTI-KubectlClient.Get.schedulable_nodes_list: Retrieving list of schedulable nodes [2025-06-12 23:12:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:12:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:12:33] INFO -- CNTI-KubectlClient.Get.schedulable_nodes_list: Retrieved schedulable nodes list: v131-worker, v131-worker2 [2025-06-12 23:12:33] INFO -- CNTI: Getting the operator node name: kubectl get pods -l app.kubernetes.io/instance=coredns -n cnf-default -o=jsonpath='{.items[0].spec.nodeName}' [2025-06-12 23:12:33] DEBUG -- CNTI: status_code: 0 [2025-06-12 23:12:33] INFO -- CNTI: Found node to cordon v131-worker2 using label app.kubernetes.io/instance='coredns' in cnf-default namespace. [2025-06-12 23:12:33] INFO -- CNTI-KubectlClient.Utils.cordon: Cordon node v131-worker2 [2025-06-12 23:12:33] INFO -- CNTI: Cordoned node v131-worker2 successfully. 
[2025-06-12 23:12:33] DEBUG -- CNTI-node_drain: Getting the app node name kubectl get pods -l app.kubernetes.io/instance=coredns -n cnf-default -o=jsonpath='{.items[0].spec.nodeName}' [2025-06-12 23:12:33] DEBUG -- CNTI-node_drain: status_code: 0 [2025-06-12 23:12:33] DEBUG -- CNTI-node_drain: Getting the app node name kubectl get pods -n litmus -l app.kubernetes.io/name=litmus -o=jsonpath='{.items[0].spec.nodeName}' [2025-06-12 23:12:33] DEBUG -- CNTI-node_drain: status_code: 0 [2025-06-12 23:12:33] INFO -- CNTI: Workload Node Name: v131-worker2 [2025-06-12 23:12:33] INFO -- CNTI: Litmus Node Name: v131-worker [2025-06-12 23:12:33] INFO -- CNTI: download_template url, filename: https://raw.githubusercontent.com/litmuschaos/chaos-charts/3.6.0/faults/kubernetes/node-drain/fault.yaml, node_drain_experiment.yaml [2025-06-12 23:12:33] INFO -- CNTI: chaos_manifests_path [2025-06-12 23:12:33] INFO -- CNTI: filepath: /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_experiment.yaml [2025-06-12 23:12:33] DEBUG -- CNTI-http.client: Performing request [2025-06-12 23:12:33] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_experiment.yaml [2025-06-12 23:12:34] INFO -- CNTI: download_template url, filename: https://raw.githubusercontent.com/litmuschaos/chaos-charts/2.6.0/charts/generic/node-drain/rbac.yaml, node_drain_rbac.yaml [2025-06-12 23:12:34] INFO -- CNTI: chaos_manifests_path [2025-06-12 23:12:34] INFO -- CNTI: filepath: /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_rbac.yaml [2025-06-12 23:12:34] DEBUG -- CNTI-http.client: Performing request [2025-06-12 23:12:34] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file /home/xtesting/.cnf-testsuite/tools/chaos-experiments/node_drain_rbac.yaml [2025-06-12 23:12:34] INFO -- CNTI-KubectlClient.Utils.annotate: Annotate deployment/coredns-coredns with litmuschaos.io/chaos="true" [2025-06-12 23:12:34] 
WARN -- CNTI-KubectlClient.Utils.annotate.cmd: stderr: Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "coredns" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "coredns" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "coredns" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "coredns" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") [2025-06-12 23:12:34] INFO -- CNTI-node_drain: Chaos test name: coredns-coredns-3c44c7ec; Experiment name: node-drain; Label app.kubernetes.io/instance=coredns; namespace: cnf-default [2025-06-12 23:12:34] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file installed_cnf_files/temp_files/node-drain-chaosengine.yml [2025-06-12 23:12:34] INFO -- CNTI: wait_for_test: coredns-coredns-3c44c7ec-node-drain [2025-06-12 23:12:34] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:12:35] INFO -- CNTI: status_code: 0, response: [2025-06-12 23:12:37] DEBUG -- CNTI: Time left: 1798 seconds [2025-06-12 23:12:37] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:12:37] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:12:39] DEBUG -- CNTI: Time left: 1796 seconds [2025-06-12 23:12:39] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:12:39] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 
23:12:41] DEBUG -- CNTI: Time left: 1794 seconds [2025-06-12 23:12:41] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:12:41] INFO -- CNTI: status_code: 0, response: initialized [... identical status polls repeat every ~2 seconds from 23:12:43 to 23:13:59, response: initialized throughout, elided ...] 
[2025-06-12 23:14:01] DEBUG -- CNTI: Time left: 1714 seconds [2025-06-12 23:14:01] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:01] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:03] DEBUG -- CNTI: Time left: 1712 seconds [2025-06-12 23:14:03] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:03] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:05] DEBUG -- CNTI: Time left: 1710 seconds [2025-06-12 23:14:05] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:05] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:07] DEBUG -- CNTI: Time left: 1707 seconds [2025-06-12 23:14:07] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:08] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:10] DEBUG -- CNTI: Time left: 1705 seconds [2025-06-12 23:14:10] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:10] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:12] DEBUG -- CNTI: Time left: 1703 seconds [2025-06-12 23:14:12] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:12] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:14] DEBUG -- CNTI: Time left: 1701 seconds [2025-06-12 23:14:14] INFO -- CNTI: Getting 
litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:14] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:16] DEBUG -- CNTI: Time left: 1699 seconds [2025-06-12 23:14:16] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:16] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:18] DEBUG -- CNTI: Time left: 1697 seconds [2025-06-12 23:14:18] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:18] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:20] DEBUG -- CNTI: Time left: 1695 seconds [2025-06-12 23:14:20] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:20] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:22] DEBUG -- CNTI: Time left: 1693 seconds [2025-06-12 23:14:22] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:22] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:24] DEBUG -- CNTI: Time left: 1691 seconds [2025-06-12 23:14:24] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:25] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:27] DEBUG -- CNTI: Time left: 1688 seconds [2025-06-12 23:14:27] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 
'jsonpath={.status.engineStatus}' [2025-06-12 23:14:27] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:29] DEBUG -- CNTI: Time left: 1686 seconds [2025-06-12 23:14:29] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:29] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:31] DEBUG -- CNTI: Time left: 1684 seconds [2025-06-12 23:14:31] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:31] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:33] DEBUG -- CNTI: Time left: 1682 seconds [2025-06-12 23:14:33] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:33] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:35] DEBUG -- CNTI: Time left: 1680 seconds [2025-06-12 23:14:35] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:35] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:37] DEBUG -- CNTI: Time left: 1678 seconds [2025-06-12 23:14:37] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:37] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:39] DEBUG -- CNTI: Time left: 1676 seconds [2025-06-12 23:14:39] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:39] INFO -- CNTI: status_code: 0, response: initialized 
[2025-06-12 23:14:41] DEBUG -- CNTI: Time left: 1674 seconds [2025-06-12 23:14:41] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:41] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:43] DEBUG -- CNTI: Time left: 1672 seconds [2025-06-12 23:14:43] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:44] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:46] DEBUG -- CNTI: Time left: 1669 seconds [2025-06-12 23:14:46] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:46] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:48] DEBUG -- CNTI: Time left: 1667 seconds [2025-06-12 23:14:48] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:48] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:50] DEBUG -- CNTI: Time left: 1665 seconds [2025-06-12 23:14:50] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:50] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:52] DEBUG -- CNTI: Time left: 1663 seconds [2025-06-12 23:14:52] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:52] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:54] DEBUG -- CNTI: Time left: 1661 seconds [2025-06-12 23:14:54] INFO -- CNTI: Getting 
litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:54] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:56] DEBUG -- CNTI: Time left: 1659 seconds [2025-06-12 23:14:56] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:56] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:14:58] DEBUG -- CNTI: Time left: 1657 seconds [2025-06-12 23:14:58] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:14:58] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:15:00] DEBUG -- CNTI: Time left: 1655 seconds [2025-06-12 23:15:00] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:15:00] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:15:02] DEBUG -- CNTI: Time left: 1652 seconds [2025-06-12 23:15:02] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:15:03] INFO -- CNTI: status_code: 0, response: initialized [2025-06-12 23:15:05] DEBUG -- CNTI: Time left: 1650 seconds [2025-06-12 23:15:05] INFO -- CNTI: Getting litmus status info: kubectl get chaosengine.litmuschaos.io coredns-coredns-3c44c7ec -n cnf-default -o 'jsonpath={.status.engineStatus}' [2025-06-12 23:15:05] INFO -- CNTI: status_code: 0, response: completed [2025-06-12 23:15:05] INFO -- CNTI: Getting litmus status info: kubectl get chaosresults.litmuschaos.io coredns-coredns-3c44c7ec-node-drain -n cnf-default -o 'jsonpath={.status.experimentStatus.verdict}' [2025-06-12 
23:15:05] INFO -- CNTI: status_code: 0, response: Pass [2025-06-12 23:15:05] INFO -- CNTI: Getting litmus status info: kubectl get chaosresult.litmuschaos.io coredns-coredns-3c44c7ec-node-drain -n cnf-default -o 'jsonpath={.status.experimentStatus.verdict}' [2025-06-12 23:15:05] INFO -- CNTI: status_code: 0, response: Pass [2025-06-12 23:15:05] INFO -- CNTI-KubectlClient.Utils.uncordon: Uncordon node v131-worker2 ✔️ 🏆PASSED: [node_drain] node_drain chaos test passed 🗡️💀♻ State results: 1 of 1 tests passed  Security Tests [2025-06-12 23:15:05] INFO -- CNTI: Uncordoned node v131-worker2 successfully. [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'node_drain' emoji: 🗡️💀♻ [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'node_drain' tags: ["state", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points: Task: 'node_drain' type: essential [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'node_drain' tags: ["state", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points: Task: 'node_drain' type: essential [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.upsert_task-node_drain: Task start time: 2025-06-12 23:12:31 UTC, end time: 2025-06-12 23:15:05 UTC [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.upsert_task-node_drain: Task: 'node_drain' has status: 'passed' and is awarded: 100 points. Runtime: 00:02:33.825943208 [2025-06-12 23:15:05] DEBUG -- 
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["node_drain"] for tags: ["state", "cert"] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["state", "cert"] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["no_local_volume_configuration", "elastic_volumes", "database_persistence", "node_drain"] for tag: state [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", 
"pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["state", "cert"] [... the identical ["state", "cert"] points computation is logged a second time ...] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["state", 
"cert"] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["essential"] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: [] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", 
"pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: [] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for 
task: sig_term_handled [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:15:05] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA 
status assigned for task: non_root_containers [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:05] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 100 points [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1900, max tasks passed: 19 for tags: ["essential"] [2025-06-12 23:15:05] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", 
"testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:15:05] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:15:05] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:15:05] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", 
"immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:05] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:05] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:05] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:05] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:05] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [privileged_containers] [2025-06-12 23:15:05] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.Task.task_runner.privileged_containers: Starting test [2025-06-12 
23:15:05] DEBUG -- CNTI: white_list_container_names [] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:05] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [same manifest list as for kind: Deployment above; duplicate dump elided]
[2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [duplicate dump elided]
[2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [duplicate dump elided]
[2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [duplicate dump elided]
[2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [duplicate dump elided]
[2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [duplicate dump elided]
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:05] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:05] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:15:05] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:15:05] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-06-12 23:15:05] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:15:05] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:05] DEBUG -- CNTI-KubectlClient.Get.privileged_containers: Get privileged containers [2025-06-12 23:15:05] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:06] INFO -- CNTI-KubectlClient.Get.privileged_containers: Found 8 privileged containers [2025-06-12 23:15:06] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:15:06] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [privileged_containers] No privileged containers 🔓🔑 [2025-06-12 23:15:06] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-06-12 23:15:06] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:15:06] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-06-12 23:15:06] DEBUG -- CNTI: violator list: [] [2025-06-12 23:15:06] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'privileged_containers' emoji: 🔓🔑 [2025-06-12 23:15:06] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'privileged_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:06] DEBUG -- CNTI-CNFManager.Points: Task: 'privileged_containers' type: essential [2025-06-12 23:15:06] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:15:06] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'privileged_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:06] DEBUG -- CNTI-CNFManager.Points: Task: 'privileged_containers' type: essential [2025-06-12 23:15:06] DEBUG -- 
CNTI-CNFManager.Points.upsert_task-privileged_containers: Task start time: 2025-06-12 23:15:05 UTC, end time: 2025-06-12 23:15:06 UTC [2025-06-12 23:15:06] INFO -- CNTI-CNFManager.Points.upsert_task-privileged_containers: Task: 'privileged_containers' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.556169324 [2025-06-12 23:15:06] INFO -- CNTI-Setup.kubescape_framework_download: Downloading Kubescape testing framework [2025-06-12 23:15:06] DEBUG -- CNTI-http.client: Performing request [2025-06-12 23:15:06] DEBUG -- CNTI-http.client: Performing request [2025-06-12 23:15:06] DEBUG -- CNTI-Setup.kubescape_framework_download: Downloaded Kubescape framework json [2025-06-12 23:15:06] INFO -- CNTI-Setup.kubescape_framework_download: Kubescape framework json has been downloaded [2025-06-12 23:15:06] INFO -- CNTI-Setup.install_kubescape: Installing Kubescape tool [2025-06-12 23:15:06] DEBUG -- CNTI-http.client: Performing request [2025-06-12 23:15:07] DEBUG -- CNTI-http.client: Performing request [2025-06-12 23:15:10] DEBUG -- CNTI-Setup.install_kubescape: Downloaded Kubescape binary [2025-06-12 23:15:10] INFO -- CNTI-ShellCmd.run: command: chmod +x /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape [2025-06-12 23:15:10] DEBUG -- CNTI-ShellCmd.run: output: [2025-06-12 23:15:10] INFO -- CNTI-Setup.install_kubescape: Kubescape tool has been installed [2025-06-12 23:15:10] INFO -- CNTI-Setup.kubescape_scan: Perform Kubescape cluster scan [2025-06-12 23:15:10] INFO -- CNTI: scan command: /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape scan framework nsa --use-from /home/xtesting/.cnf-testsuite/tools/kubescape/nsa.json --output kubescape_results.json --format json --format-version=v1 --exclude-namespaces kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite [2025-06-12 23:15:17] INFO -- CNTI: output: ────────────────────────────────────────────────── Framework scanned: NSA ┌─────────────────┬────┐ │ Controls │ 25 │ │ 
Passed │ 11 │ │ Failed │ 9 │ │ Action Required │ 5 │ └─────────────────┴────┘ Failed resources by severity: ┌──────────┬────┐ │ Critical │ 0 │ │ High │ 0 │ │ Medium │ 11 │ │ Low │ 1 │ └──────────┴────┘ Run with '--verbose'/'-v' to see control failures for each resource. ┌──────────┬────────────────────────────────────────────────────┬──────────────────┬───────────────┬────────────────────┐ │ Severity │ Control name │ Failed resources │ All Resources │ Compliance score │ ├──────────┼────────────────────────────────────────────────────┼──────────────────┼───────────────┼────────────────────┤ │ Critical │ Disable anonymous access to Kubelet service │ 0 │ 0 │ Action Required ** │ │ Critical │ Enforce Kubelet client TLS authentication │ 0 │ 0 │ Action Required ** │ │ Medium │ Prevent containers from allowing command execution │ 2 │ 19 │ 89% │ │ Medium │ Non-root containers │ 1 │ 1 │ 0% │ │ Medium │ Allow privilege escalation │ 1 │ 1 │ 0% │ │ Medium │ Ingress and Egress blocked │ 1 │ 1 │ 0% │ │ Medium │ Automatic mapping of service account │ 3 │ 4 │ 25% │ │ Medium │ Administrative Roles │ 1 │ 19 │ 95% │ │ Medium │ Cluster internal networking │ 1 │ 2 │ 50% │ │ Medium │ Linux hardening │ 1 │ 1 │ 0% │ │ Medium │ Secret/etcd encryption enabled │ 0 │ 0 │ Action Required * │ │ Medium │ Audit logs enabled │ 0 │ 0 │ Action Required * │ │ Low │ Immutable container filesystem │ 1 │ 1 │ 0% │ │ Low │ PSP enabled │ 0 │ 0 │ Action Required * │ ├──────────┼────────────────────────────────────────────────────┼──────────────────┼───────────────┼────────────────────┤ │ │ Resource Summary │ 6 │ 28 │ 54.37% │ └──────────┴────────────────────────────────────────────────────┴──────────────────┴───────────────┴────────────────────┘ 🚨 * failed to get cloud provider, cluster: kind-v131 🚨 ** This control is scanned exclusively by the Kubescape operator, not the Kubescape CLI. Install the Kubescape operator: https://kubescape.io/docs/install-operator/. 
[2025-06-12 23:15:17] INFO -- CNTI: stderr: {"level":"info","ts":"2025-06-12T23:15:10Z","msg":"Kubescape scanner initializing..."} {"level":"warn","ts":"2025-06-12T23:15:12Z","msg":"Deprecated format version","run":"--format-version=v2"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Initialized scanner"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Loading policies..."} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Loaded policies"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Loading exceptions..."} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Loaded exceptions"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Loading account configurations..."} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Loaded account configurations"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Accessing Kubernetes objects..."} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Accessed Kubernetes objects"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Scanning","Cluster":"kind-v131"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Done scanning","Cluster":"kind-v131"} {"level":"info","ts":"2025-06-12T23:15:16Z","msg":"Done aggregating results"} {"level":"info","ts":"2025-06-12T23:15:17Z","msg":"Scan results saved","filename":"kubescape_results.json"} Overall compliance-score (100- Excellent, 0- All failed): 54 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 
23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [non_root_containers] Failed resource: Deployment coredns-coredns in cnf-default namespace Remediation: If your application does not need root privileges, make sure to define runAsNonRoot as true or explicitly set the runAsUser using ID 1000 or higher under the PodSecurityContext or container securityContext. In addition, set an explicit value for runAsGroup using ID 1000 or higher. ✖️ 🏆FAILED: [non_root_containers] Found containers running with root user or user with root group membership 🔓🔑 [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.task_runner.non_root_containers: Starting test [2025-06-12 23:15:17] INFO -- CNTI: kubescape parse [2025-06-12 23:15:17] INFO -- CNTI: kubescape test_by_test_name [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" 
=> "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", 
"app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => 
"HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-06-12 23:15:17] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" =>
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'non_root_containers' emoji: 🔓🔑 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'non_root_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points: Task: 'non_root_containers' type: essential [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 0 points [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'non_root_containers' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points: Task: 'non_root_containers' type: essential [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.upsert_task-non_root_containers: Task start time: 2025-06-12 23:15:17 UTC, end time: 2025-06-12 23:15:17 UTC [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Points.upsert_task-non_root_containers: Task: 'non_root_containers' has status: 'failed' and is awarded: 0 points.Runtime: 00:00:00.041986275 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:17] DEBUG -- 
CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [cpu_limits] ✔️ 🏆PASSED: [cpu_limits] Containers have CPU limits set 🔓🔑 [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.task_runner.cpu_limits: Starting test [2025-06-12 23:15:17] INFO -- CNTI: kubescape parse [2025-06-12 23:15:17] INFO -- CNTI: kubescape test_by_test_name [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" =>
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'cpu_limits' emoji: 🔓🔑 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'cpu_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points: Task: 'cpu_limits' type: essential [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'cpu_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points: Task: 'cpu_limits' type: essential [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.upsert_task-cpu_limits: Task start 
time: 2025-06-12 23:15:17 UTC, end time: 2025-06-12 23:15:17 UTC [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Points.upsert_task-cpu_limits: Task: 'cpu_limits' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.024339633 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [memory_limits] ✔️ 🏆PASSED: [memory_limits] Containers have memory limits set 🔓🔑 [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.task_runner.memory_limits: Starting test [2025-06-12 23:15:17] INFO -- CNTI: kubescape parse [2025-06-12 23:15:17] INFO -- CNTI: kubescape test_by_test_name [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:17] DEBUG -- 
CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:17] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'memory_limits' emoji: 🔓🔑 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'memory_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points: Task: 'memory_limits' type: essential [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'memory_limits' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points: Task: 'memory_limits' type: essential [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Points.upsert_task-memory_limits: Task start time: 2025-06-12 23:15:17 UTC, end time: 2025-06-12 23:15:17 UTC [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Points.upsert_task-memory_limits: Task: 'memory_limits' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.022113805 [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:17] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 
23:15:17] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hostpath_mounts] [2025-06-12 23:15:17] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:17] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:17] INFO -- CNTI-CNFManager.Task.task_runner.hostpath_mounts: Starting test [2025-06-12 23:15:17] INFO -- CNTI: scan command: /home/xtesting/.cnf-testsuite/tools/kubescape/kubescape scan control C-0048 --output kubescape_C-0048_results.json --format json --format-version=v1 --exclude-namespaces kube-system,kube-public,kube-node-lease,local-path-storage,litmus,cnf-testsuite ✔️ 🏆PASSED: [hostpath_mounts] Containers do not have hostPath mounts 🔓🔑 [2025-06-12 23:15:22] INFO -- CNTI: output:
──────────────────────────────────────────────────
┌─────────────────┬───┐
│ Controls        │ 1 │
│ Passed          │ 1 │
│ Failed          │ 0 │
│ Action Required │ 0 │
└─────────────────┴───┘
Failed resources by severity:
┌──────────┬───┐
│ Critical │ 0 │
│ High     │ 0 │
│ Medium   │ 0 │
│ Low      │ 0 │
└──────────┴───┘
Run with '--verbose'/'-v' to see control failures for each resource.
┌──────────┬──────────────────┬──────────────────┬───────────────┬──────────────────┐
│ Severity │ Control name     │ Failed resources │ All Resources │ Compliance score │
├──────────┼──────────────────┼──────────────────┼───────────────┼──────────────────┤
│ High     │ HostPath mount   │ 0                │ 1             │ 100%             │
├──────────┼──────────────────┼──────────────────┼───────────────┼──────────────────┤
│          │ Resource Summary │ 0                │ 1             │ 100.00%          │
└──────────┴──────────────────┴──────────────────┴───────────────┴──────────────────┘
[2025-06-12 23:15:22] INFO -- CNTI: stderr:
{"level":"info","ts":"2025-06-12T23:15:17Z","msg":"Kubescape scanner initializing..."}
{"level":"warn","ts":"2025-06-12T23:15:18Z","msg":"Deprecated format version","run":"--format-version=v2"}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Initialized scanner"}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Loading policies..."}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Loaded policies"}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Loading exceptions..."}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Loaded exceptions"}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Loading account configurations..."}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Loaded account configurations"}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Accessing Kubernetes objects..."}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Accessed Kubernetes objects"}
{"level":"info","ts":"2025-06-12T23:15:21Z","msg":"Scanning","Cluster":"kind-v131"}
{"level":"info","ts":"2025-06-12T23:15:22Z","msg":"Done scanning","Cluster":"kind-v131"}
{"level":"info","ts":"2025-06-12T23:15:22Z","msg":"Done aggregating results"}
{"level":"info","ts":"2025-06-12T23:15:22Z","msg":"Scan results saved","filename":"kubescape_C-0048_results.json"}
Overall compliance-score (100- Excellent, 0- All failed): 100
{"level":"info","ts":"2025-06-12T23:15:22Z","msg":"Run with '--verbose'/'-v' flag for detailed resources view\n"}
{"level":"info","ts":"2025-06-12T23:15:22Z","msg":"Received interrupt signal, exiting..."} [2025-06-12 23:15:22] INFO -- CNTI: kubescape parse [2025-06-12 23:15:22] INFO -- CNTI: kubescape test_by_test_name [2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-06-12 23:15:22] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-06-12 23:15:22] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" =>
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}]
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'hostpath_mounts' emoji: 🔓🔑
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostpath_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Points: Task: 'hostpath_mounts' type: essential
[2025-06-12 23:15:22] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostpath_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Points: Task: 'hostpath_mounts' type: essential
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Points.upsert_task-hostpath_mounts: Task start time: 2025-06-12 23:15:17 UTC, end time: 2025-06-12 23:15:22 UTC
[2025-06-12 23:15:22] INFO -- CNTI-CNFManager.Points.upsert_task-hostpath_mounts: Task: 'hostpath_mounts' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:04.835595351
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:15:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:22] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:15:22] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:15:22] INFO -- CNTI: check_cnf_config cnf:
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:22] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [container_sock_mounts]
[2025-06-12 23:15:22] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:22] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:22] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:15:22] INFO -- CNTI-CNFManager.Task.task_runner.container_sock_mounts: Starting test
[2025-06-12 23:15:22] DEBUG -- CNTI-http.client: Performing request
[2025-06-12 23:15:22] DEBUG -- CNTI-http.client: Performing request
[2025-06-12 23:15:23] INFO -- CNTI: TarClient.untar command: tar -xvf /tmp/kyvernorfkia0fq.tar.gz -C /home/xtesting/.cnf-testsuite/tools
[2025-06-12 23:15:24] INFO -- CNTI: TarClient.untar output: LICENSE kyverno
[2025-06-12 23:15:24] INFO -- CNTI: TarClient.untar stderr:
[2025-06-12 23:15:24] INFO -- CNTI: GitClient.clone command: --branch release-1.9 https://github.com/kyverno/policies.git /home/xtesting/.cnf-testsuite/tools/kyverno-policies
[2025-06-12 23:15:25] INFO -- CNTI: GitClient.clone output:
[2025-06-12 23:15:25] INFO -- CNTI: GitClient.clone stderr: Cloning into '/home/xtesting/.cnf-testsuite/tools/kyverno-policies'...
[2025-06-12 23:15:25] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml
[2025-06-12 23:15:25] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml
[2025-06-12 23:15:25] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_cri_sock_mount/disallow_cri_sock_mount.yaml --cluster --policy-report
✔️ 🏆PASSED: [container_sock_mounts] Container engine daemon sockets are not mounted as volumes 🔓🔑
[2025-06-12 23:15:27] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 3 policy rules to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-docker-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-crio-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-docker-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'autogen-validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-docker-sock-mount' passed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
- message: Use of the Containerd Unix socket is not allowed.
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'autogen-validate-crio-sock-mount' passed. 
policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-docker-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-containerd-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: validation rule 'validate-crio-sock-mount' passed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Docker Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-validate-docker-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the Containerd Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-validate-containerd-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127 - message: Use of the CRI-O Unix socket is not allowed. 
policy: disallow-container-sock-mounts resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-validate-crio-sock-mount scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770127
summary: error: 0 fail: 0 pass: 84 skip: 168 warn: 0
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'container_sock_mounts' emoji: 🔓🔑
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'container_sock_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.Points: Task: 'container_sock_mounts' type: essential
[2025-06-12 23:15:27] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'container_sock_mounts' tags: ["security", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.Points: Task: 'container_sock_mounts' type: essential
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.Points.upsert_task-container_sock_mounts: Task start time: 2025-06-12 23:15:22 UTC, end time: 2025-06-12 23:15:27 UTC
[2025-06-12 23:15:27] INFO -- CNTI-CNFManager.Points.upsert_task-container_sock_mounts: Task: 'container_sock_mounts' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:05.891098772
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:27] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:15:27] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:27] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:27] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:15:27] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:15:27] INFO -- CNTI: check_cnf_config cnf:
[2025-06-12 23:15:27] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:27] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [selinux_options]
[2025-06-12 23:15:28] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:28] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:28] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:15:28] INFO -- CNTI-CNFManager.Task.task_runner.selinux_options: Starting test
[2025-06-12 23:15:28] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml
[2025-06-12 23:15:28] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml
[2025-06-12 23:15:28] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/custom-kyverno-policies/check-selinux-enabled.yml --cluster --policy-report
[2025-06-12 23:15:29] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 1 policy rule to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: validation rule 'selinux-option' passed.
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: 
create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'autogen-selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. 
policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: validation rule 'selinux-option' passed. policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129 - message: SELinux is enabled policy: check-selinux-enablement resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 
230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: selinux-option scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770129
- message: validation rule 'autogen-selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: DaemonSet
    name: kube-proxy
    namespace: kube-system
    uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8
  result: pass
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: DaemonSet
    name: kube-proxy
    namespace: kube-system
    uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-scheduler-v131-control-plane
    namespace: kube-system
    uid: 3f927471-e090-4306-8d16-beeadb56c074
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-scheduler-v131-control-plane
    namespace: kube-system
    uid: 3f927471-e090-4306-8d16-beeadb56c074
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-scheduler-v131-control-plane
    namespace: kube-system
    uid: 3f927471-e090-4306-8d16-beeadb56c074
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: DaemonSet
    name: cluster-tools
    namespace: cnf-testsuite
    uid: 76cde099-8c9c-49c3-8306-e09e0fe80397
  result: skip
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'autogen-selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: DaemonSet
    name: cluster-tools
    namespace: cnf-testsuite
    uid: 76cde099-8c9c-49c3-8306-e09e0fe80397
  result: pass
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: DaemonSet
    name: cluster-tools
    namespace: cnf-testsuite
    uid: 76cde099-8c9c-49c3-8306-e09e0fe80397
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-wkzpj
    namespace: cnf-testsuite
    uid: b7099310-268a-4ce2-884c-bba6b23e6bcb
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-wkzpj
    namespace: cnf-testsuite
    uid: b7099310-268a-4ce2-884c-bba6b23e6bcb
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-wkzpj
    namespace: cnf-testsuite
    uid: b7099310-268a-4ce2-884c-bba6b23e6bcb
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-m6zbj
    namespace: cnf-testsuite
    uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-m6zbj
    namespace: cnf-testsuite
    uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: cluster-tools-m6zbj
    namespace: cnf-testsuite
    uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: chaos-operator-ce-586f75ccbc-vwtrz
    namespace: litmus
    uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: chaos-operator-ce-586f75ccbc-vwtrz
    namespace: litmus
    uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: chaos-operator-ce-586f75ccbc-vwtrz
    namespace: litmus
    uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: chaos-operator-ce
    namespace: litmus
    uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4
  result: skip
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'autogen-selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: chaos-operator-ce
    namespace: litmus
    uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4
  result: pass
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: chaos-operator-ce
    namespace: litmus
    uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: local-path-provisioner
    namespace: local-path-storage
    uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1
  result: skip
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'autogen-selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: local-path-provisioner
    namespace: local-path-storage
    uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1
  result: pass
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: local-path-provisioner
    namespace: local-path-storage
    uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: local-path-provisioner-5cb96f8fdd-764fw
    namespace: local-path-storage
    uid: 05523bae-4efa-4c4b-9a03-c582f83c184b
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: local-path-provisioner-5cb96f8fdd-764fw
    namespace: local-path-storage
    uid: 05523bae-4efa-4c4b-9a03-c582f83c184b
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: local-path-provisioner-5cb96f8fdd-764fw
    namespace: local-path-storage
    uid: 05523bae-4efa-4c4b-9a03-c582f83c184b
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: coredns-coredns
    namespace: cnf-default
    uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02
  result: skip
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'autogen-selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: coredns-coredns
    namespace: cnf-default
    uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02
  result: pass
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: coredns-coredns
    namespace: cnf-default
    uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: validation rule 'selinux-option' passed.
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-coredns-844775b496-pkwkj
    namespace: cnf-default
    uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b
  result: pass
  rule: selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-coredns-844775b496-pkwkj
    namespace: cnf-default
    uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b
  result: skip
  rule: autogen-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
- message: SELinux is enabled
  policy: check-selinux-enablement
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-coredns-844775b496-pkwkj
    namespace: cnf-default
    uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b
  result: skip
  rule: autogen-cronjob-selinux-option
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770129
summary:
  error: 0
  fail: 0
  pass: 28
  skip: 56
  warn: 0
[2025-06-12 23:15:29] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml
[2025-06-12 23:15:29] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml
[2025-06-12 23:15:29] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/pod-security/baseline/disallow-selinux/disallow-selinux.yaml --cluster --policy-report
⏭️ 🏆N/A: [selinux_options] Pods are not using SELinux 🔓🔑
Security results: 5 of 6 tests passed

Configuration Tests
[2025-06-12 23:15:32] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-h9krb
    namespace: kube-system
    uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-h9krb
    namespace: kube-system
    uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-h9krb
    namespace: kube-system
    uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-h9krb
    namespace: kube-system
    uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-h9krb
    namespace: kube-system
    uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-h9krb
    namespace: kube-system
    uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-apiserver-v131-control-plane
    namespace: kube-system
    uid: d1edeefc-17cb-42e7-8718-bf5eef097e79
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-6fc9z
    namespace: kube-system
    uid: 52c42ac4-f9f2-4126-872c-99105ee224ef
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-6fc9z
    namespace: kube-system
    uid: 52c42ac4-f9f2-4126-872c-99105ee224ef
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-6fc9z
    namespace: kube-system
    uid: 52c42ac4-f9f2-4126-872c-99105ee224ef
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-6fc9z
    namespace: kube-system
    uid: 52c42ac4-f9f2-4126-872c-99105ee224ef
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-6fc9z
    namespace: kube-system
    uid: 52c42ac4-f9f2-4126-872c-99105ee224ef
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kube-proxy-6fc9z
    namespace: kube-system
    uid: 52c42ac4-f9f2-4126-872c-99105ee224ef
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-gj76m
    namespace: kube-system
    uid: 74f85062-9b41-4704-abc0-f4224a00b81b
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-gj76m
    namespace: kube-system
    uid: 74f85062-9b41-4704-abc0-f4224a00b81b
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-gj76m
    namespace: kube-system
    uid: 74f85062-9b41-4704-abc0-f4224a00b81b
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-gj76m
    namespace: kube-system
    uid: 74f85062-9b41-4704-abc0-f4224a00b81b
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-gj76m
    namespace: kube-system
    uid: 74f85062-9b41-4704-abc0-f4224a00b81b
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-gj76m
    namespace: kube-system
    uid: 74f85062-9b41-4704-abc0-f4224a00b81b
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kindnet-f9g2x
    namespace: kube-system
    uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kindnet-f9g2x
    namespace: kube-system
    uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kindnet-f9g2x
    namespace: kube-system
    uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kindnet-f9g2x
    namespace: kube-system
    uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kindnet-f9g2x
    namespace: kube-system
    uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: kindnet-f9g2x
    namespace: kube-system
    uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-n85ww
    namespace: kube-system
    uid: 80939e37-82e4-43bc-bf2a-10d970877f1f
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-n85ww
    namespace: kube-system
    uid: 80939e37-82e4-43bc-bf2a-10d970877f1f
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-n85ww
    namespace: kube-system
    uid: 80939e37-82e4-43bc-bf2a-10d970877f1f
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-n85ww
    namespace: kube-system
    uid: 80939e37-82e4-43bc-bf2a-10d970877f1f
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-n85ww
    namespace: kube-system
    uid: 80939e37-82e4-43bc-bf2a-10d970877f1f
  result: skip
  rule: autogen-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-n85ww
    namespace: kube-system
    uid: 80939e37-82e4-43bc-bf2a-10d970877f1f
  result: skip
  rule: autogen-cronjob-selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-type' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-p2hcc
    namespace: kube-system
    uid: 300b6a06-1003-4238-a281-72a70413b29b
  result: pass
  rule: selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: validation rule 'selinux-user-role' passed.
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-p2hcc
    namespace: kube-system
    uid: 300b6a06-1003-4238-a281-72a70413b29b
  result: pass
  rule: selinux-user-role
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-p2hcc
    namespace: kube-system
    uid: 300b6a06-1003-4238-a281-72a70413b29b
  result: skip
  rule: autogen-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t).
  policy: disallow-selinux
  resources:
  - apiVersion: v1
    kind: Pod
    name: create-loop-devs-p2hcc
    namespace: kube-system
    uid: 300b6a06-1003-4238-a281-72a70413b29b
  result: skip
  rule: autogen-cronjob-selinux-type
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770132
- message: Setting the SELinux user or role is forbidden.
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. 
The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, , spec.initContainers[*].securityContext.seLinuxOptions, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-type' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). 
policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'autogen-selinux-user-role' passed. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-type' passed. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: validation rule 'selinux-user-role' passed. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux type is restricted. The fields spec.securityContext.seLinuxOptions.type, spec.containers[*].securityContext.seLinuxOptions.type, spec.initContainers[*].securityContext.seLinuxOptions.type, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.type must either be unset or set to one of the allowed values (container_t, container_init_t, or container_kvm_t). policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-selinux-type scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. 
The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 - message: Setting the SELinux user or role is forbidden. The fields spec.securityContext.seLinuxOptions.user, spec.securityContext.seLinuxOptions.role, spec.containers[*].securityContext.seLinuxOptions.user, spec.containers[*].securityContext.seLinuxOptions.role, spec.initContainers[*].securityContext.seLinuxOptions.user, spec.initContainers[*].securityContext.seLinuxOptions.role, spec.ephemeralContainers[*].securityContext.seLinuxOptions.user, and spec.ephemeralContainers[*].securityContext.seLinuxOptions.role must be unset. 
policy: disallow-selinux resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-selinux-user-role scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770132 summary: error: 0 fail: 0 pass: 56 skip: 112 warn: 0 [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'selinux_options' emoji: 🔓🔑 [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'selinux_options' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points: Task: 'selinux_options' type: essential [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: selinux_options is worth: 0 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'selinux_options' tags: ["security", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points: Task: 'selinux_options' type: essential [2025-06-12 23:15:32] DEBUG -- 
CNTI-CNFManager.Points.upsert_task-selinux_options: Task start time: 2025-06-12 23:15:28 UTC, end time: 2025-06-12 23:15:32 UTC [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.upsert_task-selinux_options: Task: 'selinux_options' has status: 'na' and is awarded: 0 points.Runtime: 00:00:04.511819036 [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", "linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "container_sock_mounts", "selinux_options"] for tags: ["security", "cert"] [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 500, total tasks passed: 5 for tags: ["security", "cert"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["privilege_escalation", "symlink_file_system", "application_credentials", "host_network", "service_account_mapping", "privileged_containers", "non_root_containers", "host_pid_ipc_privileges", 
"linux_hardening", "cpu_limits", "memory_limits", "immutable_file_systems", "hostpath_mounts", "ingress_egress_blocked", "insecure_capabilities", "sysctls", "container_sock_mounts", "external_ips", "selinux_options"] for tag: security [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers 
is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA 
status assigned for task: container_sock_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 600, max tasks passed: 6 for tags: ["security", "cert"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", 
"increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 700, total tasks passed: 7 for tags: ["essential"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", 
"elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, 
skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers 
-> failed: true, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: 
NA status assigned for task: log_output [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-06-12 23:15:32] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => 
"passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-06-12 23:15:32] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-06-12 23:15:32] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", 
"status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}]} [2025-06-12 23:15:32] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => 
"na", "type" => "essential", "points" => 0}], "maximum_points" => 600} [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:32] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:32] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:32] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:32] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:32] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [hostport_not_used] [2025-06-12 23:15:32] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: 
["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.Task.task_runner.hostport_not_used: Starting test [2025-06-12 23:15:32] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:32] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [identical manifest list as above; duplicate dump elided]
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [identical manifest list as above; duplicate dump elided]
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [identical manifest list as above; duplicate dump elided]
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" =>
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}]
[2025-06-12 23:15:32] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}]
[2025-06-12 23:15:32] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}]
[2025-06-12 23:15:32] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns
[2025-06-12 23:15:32] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns
[2025-06-12 23:15:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns
[2025-06-12 23:15:32] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns
[2025-06-12 23:15:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:15:32] INFO -- CNTI-hostport_not_used: hostport_not_used resource: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}
[2025-06-12 23:15:32] INFO -- CNTI-hostport_not_used: resource kind: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}
[2025-06-12 23:15:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: resource: {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"annotations" => {"deployment.kubernetes.io/revision" => "1", "litmuschaos.io/chaos" => "true", "meta.helm.sh/release-name" => "coredns", "meta.helm.sh/release-namespace" => "cnf-default"}, "creationTimestamp" => "2025-06-12T23:11:45Z", "generation" => 4, "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/name" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS"}, "name" => "coredns-coredns", "namespace" => "cnf-default", "resourceVersion" => "421107", "uid" => "2efdd50d-06a5-4065-849a-f8fb99a73c02"}, "spec" => {"progressDeadlineSeconds" => 600, "replicas" => 1, "revisionHistoryLimit" => 10, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}}, "strategy" => {"rollingUpdate" => {"maxSurge" => "25%", "maxUnavailable" => 1}, "type" => "RollingUpdate"}, "template" => {"metadata" => {"annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}, "creationTimestamp" => nil, "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}}, "spec" => {"containers" => [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], "readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}]}], "dnsPolicy" => "Default", "restartPolicy" => "Always", "schedulerName" => "default-scheduler", "securityContext" => {}, "serviceAccount" => "default", "serviceAccountName" => "default", "terminationGracePeriodSeconds" => 30, "volumes" => [{"configMap" => {"defaultMode" => 420, "items" => [{"key" => "Corefile", "path" => "Corefile"}], "name" => "coredns-coredns"}, "name" => "config-volume"}]}}}, "status" => {"availableReplicas" => 1, "conditions" => [{"lastTransitionTime" => "2025-06-12T23:11:45Z", "lastUpdateTime" => "2025-06-12T23:12:05Z", "message" => "ReplicaSet \"coredns-coredns-844775b496\" has successfully progressed.", "reason" => "NewReplicaSetAvailable", "status" => "True", "type" => "Progressing"}, {"lastTransitionTime" => "2025-06-12T23:12:26Z", "lastUpdateTime" => "2025-06-12T23:12:26Z", "message" => "Deployment has minimum availability.", "reason" => "MinimumReplicasAvailable", "status" => "True", "type" => "Available"}], "observedGeneration" => 4, "readyReplicas" => 1, "replicas" => 1, "updatedReplicas" => 1}}
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: containers: [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], "readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}]}]
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: single_port: {"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: DAS hostPort: 
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: single_port: {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: DAS hostPort: 
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns
[2025-06-12 23:15:33] INFO -- CNTI-hostport_not_used: 
hostport_not_used resource: {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}
[2025-06-12 23:15:33] INFO -- CNTI-hostport_not_used: resource kind: {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}
[2025-06-12 23:15:33] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Service/coredns-coredns
✔️ 🏆PASSED: [hostport_not_used] HostPort is not used
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: resource: {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"annotations" => {"meta.helm.sh/release-name" => "coredns", "meta.helm.sh/release-namespace" => "cnf-default"}, "creationTimestamp" => "2025-06-12T23:11:45Z", "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/name" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS"}, "name" => "coredns-coredns", "namespace" => "cnf-default", "resourceVersion" => "420706", "uid" => "54307240-ae7c-4a79-a9f4-bdd3b87086a7"}, "spec" => {"clusterIP" => "10.96.19.182", "clusterIPs" => ["10.96.19.182"], "internalTrafficPolicy" => "Cluster", "ipFamilies" => ["IPv4"], "ipFamilyPolicy" => "SingleStack", "ports" => [{"name" => "udp-53", "port" => 53, "protocol" => "UDP", "targetPort" => 53}, {"name" => "tcp-53", "port" => 53, "protocol" => "TCP", "targetPort" => 53}], "selector" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"}, "sessionAffinity" => "None", "type" => "ClusterIP"}, "status" => {"loadBalancer" => {}}}
[2025-06-12 23:15:33] DEBUG -- CNTI-hostport_not_used: containers: 
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostport_not_used' tags: ["configuration", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points: Task: 'hostport_not_used' type: essential
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hostport_not_used' tags: ["configuration", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points: Task: 'hostport_not_used' type: essential
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points.upsert_task-hostport_not_used: Task start time: 2025-06-12 23:15:32 UTC, end time: 2025-06-12 23:15:33 UTC
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Points.upsert_task-hostport_not_used: Task: 'hostport_not_used' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.471887282
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:33] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:15:33] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:15:33] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:15:33] INFO -- CNTI: check_cnf_config cnf: 
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:33] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [hardcoded_ip_addresses_in_k8s_runtime_configuration]
[2025-06-12 23:15:33] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task 
with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Task.task_runner.hardcoded_ip_addresses_in_k8s_runtime_configuration: Starting test
[2025-06-12 23:15:33] DEBUG -- CNTI: helm_v3?: BuildInfo{Version:"v3.17.0", GitCommit:"301108edc7ac2a8ba79e4ebf5701b0b6ce6a31e4", GitTreeState:"clean", GoVersion:"go1.23.4"
[2025-06-12 23:15:33] DEBUG -- CNTI: Helm Path: helm
[2025-06-12 23:15:33] INFO -- CNTI-KubectlClient.Delete.resource: Delete resource namespace/hardcoded-ip-test
✔️ 🏆PASSED: [hardcoded_ip_addresses_in_k8s_runtime_configuration] No hard-coded IP addresses found in the runtime K8s configuration
[2025-06-12 23:15:33] WARN -- CNTI-KubectlClient.Delete.resource.cmd: stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. Error from server (NotFound): namespaces "hardcoded-ip-test" not found
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' tags: ["configuration", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' type: essential
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' tags: ["configuration", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' type: essential
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Points.upsert_task-hardcoded_ip_addresses_in_k8s_runtime_configuration: Task start time: 2025-06-12 23:15:33 UTC, end time: 2025-06-12 23:15:33 UTC
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Points.upsert_task-hardcoded_ip_addresses_in_k8s_runtime_configuration: Task: 'hardcoded_ip_addresses_in_k8s_runtime_configuration' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.221160486
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:33] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:15:33] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:15:33] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:15:33] INFO -- CNTI: check_cnf_config cnf: 
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:33] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [latest_tag]
[2025-06-12 23:15:33] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:33] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:15:33] INFO -- CNTI-CNFManager.Task.task_runner.latest_tag: Starting test
[2025-06-12 23:15:33] INFO -- CNTI-kyverno_policy_path: command: ls /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml
[2025-06-12 23:15:33] INFO -- CNTI-kyverno_policy_path: output: /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml
[2025-06-12 23:15:33] INFO -- CNTI-Kyverno::PolicyAudit.run: command: /home/xtesting/.cnf-testsuite/tools/kyverno apply /home/xtesting/.cnf-testsuite/tools/kyverno-policies/best-practices/disallow_latest_tag/disallow_latest_tag.yaml --cluster --policy-report
✔️ 🏆PASSED: [latest_tag] Container images are not using the latest tag 🏷️
Configuration results: 3 of 3 tests passed
Observability and Diagnostics Tests
[2025-06-12 23:15:35] INFO -- CNTI-Kyverno::PolicyAudit.run: output: Applying 2 policy rules to 28 resources...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
- message: validation rule 'require-image-tag' passed.
  policy: disallow-latest-tag
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: pass
  rule: require-image-tag
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770135
- message: validation rule 'validate-image-tag' passed.
  policy: disallow-latest-tag
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: pass
  rule: validate-image-tag
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770135
- message: An image tag is required.
  policy: disallow-latest-tag
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: skip
  rule: autogen-require-image-tag
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770135
- message: An image tag is required.
  policy: disallow-latest-tag
  resources:
  - apiVersion: v1
    kind: Pod
    name: coredns-7c65d6cfc9-hlg4b
    namespace: kube-system
    uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e
  result: skip
  rule: autogen-cronjob-require-image-tag
  scored: true
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1749770135
- message: Using a mutable image tag e.g.
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-hlg4b namespace: kube-system uid: 851c55e0-7046-4fd5-b636-fb38ffa5692e result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: etcd-v131-control-plane namespace: kube-system uid: 712de20a-7c48-49c0-bf36-708c28fcaec5 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-htvqt namespace: kube-system uid: 7ec89f98-9006-4a88-a5cb-baa490ebec45 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-apiserver-v131-control-plane namespace: kube-system uid: d1edeefc-17cb-42e7-8718-bf5eef097e79 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kindnet namespace: kube-system uid: e0bbab1f-c75f-4ddf-81b6-ea06db85dae5 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: kube-proxy namespace: kube-system uid: 230fa49c-dc52-45e4-bda2-2e8aa45d88a8 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-n4mpn namespace: kube-system uid: d43e01df-9434-46e0-8c5f-612f1048a06d result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-controller-manager-v131-control-plane namespace: kube-system uid: 344a611e-cc08-41e1-b0fe-5a6958d80fc6 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns namespace: kube-system uid: c2eac70f-d513-4998-a1db-54e9ea0a004e result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-gj76m namespace: kube-system uid: 74f85062-9b41-4704-abc0-f4224a00b81b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-p2hcc namespace: kube-system uid: 300b6a06-1003-4238-a281-72a70413b29b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: create-loop-devs-sjps8 namespace: kube-system uid: 7519c1e3-1db0-4d8e-b6c7-cede82ca1de4 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-6fc9z namespace: kube-system uid: 52c42ac4-f9f2-4126-872c-99105ee224ef result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-h9krb namespace: kube-system uid: 3a56a8ec-3e34-4309-ba90-6245482b6f1f result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-scheduler-v131-control-plane namespace: kube-system uid: 3f927471-e090-4306-8d16-beeadb56c074 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: create-loop-devs namespace: kube-system uid: d7a3c10a-b7d8-4784-906e-1c33ca050c7c result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kindnet-f9g2x namespace: kube-system uid: 1cce1f8e-6728-426d-bfb2-6c7f097c8e7b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: kube-proxy-cplll namespace: kube-system uid: 1482c471-60d0-43a8-acea-fadbdfcc27dc result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-7c65d6cfc9-n85ww namespace: kube-system uid: 80939e37-82e4-43bc-bf2a-10d970877f1f result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: chaos-operator-ce-586f75ccbc-vwtrz namespace: litmus uid: 594bb97f-a0d3-4ba9-b2ee-f8ab6f38fb35 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: chaos-operator-ce namespace: litmus uid: 58d55ac1-e07c-4de3-8142-37a283a4e1a4 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-m6zbj namespace: cnf-testsuite uid: 8e88829f-fb0d-4be1-b373-5936cbea7d6f result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: DaemonSet name: cluster-tools namespace: cnf-testsuite uid: 76cde099-8c9c-49c3-8306-e09e0fe80397 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: cluster-tools-wkzpj namespace: cnf-testsuite uid: b7099310-268a-4ce2-884c-bba6b23e6bcb result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 
'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: coredns-coredns namespace: cnf-default uid: 2efdd50d-06a5-4065-849a-f8fb99a73c02 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: coredns-coredns-844775b496-pkwkj namespace: cnf-default uid: 901ab6c0-a347-470b-9e7c-7803c9ca1d7b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'require-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: pass rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. 
policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: v1 kind: Pod name: local-path-provisioner-5cb96f8fdd-764fw namespace: local-path-storage uid: 05523bae-4efa-4c4b-9a03-c582f83c184b result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-require-image-tag' passed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: An image tag is required. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-require-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: validation rule 'autogen-validate-image-tag' passed. policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: pass rule: autogen-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135 - message: Using a mutable image tag e.g. 'latest' is not allowed. 
policy: disallow-latest-tag resources: - apiVersion: apps/v1 kind: Deployment name: local-path-provisioner namespace: local-path-storage uid: 009d86bf-1ac2-4d3c-a5d2-0e703d48e5f1 result: skip rule: autogen-cronjob-validate-image-tag scored: true source: kyverno timestamp: nanos: 0 seconds: 1749770135
summary: error: 0 fail: 0 pass: 56 skip: 112 warn: 0
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'latest_tag' emoji: 🏷️
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'latest_tag' tags: ["configuration", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points: Task: 'latest_tag' type: essential
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'latest_tag' tags: ["configuration", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points: Task: 'latest_tag' type: essential
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.upsert_task-latest_tag: Task start time: 2025-06-12 23:15:33 UTC, end time: 2025-06-12 23:15:35 UTC
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.upsert_task-latest_tag: Task: 'latest_tag' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:01.901423349
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "latest_tag"] for tags: ["configuration", "cert"]
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 300, total tasks passed: 3 for tags: ["configuration", "cert"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 300, max tasks passed: 3 for tags: ["configuration", "cert"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "latest_tag"] for tags: ["configuration", "cert"]
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 300, total tasks passed: 3 for tags: ["configuration", "cert"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["versioned_tag", "ip_addresses", "operator_installed", "nodeport_not_used", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "secrets_used", "immutable_configmap", "alpha_k8s_apis", "require_labels", "default_namespace", "latest_tag"] for tag: configuration
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 300, max tasks passed: 3 for tags: ["configuration", "cert"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"]
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1000, total tasks passed: 10 for tags: ["essential"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: []
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points
[2025-06-12 23:15:35] DEBUG --
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"]
[2025-06-12 23:15:35] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-06-12 23:15:35] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-06-12 23:15:35] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}]}
[2025-06-12 23:15:35] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 300}
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:35] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:15:35] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:15:35] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:15:35] INFO -- CNTI: check_cnf_config cnf:
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:35] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [log_output]
[2025-06-12 23:15:35] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Task.task_runner.log_output: Starting test
[2025-06-12 23:15:35] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold"
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:35] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:15:35] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:15:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:35] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:15:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:35] DEBUG -- CNTI-KubectlClient.Utils.logs: Dump logs of Deployment/coredns-coredns ✔️ 🏆PASSED: [log_output] Resources output logs to stdout and stderr 📶☠️ Observability and diagnostics results: 1 of 1 tests passed  Microservice Tests [2025-06-12 23:15:35] INFO -- CNTI-Log lines: [pod/coredns-coredns-844775b496-pkwkj/coredns] .:53 [pod/coredns-coredns-844775b496-pkwkj/coredns] [INFO] plugin/reload: Running configuration MD5 = d8c79061f144bdb41e9378f9aa781f71 [pod/coredns-coredns-844775b496-pkwkj/coredns] CoreDNS-1.7.1 [pod/coredns-coredns-844775b496-pkwkj/coredns] linux/amd64, go1.15.2, aa82ca6 [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-06-12 23:15:35] INFO -- 
CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'log_output' emoji: 📶☠️ [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'log_output' tags: ["observability", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points: Task: 'log_output' type: essential [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'log_output' tags: ["observability", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points: Task: 'log_output' type: essential [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.upsert_task-log_output: Task start time: 2025-06-12 23:15:35 UTC, end time: 2025-06-12 23:15:35 UTC [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.upsert_task-log_output: Task: 'log_output' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.375536876 [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks:
["log_output"] for tags: ["observability", "cert"] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["observability", "cert"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status 
assigned for task: log_output [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["observability", "cert"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["log_output"] for tags: ["observability", "cert"] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 100, total tasks passed: 1 for tags: ["observability", "cert"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["log_output", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: observability [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:35] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 100, max tasks passed: 1 for tags: ["observability", "cert"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", 
"single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1100, total tasks passed: 11 for tags: ["essential"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers"] [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:15:35] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:15:35] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:35] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: 
[2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:15:35] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-06-12 23:15:35] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 300, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 
100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:15:35] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:15:35] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:15:35] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, 
"items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 100} [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-06-12 23:15:35] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", 
"container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:15:35] INFO -- CNTI-Setup.install_cluster_tools: Installing cluster_tools on the cluster [2025-06-12 23:15:35] INFO -- CNTI: ClusterTools install [2025-06-12 23:15:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource namespaces [2025-06-12 23:15:36] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:11:44Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-default", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-default", "resourceVersion" => "420698", "uid" => "b0ed6f40-f9bf-41d2-a521-f6b2d9db5688"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:11:03Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "420438", "uid" => "9b5c345e-7ef3-4138-b73e-f56b4a29c1f7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "18", "uid" => "6540a096-e272-41d8-a161-386e574f329f"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "25", "uid" => "3bf69b14-e04e-47c2-b401-01ac67e2b525"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" 
=> {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "11", "uid" => "bf9dde1e-d213-4b9b-a76e-2331e0268f98"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "aca03ac4-602a-479e-9465-c3fc642d9935"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"litmus\"}}\n"}, "creationTimestamp" => "2025-06-12T23:12:30Z", "labels" => {"kubernetes.io/metadata.name" => "litmus", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "litmus", "resourceVersion" => "420873", "uid" => "69fa7734-4389-437e-9c6e-d6792e47983d"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:23:51Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "281", "uid" => "56adfc2f-0846-4aa8-b7ec-112037d8ba61"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}] [2025-06-12 23:15:36] INFO -- CNTI-KubectlClient.Apply.file: Apply resources from file cluster_tools.yml [2025-06-12 23:15:36] INFO -- CNTI: ClusterTools wait_for_cluster_tools [2025-06-12 23:15:36] DEBUG -- 
CNTI-KubectlClient.Get.resource: Get resource namespaces [2025-06-12 23:15:36] DEBUG -- CNTI: ClusterTools ensure_namespace_exists namespace_array: [{"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:11:44Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-default", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-default", "resourceVersion" => "420698", "uid" => "b0ed6f40-f9bf-41d2-a521-f6b2d9db5688"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-12T23:11:03Z", "labels" => {"kubernetes.io/metadata.name" => "cnf-testsuite", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "cnf-testsuite", "resourceVersion" => "420438", "uid" => "9b5c345e-7ef3-4138-b73e-f56b4a29c1f7"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "default"}, "name" => "default", "resourceVersion" => "18", "uid" => "6540a096-e272-41d8-a161-386e574f329f"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-node-lease"}, "name" => "kube-node-lease", "resourceVersion" => "25", "uid" => "3bf69b14-e04e-47c2-b401-01ac67e2b525"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-public"}, "name" => "kube-public", "resourceVersion" => "11", "uid" => "bf9dde1e-d213-4b9b-a76e-2331e0268f98"}, "spec" => {"finalizers" => ["kubernetes"]}, 
"status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"creationTimestamp" => "2025-06-10T13:23:46Z", "labels" => {"kubernetes.io/metadata.name" => "kube-system"}, "name" => "kube-system", "resourceVersion" => "4", "uid" => "aca03ac4-602a-479e-9465-c3fc642d9935"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"litmus\"}}\n"}, "creationTimestamp" => "2025-06-12T23:12:30Z", "labels" => {"kubernetes.io/metadata.name" => "litmus", "pod-security.kubernetes.io/enforce" => "privileged"}, "name" => "litmus", "resourceVersion" => "420873", "uid" => "69fa7734-4389-437e-9c6e-d6792e47983d"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}, {"apiVersion" => "v1", "kind" => "Namespace", "metadata" => {"annotations" => {"kubectl.kubernetes.io/last-applied-configuration" => "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"local-path-storage\"}}\n"}, "creationTimestamp" => "2025-06-10T13:23:51Z", "labels" => {"kubernetes.io/metadata.name" => "local-path-storage"}, "name" => "local-path-storage", "resourceVersion" => "281", "uid" => "56adfc2f-0846-4aa8-b7ec-112037d8ba61"}, "spec" => {"finalizers" => ["kubernetes"]}, "status" => {"phase" => "Active"}}] [2025-06-12 23:15:36] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Waiting for resource Daemonset/cluster-tools to install [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource Daemonset/cluster-tools is ready [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.replica_count: Get replica count of Daemonset/cluster-tools [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Daemonset/cluster-tools
[2025-06-12 23:15:36] INFO -- CNTI-KubectlClient.wait.resource_wait_for_install: Daemonset/cluster-tools is ready
[2025-06-12 23:15:36] INFO -- CNTI-Setup.install_cluster_tools: cluster_tools has been installed on the cluster
[2025-06-12 23:15:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
[2025-06-12 23:15:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:36] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:15:36] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:15:36] INFO -- CNTI: check_cnf_config cnf:
[2025-06-12 23:15:36] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:15:36] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [specialized_init_system]
[2025-06-12 23:15:36] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:15:36] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:15:36] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:15:36] INFO -- CNTI-CNFManager.Task.task_runner.specialized_init_system: Starting test
[2025-06-12 23:15:36] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test
[2025-06-12 23:15:36] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-06-12 23:15:36] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-06-12 23:15:36] DEBUG --
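The CNTI-Helm.workload_resource_by_kind passes that follow (one per workload kind, each logging the full candidate list) amount to filtering the manifest's documents by their `kind` field and concatenating the matches into all_workload_resources. A minimal Python sketch under the assumption that the manifest documents are already parsed into dicts (the testsuite itself is written in Crystal; function names mirror the log, the implementation is assumed):

```python
# Kinds the log iterates over, in the order they appear.
WORKLOAD_KINDS = ["Deployment", "Service", "Pod", "ReplicaSet",
                  "StatefulSet", "DaemonSet", "ServiceAccount"]

def workload_resource_by_kind(ymls: list[dict], kind: str) -> list[dict]:
    """Return the manifest documents whose kind matches."""
    return [doc for doc in ymls if doc.get("kind") == kind]

def all_workload_resources(ymls: list[dict]) -> list[dict]:
    """Concatenate matches across every workload kind,
    mirroring the per-kind passes in the log."""
    out: list[dict] = []
    for kind in WORKLOAD_KINDS:
        out.extend(workload_resource_by_kind(ymls, kind))
    return out
```

On the coredns chart logged below, the ConfigMap, ClusterRole, and ClusterRoleBinding documents drop out, leaving the Deployment and Service — consistent with all_workload_resources starting with the Deployment.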
CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => 
{"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" 
=> "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:36] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:36] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:36] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:36] INFO -- CNTI-specialized_init_system: Checking resource Deployment/coredns-coredns in cnf-default [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-06-12 23:15:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:37] DEBUG -- 
CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-06-12 23:15:37] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:37] INFO -- CNTI-specialized_init_system: Pod count for resource Deployment/coredns-coredns in cnf-default: 1 [2025-06-12 23:15:37] INFO -- CNTI-specialized_init_system: Inspecting pod: {"apiVersion" => "v1", "kind" => "Pod", "metadata" => {"annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}, "creationTimestamp" => "2025-06-12T23:12:53Z", "generateName" => "coredns-coredns-844775b496-", "labels" => {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns", "pod-template-hash" => "844775b496"}, "name" => "coredns-coredns-844775b496-pkwkj", "namespace" => "cnf-default", "ownerReferences" => [{"apiVersion" => "apps/v1", "blockOwnerDeletion" => true, "controller" => true, "kind" => "ReplicaSet", "name" => "coredns-coredns-844775b496", "uid" => "f241b74c-4560-463d-b5fb-d3c1c9f2546c"}], "resourceVersion" => "421103", "uid" => "901ab6c0-a347-470b-9e7c-7803c9ca1d7b"}, "spec" => {"containers" => [{"args" => ["-conf", "/etc/coredns/Corefile"], "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "livenessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "name" => "coredns", "ports" => [{"containerPort" => 53, "name" => "udp-53", "protocol" => "UDP"}, {"containerPort" => 53, "name" => "tcp-53", "protocol" => "TCP"}], 
"readinessProbe" => {"failureThreshold" => 5, "httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "periodSeconds" => 10, "successThreshold" => 1, "timeoutSeconds" => 5}, "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "terminationMessagePath" => "/dev/termination-log", "terminationMessagePolicy" => "File", "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-6jswd", "readOnly" => true}]}], "dnsPolicy" => "Default", "enableServiceLinks" => true, "nodeName" => "v131-worker", "preemptionPolicy" => "PreemptLowerPriority", "priority" => 0, "restartPolicy" => "Always", "schedulerName" => "default-scheduler", "securityContext" => {}, "serviceAccount" => "default", "serviceAccountName" => "default", "terminationGracePeriodSeconds" => 30, "tolerations" => [{"effect" => "NoExecute", "key" => "node.kubernetes.io/not-ready", "operator" => "Exists", "tolerationSeconds" => 300}, {"effect" => "NoExecute", "key" => "node.kubernetes.io/unreachable", "operator" => "Exists", "tolerationSeconds" => 300}], "volumes" => [{"configMap" => {"defaultMode" => 420, "items" => [{"key" => "Corefile", "path" => "Corefile"}], "name" => "coredns-coredns"}, "name" => "config-volume"}, {"name" => "kube-api-access-6jswd", "projected" => {"defaultMode" => 420, "sources" => [{"serviceAccountToken" => {"expirationSeconds" => 3607, "path" => "token"}}, {"configMap" => {"items" => [{"key" => "ca.crt", "path" => "ca.crt"}], "name" => "kube-root-ca.crt"}}, {"downwardAPI" => {"items" => [{"fieldRef" => {"apiVersion" => "v1", "fieldPath" => "metadata.namespace"}, "path" => "namespace"}]}}]}}]}, "status" => {"conditions" => [{"lastProbeTime" => nil, "lastTransitionTime" => "2025-06-12T23:12:57Z", "status" => "True", "type" => "PodReadyToStartContainers"}, {"lastProbeTime" 
=> nil, "lastTransitionTime" => "2025-06-12T23:12:53Z", "status" => "True", "type" => "Initialized"}, {"lastProbeTime" => nil, "lastTransitionTime" => "2025-06-12T23:13:14Z", "status" => "True", "type" => "Ready"}, {"lastProbeTime" => nil, "lastTransitionTime" => "2025-06-12T23:13:14Z", "status" => "True", "type" => "ContainersReady"}, {"lastProbeTime" => nil, "lastTransitionTime" => "2025-06-12T23:12:53Z", "status" => "True", "type" => "PodScheduled"}], "containerStatuses" => [{"containerID" => "containerd://cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-06-12T23:12:56Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-6jswd", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}], "hostIP" => "172.24.0.6", "hostIPs" => [{"ip" => "172.24.0.6"}], "phase" => "Running", "podIP" => "10.244.1.123", "podIPs" => [{"ip" => "10.244.1.123"}], "qosClass" => "Guaranteed", "startTime" => "2025-06-12T23:12:53Z"}} [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:15:37] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-844775b496-pkwkj list: v131-worker [2025-06-12 23:15:37] INFO -- CNTI: parse_container_id container_id: containerd://cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:37] INFO -- CNTI: node_pid_by_container_id container_id: cedd3fc9d1b795 [2025-06-12 23:15:37] INFO -- CNTI: parse_container_id 
container_id: cedd3fc9d1b795 [2025-06-12 23:15:37] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:37] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:37] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:37] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:37] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-06-12T23:15:37Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-06-12T23:15:37Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." 
[2025-06-12 23:15:37] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": 
\"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": 
\"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\"\n }\n ]\n },\n \"pid\": 460751,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/460724/ns/ipc\",\n \"type\": \"ipc\"\n 
},\n {\n \"path\": \"/proc/460724/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/460724/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n 
\"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-844775b496-pkwkj\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.19.182\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.19.182:53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_PORT=443\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"snapshotKey\": 
\"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-06-12T23:12:55.21416247Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-06-12T23:12:56.89999059Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-06-12T23:15:37Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-06-12T23:15:37Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-06-12 23:15:37] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.19.182" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.19.182:53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": 
"53" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84" } ] }, "pid": 460751, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { 
"io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "io.kubernetes.cri.sandbox-name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/460724/ns/ipc", "type": "ipc" }, { "path": "/proc/460724/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/460724/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", 
"noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", 
"CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-844775b496-pkwkj", "COREDNS_COREDNS_SERVICE_HOST=10.96.19.182", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.19.182:53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_PORT=443" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "snapshotKey": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-06-12T23:12:55.21416247Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-06-12T23:12:56.89999059Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-06-12 23:15:37] INFO -- CNTI: node_pid_by_container_id pid: 460751 [2025-06-12 23:15:37] INFO -- CNTI: cmdline_by_pid [2025-06-12 23:15:37] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:37] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:37] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:37] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:37] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj pod/coredns-coredns-844775b496-pkwkj has container 'coredns' with /coredns as init process ✖️ 🏆FAILED: [specialized_init_system] Containers do not use specialized init systems (ভ_ভ) ރ 🚀 [2025-06-12 23:15:38] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:15:38] INFO -- CNTI: 
cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:15:38] INFO -- CNTI-InitSystems.scan: pod/coredns-coredns-844775b496-pkwkj has container 'coredns' with /coredns as init process [2025-06-12 23:15:38] INFO -- CNTI-specialized_init_system: Pod scan result: [InitSystems::InitSystemInfo(@kind="pod", @namespace="cnf-default", @name="coredns-coredns-844775b496-pkwkj", @container="coredns", @init_cmd="/coredns")] [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'specialized_init_system' emoji: 🚀 [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'specialized_init_system' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Points: Task: 'specialized_init_system' type: essential [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 0 points [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'specialized_init_system' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Points: Task: 'specialized_init_system' type: essential [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Points.upsert_task-specialized_init_system: Task start time: 2025-06-12 23:15:36 UTC, end time: 2025-06-12 23:15:38 UTC [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.Points.upsert_task-specialized_init_system: Task: 'specialized_init_system' has status: 'failed' and is awarded: 0 points. Runtime: 00:00:01.475518540 [2025-06-12 23:15:38] DEBUG -- 
CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:38] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:38] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:38] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:38] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:38] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [single_process_type] [2025-06-12 23:15:38] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:38] INFO -- CNTI-CNFManager.Task.task_runner.single_process_type: Starting test [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:38] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", 
"app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", 
"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 
8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:38] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" =>
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:38] INFO -- CNTI: Constructed resource_named_tuple: {kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"} [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-06-12 23:15:38] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:38] INFO -- CNTI: pod_name: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:38] INFO -- CNTI: container_statuses: [{"containerID" => "containerd://cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-06-12T23:12:56Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => 
"kube-api-access-6jswd", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-06-12 23:15:38] INFO -- CNTI: pod_name: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:15:38] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-844775b496-pkwkj list: v131-worker [2025-06-12 23:15:38] INFO -- CNTI: nodes_by_resource done [2025-06-12 23:15:38] INFO -- CNTI: before ready containerStatuses container_id cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:38] INFO -- CNTI: containerStatuses container_id cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:38] INFO -- CNTI: node_pid_by_container_id container_id: cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:38] INFO -- CNTI: parse_container_id container_id: cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:38] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:38] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:38] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:38] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:38] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:39] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-06-12T23:15:38Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-06-12T23:15:38Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock 
unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-06-12 23:15:39] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": 
\"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n 
\"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\"\n }\n ]\n },\n \"pid\": 460751,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": 
\"pid\"\n },\n {\n \"path\": \"/proc/460724/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/460724/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/460724/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n 
\"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n 
\"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-844775b496-pkwkj\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.19.182\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.19.182:53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_PORT=443\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"snapshotKey\": 
\"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-06-12T23:12:55.21416247Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-06-12T23:12:56.89999059Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-06-12T23:15:38Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-06-12T23:15:38Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-06-12 23:15:39] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.19.182" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.19.182:53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": 
"53" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84" } ] }, "pid": 460751, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { 
"io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "io.kubernetes.cri.sandbox-name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/460724/ns/ipc", "type": "ipc" }, { "path": "/proc/460724/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/460724/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", 
"noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", 
"CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-844775b496-pkwkj", "COREDNS_COREDNS_SERVICE_HOST=10.96.19.182", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.19.182:53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_PORT=443" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "snapshotKey": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-06-12T23:12:55.21416247Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-06-12T23:12:56.89999059Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-06-12 23:15:39] INFO -- CNTI: node_pid_by_container_id pid: 460751 [2025-06-12 23:15:39] INFO -- CNTI: node pid (should never be pid 1): 460751 [2025-06-12 23:15:39] INFO -- CNTI: node name : v131-worker [2025-06-12 23:15:39] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:39] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:39] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:39] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:39] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:39] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:39] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "460751\n", error: ""} [2025-06-12 23:15:39] INFO -- CNTI: parsed pids: ["460751"] [2025-06-12 23:15:39] INFO -- CNTI: all_statuses_by_pids [2025-06-12 23:15:39] INFO -- CNTI: all_statuses_by_pids pid: 460751 
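The two steps the log just walked through — resolving the coredns container ID to host PID 460751 by parsing `crictl inspect` JSON, then reading that PID's `/proc/<pid>/status` into a key/value map — can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the CNTI testsuite's actual code: it assumes `crictl` is on PATH and that the inspect JSON exposes the host PID under `info.pid`, as seen in the dump above.

```python
#!/usr/bin/env python3
"""Hedged sketch of the node_pid_by_container_id / parse_status steps
seen in the log. Not the CNTI implementation; `crictl` availability and
the `info.pid` field location are assumptions taken from the log output."""
import json
import subprocess


def node_pid_by_container_id(container_id: str) -> int:
    # `crictl inspect <id>` prints a JSON document; in the dump above the
    # container's main-process PID in the host PID namespace sits at info.pid.
    out = subprocess.check_output(["crictl", "inspect", container_id], text=True)
    return int(json.loads(out)["info"]["pid"])


def parse_status(raw: str) -> dict:
    # /proc/<pid>/status is one "Key:\tvalue" pair per line; split each line
    # on the first colon and strip surrounding whitespace from the value.
    fields = {}
    for line in raw.splitlines():
        key, _, value = line.partition(":")
        if key:
            fields[key] = value.strip()
    return fields
```

With the status text from the log, `parse_status` yields the same mapping the testsuite prints later (e.g. `{"Name": "coredns", "Pid": "460751", "PPid": "460699", ...}`), which is what the subsequent `proctree_by_pid` walk consumes.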
[2025-06-12 23:15:39] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:39] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:39] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:39] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:39] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:39] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:39] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t3/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t664\nnonvoluntary_ctxt_switches:\t17\n", error: ""} [2025-06-12 23:15:39] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t3/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t664\nnonvoluntary_ctxt_switches:\t17\n"] [2025-06-12 23:15:39] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 460751 [2025-06-12 23:15:39] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t3/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t664\nnonvoluntary_ctxt_switches:\t17\n"] [2025-06-12 23:15:39] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 460751 Ngid: 0 Pid: 460751 PPid: 460699 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 460751 1 NSpid: 460751 1 NSpgid: 460751 1 NSsid: 460751 1 VmPeak: 749004 kB VmSize: 749004 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 40772 kB VmRSS: 40772 kB RssAnon: 11292 kB RssFile: 29480 kB RssShmem: 0 kB VmData: 109192 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 204 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 23 SigQ: 3/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 664 nonvoluntary_ctxt_switches: 17 [2025-06-12 23:15:39] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => 
"0022", "State" => "S (sleeping)", "Tgid" => "460751", "Ngid" => "0", "Pid" => "460751", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460751\t1", "NSpid" => "460751\t1", "NSpgid" => "460751\t1", "NSsid" => "460751\t1", "VmPeak" => "749004 kB", "VmSize" => "749004 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "40772 kB", "VmRSS" => "40772 kB", "RssAnon" => "11292 kB", "RssFile" => "29480 kB", "RssShmem" => "0 kB", "VmData" => "109192 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "204 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "3/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "664", "nonvoluntary_ctxt_switches" => "17"} [2025-06-12 23:15:39] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:39] INFO -- CNTI: cmdline_by_pid [2025-06-12 23:15:39] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:39] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods 
found on nodes [2025-06-12 23:15:39] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:39] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:39] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:39] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj ✔️ 🏆PASSED: [single_process_type] Only one process type used ⚖👀 [2025-06-12 23:15:40] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:15:40] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:15:40] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-06-12 23:15:40] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460751", "Ngid" => "0", "Pid" => "460751", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460751\t1", "NSpid" => "460751\t1", "NSpgid" => "460751\t1", "NSsid" => "460751\t1", "VmPeak" => "749004 kB", "VmSize" => "749004 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "40772 kB", "VmRSS" => "40772 kB", "RssAnon" => "11292 kB", "RssFile" => "29480 kB", "RssShmem" => "0 kB", "VmData" => "109192 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "204 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "3/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", 
"CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "664", "nonvoluntary_ctxt_switches" => "17", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}] [2025-06-12 23:15:40] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:40] INFO -- CNTI-single_process_type: status name: coredns [2025-06-12 23:15:40] INFO -- CNTI-single_process_type: previous status name: initial_name [2025-06-12 23:15:40] INFO -- CNTI: container_status_result.all?(true): false [2025-06-12 23:15:40] INFO -- CNTI: pod_resp.all?(true): false [2025-06-12 23:15:40] INFO -- CNTI: Constructed resource_named_tuple: {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"} [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'single_process_type' emoji: ⚖👀 [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'single_process_type' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Points: Task: 'single_process_type' type: essential [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'single_process_type' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Points: Task: 'single_process_type' type: 
essential [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Points.upsert_task-single_process_type: Task start time: 2025-06-12 23:15:38 UTC, end time: 2025-06-12 23:15:40 UTC [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.Points.upsert_task-single_process_type: Task: 'single_process_type' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:02.097230214 [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:40] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:15:40] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:15:40] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:15:40] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:15:40] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [zombie_handled] [2025-06-12 23:15:40] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.Task.task_runner.zombie_handled: Starting test [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:40] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:40] 
DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", 
"kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => 
"IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" 
=> "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:40] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:40] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => 
"coredns", "k8s-app" => "coredns"} [2025-06-12 23:15:40] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:40] INFO -- CNTI: pod_name: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:40] INFO -- CNTI: container_statuses: [{"containerID" => "containerd://cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-06-12T23:12:56Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-6jswd", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-06-12 23:15:40] INFO -- CNTI: pod_name: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:15:40] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-844775b496-pkwkj list: v131-worker [2025-06-12 23:15:40] INFO -- CNTI: nodes_by_resource done [2025-06-12 23:15:40] INFO -- CNTI: before ready containerStatuses container_id cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:40] INFO -- CNTI: containerStatuses container_id cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:40] INFO -- CNTI: node_pid_by_container_id container_id: cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:40] INFO -- CNTI: parse_container_id container_id: cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:40] 
INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:41] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:41] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:41] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:41] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-06-12T23:15:41Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-06-12T23:15:41Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." [2025-06-12 23:15:41] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n 
\"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n 
\"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\"\n }\n ]\n },\n \"pid\": 460751,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": 
\"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/460724/ns/ipc\",\n \"type\": \"ipc\"\n },\n {\n \"path\": \"/proc/460724/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/460724/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n 
\"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-844775b496-pkwkj\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.19.182\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.19.182:53\",\n 
\"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_PORT=443\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"snapshotKey\": \"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-06-12T23:12:55.21416247Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": 
\"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n 
\"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-06-12T23:12:56.89999059Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-06-12T23:15:41Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-06-12T23:15:41Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-06-12 23:15:41] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.19.182" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": 
"COREDNS_COREDNS_PORT", "value": "udp://10.96.19.182:53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, 
"seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84" } ] }, "pid": 460751, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "io.kubernetes.cri.sandbox-name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/460724/ns/ipc", "type": "ipc" }, { "path": "/proc/460724/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": 
"/proc/460724/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], 
"source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-844775b496-pkwkj", "COREDNS_COREDNS_SERVICE_HOST=10.96.19.182", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.19.182:53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_PORT_443_TCP_PORT=443", 
"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_PORT=443" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "snapshotKey": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "snapshotter": "overlayfs" }, "status": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-06-12T23:12:55.21416247Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "logPath": 
"/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-06-12T23:12:56.89999059Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-06-12 23:15:41] INFO -- CNTI: node_pid_by_container_id pid: 460751 [2025-06-12 23:15:41] INFO -- CNTI: node pid (should never be pid 1): 460751 
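The `node_pid_by_container_id` step above reads the host PID (460751) out of the `crictl inspect`-style blob just dumped, and sanity-checks that it is a node-level PID rather than PID 1 inside the container. A minimal sketch of that extraction in Python (the field path `pid` is taken from the dump above; the optional `info` wrapper is an assumption about how some crictl versions nest the blob):

```python
import json

def node_pid_from_inspect(inspect_json: str) -> int:
    """Extract the host PID of a container from a crictl-inspect-style JSON blob.

    Assumes the layout seen in the dump above, where the runtime info
    carries a top-level "pid" field alongside "runtimeSpec".
    """
    data = json.loads(inspect_json)
    # Some crictl versions nest runtime details under "info";
    # fall back to the top level when the blob is already unwrapped.
    info = data.get("info", data)
    pid = int(info["pid"])
    if pid == 1:
        # Mirrors the log's "should never be pid 1" check: PID 1 would mean
        # we are looking at the container's own namespace, not the node.
        raise ValueError("got PID 1 - container-namespace PID, not a node PID")
    return pid

sample = '{"pid": 460751, "runtimeOptions": {"systemd_cgroup": true}}'
print(node_pid_from_inspect(sample))  # 460751
```

With that node PID in hand, the testsuite can address the container's namespaces via `/proc/<pid>/ns/...`, as seen in the `namespaces` paths of the runtime spec above.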
[2025-06-12 23:15:41] INFO -- CNTI: node name : v131-worker [2025-06-12 23:15:41] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:41] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:41] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:41] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:41] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:41] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:15:41] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:41] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:41] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:41] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:41] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:41] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:42] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:15:42] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:42] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:42] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:42] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:42] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:42] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:44] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: 
"Sleeping...\n", error: ""} [2025-06-12 23:15:44] INFO -- CNTI: container_status_result.all?(true): false [2025-06-12 23:15:44] INFO -- CNTI: pod_resp.all?(true): false [2025-06-12 23:15:44] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:15:44] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: false [2025-06-12 23:15:54] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-06-12 23:15:54] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:15:54] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:15:54] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:54] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:15:54] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:15:54] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:15:54] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-06-12 23:15:54] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:15:54] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:54] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-06-12 23:15:55] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:55] INFO -- CNTI: pod_name: coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:55] INFO -- CNTI: container_statuses: [{"containerID" => "containerd://cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image" => "docker.io/coredns/coredns:1.7.1", "imageID" => "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "lastState" => {}, "name" => "coredns", "ready" => true, "restartCount" => 0, "started" => true, "state" => {"running" => {"startedAt" => "2025-06-12T23:12:56Z"}}, "volumeMounts" => [{"mountPath" => "/etc/coredns", "name" => "config-volume"}, {"mountPath" => "/var/run/secrets/kubernetes.io/serviceaccount", "name" => "kube-api-access-6jswd", "readOnly" => true, "recursiveReadOnly" => "Disabled"}]}] [2025-06-12 23:15:55] INFO -- CNTI: pod_name: 
coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with pod/coredns-coredns-844775b496-pkwkj [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:15:55] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-844775b496-pkwkj list: v131-worker [2025-06-12 23:15:55] INFO -- CNTI: nodes_by_resource done [2025-06-12 23:15:55] INFO -- CNTI: before ready containerStatuses container_id cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:55] INFO -- CNTI: containerStatuses container_id cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:55] INFO -- CNTI: node_pid_by_container_id container_id: cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:55] INFO -- CNTI: parse_container_id container_id: cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:55] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:55] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:55] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:55] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:55] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-06-12T23:15:55Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-06-12T23:15:55Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead." [2025-06-12 23:15:55] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": 
\"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/0.log\",\n \"metadata\": {\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": 
\"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\"\n }\n ]\n },\n \"pid\": 460751,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/460724/ns/ipc\",\n \"type\": \"ipc\"\n 
},\n {\n \"path\": \"/proc/460724/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/460724/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n 
\"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-844775b496-pkwkj\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.19.182\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT=udp://10.96.19.182:53\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_SERVICE_PORT=443\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"snapshotKey\": 
\"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"0\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-06-12T23:12:55.21416247Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 0,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-06-12T23:12:56.89999059Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-06-12T23:15:55Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-06-12T23:15:55Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-06-12 23:15:55] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.19.182" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.19.182:53" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": 
"53" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/0.log", "metadata": { "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84" } ] }, "pid": 460751, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { "annotations": { 
"io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "io.kubernetes.cri.sandbox-name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/460724/ns/ipc", "type": "ipc" }, { "path": "/proc/460724/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/460724/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", "options": [ "nosuid", 
"noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", 
"CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-844775b496-pkwkj", "COREDNS_COREDNS_SERVICE_HOST=10.96.19.182", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT=udp://10.96.19.182:53", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182", "KUBERNETES_SERVICE_PORT_HTTPS=443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_PORT=tcp://10.96.0.1:443", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_SERVICE_PORT=443" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "snapshotKey": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "0", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-06-12T23:12:55.21416247Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/0.log", "message": "", "metadata": { "attempt": 0, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/2cb88a84", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-06-12T23:12:56.89999059Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-06-12 23:15:55] INFO -- CNTI: node_pid_by_container_id pid: 460751 [2025-06-12 23:15:55] INFO -- CNTI: node pid (should never be pid 1): 460751 [2025-06-12 23:15:55] INFO -- CNTI: node name : v131-worker [2025-06-12 23:15:55] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:55] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:55] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:55] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:55] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:56] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "460751\n461427\n", error: ""} [2025-06-12 23:15:56] INFO -- CNTI: parsed pids: ["460751", "461427"] [2025-06-12 23:15:56] INFO -- CNTI: all_statuses_by_pids [2025-06-12 23:15:56] INFO -- CNTI: 
all_statuses_by_pids pid: 460751 [2025-06-12 23:15:56] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:56] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:56] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:56] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:56] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:56] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:56] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t665\nnonvoluntary_ctxt_switches:\t17\n", error: ""} [2025-06-12 23:15:56] INFO -- CNTI: all_statuses_by_pids pid: 461427 [2025-06-12 23:15:56] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:56] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:56] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:56] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:56] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:56] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:56] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t461427\nNgid:\t0\nPid:\t461427\nPPid:\t460751\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t461427\t42\nNSpid:\t461427\t42\nNSpgid:\t461421\t36\nNSsid:\t461421\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:15:56] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t665\nnonvoluntary_ctxt_switches:\t17\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t461427\nNgid:\t0\nPid:\t461427\nPPid:\t460751\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t461427\t42\nNSpid:\t461427\t42\nNSpgid:\t461421\t36\nNSsid:\t461421\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-06-12 23:15:56] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 460751 [2025-06-12 23:15:56] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 
\nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t665\nnonvoluntary_ctxt_switches:\t17\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t461427\nNgid:\t0\nPid:\t461427\nPPid:\t460751\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t461427\t42\nNSpid:\t461427\t42\nNSpgid:\t461421\t36\nNSsid:\t461421\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread 
vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-06-12 23:15:56] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 460751 Ngid: 0 Pid: 460751 PPid: 460699 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 460751 1 NSpid: 460751 1 NSpgid: 460751 1 NSsid: 460751 1 VmPeak: 749004 kB VmSize: 749004 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 40772 kB VmRSS: 40772 kB RssAnon: 11292 kB RssFile: 29480 kB RssShmem: 0 kB VmData: 109192 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 204 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 23 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 665 nonvoluntary_ctxt_switches: 17 [2025-06-12 23:15:56] DEBUG -- CNTI-proctree_by_pid: 
parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460751", "Ngid" => "0", "Pid" => "460751", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460751\t1", "NSpid" => "460751\t1", "NSpgid" => "460751\t1", "NSsid" => "460751\t1", "VmPeak" => "749004 kB", "VmSize" => "749004 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "40772 kB", "VmRSS" => "40772 kB", "RssAnon" => "11292 kB", "RssFile" => "29480 kB", "RssShmem" => "0 kB", "VmData" => "109192 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "204 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "665", "nonvoluntary_ctxt_switches" => "17"} [2025-06-12 23:15:56] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:56] INFO -- CNTI: cmdline_by_pid [2025-06-12 23:15:56] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:56] DEBUG -- 
CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:56] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:57] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:57] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:57] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:57] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:15:57] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-06-12 23:15:57] DEBUG -- CNTI: parse_status status_output: Name: sleep State: Z (zombie) Tgid: 461427 Ngid: 0 Pid: 461427 PPid: 460751 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 0 Groups: 0 NStgid: 461427 42 NSpid: 461427 42 NSpgid: 461421 36 NSsid: 461421 36 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000001000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2 nonvoluntary_ctxt_switches: 0 [2025-06-12 
23:15:57] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "461427", "Ngid" => "0", "Pid" => "461427", "PPid" => "460751", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "461427\t42", "NSpid" => "461427\t42", "NSpgid" => "461421\t36", "NSsid" => "461421\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: proctree_by_pid ppid == pid && ppid != current_pid [2025-06-12 23:15:57] INFO -- CNTI: cmdline_by_pid [2025-06-12 23:15:57] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:57] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:57] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:57] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:57] DEBUG -- CNTI: 
cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:57] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:57] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:15:57] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: Matched descendent cmdline [2025-06-12 23:15:57] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 461427 [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460751\nNgid:\t0\nPid:\t460751\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460751\t1\nNSpid:\t460751\t1\nNSpgid:\t460751\t1\nNSsid:\t460751\t1\nVmPeak:\t 749004 kB\nVmSize:\t 749004 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 40772 kB\nVmRSS:\t 40772 kB\nRssAnon:\t 11292 kB\nRssFile:\t 29480 kB\nRssShmem:\t 0 kB\nVmData:\t 109192 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 204 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t23\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t665\nnonvoluntary_ctxt_switches:\t17\n", "Name:\tsleep\nState:\tZ (zombie)\nTgid:\t461427\nNgid:\t0\nPid:\t461427\nPPid:\t460751\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t0\nGroups:\t0 \nNStgid:\t461427\t42\nNSpid:\t461427\t42\nNSpgid:\t461421\t36\nNSsid:\t461421\t36\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000001000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t2\nnonvoluntary_ctxt_switches:\t0\n"] [2025-06-12 23:15:57] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 460751 Ngid: 0 Pid: 460751 PPid: 460699 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 460751 1 NSpid: 460751 1 NSpgid: 460751 1 NSsid: 460751 1 VmPeak: 749004 kB VmSize: 749004 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 40772 kB VmRSS: 40772 kB 
RssAnon: 11292 kB RssFile: 29480 kB RssShmem: 0 kB VmData: 109192 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 204 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 23 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 665 nonvoluntary_ctxt_switches: 17 [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460751", "Ngid" => "0", "Pid" => "460751", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460751\t1", "NSpid" => "460751\t1", "NSpgid" => "460751\t1", "NSsid" => "460751\t1", "VmPeak" => "749004 kB", "VmSize" => "749004 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "40772 kB", "VmRSS" => "40772 kB", "RssAnon" => "11292 kB", "RssFile" => "29480 kB", "RssShmem" => "0 kB", "VmData" => "109192 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "204 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", 
"SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "665", "nonvoluntary_ctxt_switches" => "17"} [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:57] DEBUG -- CNTI: parse_status status_output: Name: sleep State: Z (zombie) Tgid: 461427 Ngid: 0 Pid: 461427 PPid: 460751 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 0 Groups: 0 NStgid: 461427 42 NSpid: 461427 42 NSpgid: 461421 36 NSsid: 461421 36 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000001000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 
Mems_allowed_list: 0-1 voluntary_ctxt_switches: 2 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "461427", "Ngid" => "0", "Pid" => "461427", "PPid" => "460751", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "461427\t42", "NSpid" => "461427\t42", "NSpgid" => "461421\t36", "NSsid" => "461421\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:15:57] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:57] INFO -- CNTI: cmdline_by_pid [2025-06-12 23:15:57] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:57] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:57] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:57] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:57] DEBUG -- CNTI: 
cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:57] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj Process sleep in container cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 of pod coredns-coredns-844775b496-pkwkj has a state of Z (zombie) [2025-06-12 23:15:58] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:15:58] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:15:58] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-06-12 23:15:58] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "461427", "Ngid" => "0", "Pid" => "461427", "PPid" => "460751", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "461427\t42", "NSpid" => "461427\t42", "NSpgid" => "461421\t36", "NSsid" => "461421\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", 
"nonvoluntary_ctxt_switches" => "0", "cmdline" => ""}] [2025-06-12 23:15:58] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:58] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460751", "Ngid" => "0", "Pid" => "460751", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460751\t1", "NSpid" => "460751\t1", "NSpgid" => "460751\t1", "NSsid" => "460751\t1", "VmPeak" => "749004 kB", "VmSize" => "749004 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "40772 kB", "VmRSS" => "40772 kB", "RssAnon" => "11292 kB", "RssFile" => "29480 kB", "RssShmem" => "0 kB", "VmData" => "109192 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "204 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "665", "nonvoluntary_ctxt_switches" => "17", "cmdline" => 
"/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}, {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "461427", "Ngid" => "0", "Pid" => "461427", "PPid" => "460751", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "461427\t42", "NSpid" => "461427\t42", "NSpgid" => "461421\t36", "NSsid" => "461421\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""}] [2025-06-12 23:15:58] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:58] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:15:58] DEBUG -- CNTI-zombie_handled: status: {"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460751", "Ngid" => "0", "Pid" => "460751", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460751\t1", "NSpid" => "460751\t1", "NSpgid" => "460751\t1", "NSsid" => "460751\t1", "VmPeak" => "749004 kB", "VmSize" => "749004 kB", "VmLck" => "0 kB", "VmPin" => 
"0 kB", "VmHWM" => "40772 kB", "VmRSS" => "40772 kB", "RssAnon" => "11292 kB", "RssFile" => "29480 kB", "RssShmem" => "0 kB", "VmData" => "109192 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "204 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "23", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "665", "nonvoluntary_ctxt_switches" => "17", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"} [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: status cmdline: /coredns-conf/etc/coredns/Corefile [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: pid: 460751 [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: status name: coredns [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: state: S (sleeping) [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: (state =~ /zombie/): [2025-06-12 23:15:58] DEBUG -- CNTI-zombie_handled: status: {"Name" => "sleep", "State" => "Z (zombie)", "Tgid" => "461427", "Ngid" => "0", "Pid" => "461427", "PPid" => "460751", "TracerPid" => "0", "Uid" 
=> "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "0", "Groups" => "0", "NStgid" => "461427\t42", "NSpid" => "461427\t42", "NSpgid" => "461421\t36", "NSsid" => "461421\t36", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000001000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "2", "nonvoluntary_ctxt_switches" => "0", "cmdline" => ""} [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: status cmdline: [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: pid: 461427 [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: status name: sleep [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: state: Z (zombie) [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: (state =~ /zombie/): 3 [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: zombies.all?(nil): false [2025-06-12 23:15:58] INFO -- CNTI: container_status_result.all?(true): false [2025-06-12 23:15:58] INFO -- CNTI: pod_resp.all?(true): false [2025-06-12 23:15:58] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:15:58] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: 
false [2025-06-12 23:15:58] INFO -- CNTI-zombie_handled: Shutting down container cedd3fc9d1b795e1c1438710acb669f4c5845c5b7ec24102b807f6fa7b6f9597 [2025-06-12 23:15:58] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:15:58] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:15:58] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:15:58] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:15:58] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:15:58] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:15:58] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""} [2025-06-12 23:16:18] INFO -- CNTI-zombie_handled: Waiting for pod coredns-coredns-844775b496-pkwkj in namespace cnf-default to become Ready... [2025-06-12 23:16:18] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: Waiting for pod/coredns-coredns-844775b496-pkwkj to be available [2025-06-12 23:16:18] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: seconds elapsed while waiting: 0 [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource pod/coredns-coredns-844775b496-pkwkj is ready [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.pod_status: Get status of pod/coredns-coredns-844775b496-pkwkj* with field selector: [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods ✖️ 🏆FAILED: [zombie_handled] Zombie not handled ⚖👀 [2025-06-12 23:16:21] INFO -- CNTI-KubectlClient.Get.pod_status: 'Ready' pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'zombie_handled' emoji: ⚖👀 [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'zombie_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] 
[2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Points: Task: 'zombie_handled' type: essential [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 0 points [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'zombie_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Points: Task: 'zombie_handled' type: essential [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Points.upsert_task-zombie_handled: Task start time: 2025-06-12 23:15:40 UTC, end time: 2025-06-12 23:16:21 UTC [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.Points.upsert_task-zombie_handled: Task: 'zombie_handled' has status: 'failed' and is awarded: 0 points. Runtime: 00:00:41.456682345 [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:16:21] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:16:21] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 23:16:21] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:16:21] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:16:21] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [sig_term_handled] [2025-06-12 23:16:21] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 
23:16:21] INFO -- CNTI-CNFManager.Task.task_runner.sig_term_handled: Starting test [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:16:21] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => 
"rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => 
{"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same manifest list as above ...] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same manifest list as above ...] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same manifest list as above ...] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same manifest list as above ...] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... same manifest list as above ...] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:16:21] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:16:21] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:16:21] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:16:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_resource_labels: Creating list of pods by resource: Deployment/coredns-coredns labels [2025-06-12 23:16:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:22] DEBUG -- CNTI-KubectlClient.Get.resource_spec_labels: Get labels of resource Deployment/coredns-coredns [2025-06-12 23:16:22] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:16:22] DEBUG -- CNTI-KubectlClient.Get.pods_by_labels: Creating list of pods that have labels: {"app.kubernetes.io/instance" => "coredns", "app.kubernetes.io/name" => "coredns", "k8s-app" => "coredns"} [2025-06-12 23:16:22] INFO -- CNTI-KubectlClient.Get.pods_by_labels: Matched 1 pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:16:22] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: Waiting for pod/coredns-coredns-844775b496-pkwkj to be available [2025-06-12 23:16:22] INFO -- CNTI-KubectlClient.wait.wait_for_resource_availability: seconds elapsed while waiting: 0 [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.wait.resource_ready?: Checking if resource pod/coredns-coredns-844775b496-pkwkj is ready [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.pod_status: Get status of pod/coredns-coredns-844775b496-pkwkj* with field selector: [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:25] INFO -- CNTI-KubectlClient.Get.pod_status: 'Ready' pods: coredns-coredns-844775b496-pkwkj [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.nodes_by_pod: Finding nodes with 
pod/coredns-coredns-844775b496-pkwkj [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource nodes [2025-06-12 23:16:25] INFO -- CNTI-KubectlClient.Get.nodes_by_pod: Nodes with pod/coredns-coredns-844775b496-pkwkj list: v131-worker [2025-06-12 23:16:25] INFO -- CNTI: node_pid_by_container_id container_id: containerd://d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b [2025-06-12 23:16:25] INFO -- CNTI: parse_container_id container_id: containerd://d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b [2025-06-12 23:16:25] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:25] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:25] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:25] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:25] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: time="2025-06-12T23:16:25Z" level=warning msg="Config \"/etc/crictl.yaml\" does not exist, trying next: \"/usr/local/bin/crictl.yaml\"" time="2025-06-12T23:16:25Z" level=warning msg="runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead." 
[2025-06-12 23:16:25] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "{\n \"info\": {\n \"config\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"1\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"args\": [\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"envs\": [\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_UDP_53\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP\",\n \"value\": \"tcp://10.96.19.182:53\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_PORT_HTTPS\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_HOST\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_PROTO\",\n \"value\": \"udp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT\",\n \"value\": \"udp://10.96.19.182:53\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_UDP_ADDR\",\n \"value\": \"10.96.19.182\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"KUBERNETES_SERVICE_HOST\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": 
\"KUBERNETES_SERVICE_PORT\",\n \"value\": \"443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_PROTO\",\n \"value\": \"tcp\"\n },\n {\n \"key\": \"COREDNS_COREDNS_PORT_53_TCP_PORT\",\n \"value\": \"53\"\n },\n {\n \"key\": \"KUBERNETES_PORT\",\n \"value\": \"tcp://10.96.0.1:443\"\n },\n {\n \"key\": \"KUBERNETES_PORT_443_TCP_ADDR\",\n \"value\": \"10.96.0.1\"\n },\n {\n \"key\": \"COREDNS_COREDNS_SERVICE_PORT_TCP_53\",\n \"value\": \"53\"\n }\n ],\n \"image\": {\n \"image\": \"sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f\",\n \"user_specified_image\": \"coredns/coredns:1.7.1\"\n },\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"linux\": {\n \"resources\": {\n \"cpu_period\": 100000,\n \"cpu_quota\": 10000,\n \"cpu_shares\": 102,\n \"hugepage_limits\": [\n {\n \"page_size\": \"2MB\"\n },\n {\n \"page_size\": \"1GB\"\n }\n ],\n \"memory_limit_in_bytes\": 134217728,\n \"memory_swap_limit_in_bytes\": 134217728,\n \"oom_score_adj\": -997\n },\n \"security_context\": {\n \"masked_paths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespace_options\": {\n \"pid\": 1\n },\n \"readonly_paths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"run_as_user\": {},\n \"seccomp\": {\n \"profile_type\": 1\n }\n }\n },\n \"log_path\": \"coredns/1.log\",\n \"metadata\": {\n \"attempt\": 1,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"container_path\": \"/etc/coredns\",\n \"host_path\": 
\"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"readonly\": true\n },\n {\n \"container_path\": \"/etc/hosts\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\"\n },\n {\n \"container_path\": \"/dev/termination-log\",\n \"host_path\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/da7136bd\"\n }\n ]\n },\n \"pid\": 461657,\n \"removing\": false,\n \"runtimeOptions\": {\n \"systemd_cgroup\": true\n },\n \"runtimeSpec\": {\n \"annotations\": {\n \"io.kubernetes.cri.container-name\": \"coredns\",\n \"io.kubernetes.cri.container-type\": \"container\",\n \"io.kubernetes.cri.image-name\": \"coredns/coredns:1.7.1\",\n \"io.kubernetes.cri.sandbox-id\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"io.kubernetes.cri.sandbox-name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.cri.sandbox-namespace\": \"cnf-default\",\n \"io.kubernetes.cri.sandbox-uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"hooks\": {\n \"createContainer\": [\n {\n \"path\": \"/kind/bin/mount-product-files.sh\"\n }\n ]\n },\n \"linux\": {\n \"cgroupsPath\": \"kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b\",\n \"maskedPaths\": [\n \"/proc/asound\",\n \"/proc/acpi\",\n \"/proc/kcore\",\n \"/proc/keys\",\n \"/proc/latency_stats\",\n \"/proc/timer_list\",\n \"/proc/timer_stats\",\n \"/proc/sched_debug\",\n \"/proc/scsi\",\n \"/sys/firmware\",\n \"/sys/devices/virtual/powercap\"\n ],\n \"namespaces\": [\n {\n \"type\": \"pid\"\n },\n {\n \"path\": \"/proc/460724/ns/ipc\",\n \"type\": \"ipc\"\n 
},\n {\n \"path\": \"/proc/460724/ns/uts\",\n \"type\": \"uts\"\n },\n {\n \"type\": \"mount\"\n },\n {\n \"path\": \"/proc/460724/ns/net\",\n \"type\": \"network\"\n }\n ],\n \"readonlyPaths\": [\n \"/proc/bus\",\n \"/proc/fs\",\n \"/proc/irq\",\n \"/proc/sys\",\n \"/proc/sysrq-trigger\"\n ],\n \"resources\": {\n \"cpu\": {\n \"period\": 100000,\n \"quota\": 10000,\n \"shares\": 102\n },\n \"devices\": [\n {\n \"access\": \"rwm\",\n \"allow\": false\n }\n ],\n \"memory\": {\n \"limit\": 134217728,\n \"swap\": 134217728\n }\n }\n },\n \"mounts\": [\n {\n \"destination\": \"/proc\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"proc\",\n \"type\": \"proc\"\n },\n {\n \"destination\": \"/dev\",\n \"options\": [\n \"nosuid\",\n \"strictatime\",\n \"mode=755\",\n \"size=65536k\"\n ],\n \"source\": \"tmpfs\",\n \"type\": \"tmpfs\"\n },\n {\n \"destination\": \"/dev/pts\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"newinstance\",\n \"ptmxmode=0666\",\n \"mode=0620\",\n \"gid=5\"\n ],\n \"source\": \"devpts\",\n \"type\": \"devpts\"\n },\n {\n \"destination\": \"/dev/mqueue\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\"\n ],\n \"source\": \"mqueue\",\n \"type\": \"mqueue\"\n },\n {\n \"destination\": \"/sys\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"ro\"\n ],\n \"source\": \"sysfs\",\n \"type\": \"sysfs\"\n },\n {\n \"destination\": \"/sys/fs/cgroup\",\n \"options\": [\n \"nosuid\",\n \"noexec\",\n \"nodev\",\n \"relatime\",\n \"ro\"\n ],\n \"source\": \"cgroup\",\n \"type\": \"cgroup\"\n },\n {\n \"destination\": \"/etc/coredns\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hosts\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": 
\"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/termination-log\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/da7136bd\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/hostname\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/etc/resolv.conf\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/dev/shm\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"rw\"\n ],\n \"source\": \"/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm\",\n \"type\": \"bind\"\n },\n {\n \"destination\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"options\": [\n \"rbind\",\n \"rprivate\",\n \"ro\"\n ],\n \"source\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"type\": \"bind\"\n }\n ],\n \"ociVersion\": \"1.2.1\",\n \"process\": {\n \"args\": [\n \"/coredns\",\n \"-conf\",\n \"/etc/coredns/Corefile\"\n ],\n \"capabilities\": {\n \"bounding\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"effective\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n 
\"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ],\n \"permitted\": [\n \"CAP_CHOWN\",\n \"CAP_DAC_OVERRIDE\",\n \"CAP_FSETID\",\n \"CAP_FOWNER\",\n \"CAP_MKNOD\",\n \"CAP_NET_RAW\",\n \"CAP_SETGID\",\n \"CAP_SETUID\",\n \"CAP_SETFCAP\",\n \"CAP_SETPCAP\",\n \"CAP_NET_BIND_SERVICE\",\n \"CAP_SYS_CHROOT\",\n \"CAP_KILL\",\n \"CAP_AUDIT_WRITE\"\n ]\n },\n \"cwd\": \"/\",\n \"env\": [\n \"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\n \"HOSTNAME=coredns-coredns-844775b496-pkwkj\",\n \"COREDNS_COREDNS_SERVICE_PORT_UDP_53=53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53\",\n \"KUBERNETES_SERVICE_PORT_HTTPS=443\",\n \"COREDNS_COREDNS_SERVICE_HOST=10.96.19.182\",\n \"COREDNS_COREDNS_SERVICE_PORT=53\",\n \"COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_PROTO=udp\",\n \"COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182\",\n \"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_PORT=443\",\n \"COREDNS_COREDNS_PORT=udp://10.96.19.182:53\",\n \"COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182\",\n \"COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp\",\n \"KUBERNETES_SERVICE_HOST=10.96.0.1\",\n \"KUBERNETES_SERVICE_PORT=443\",\n \"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\n \"COREDNS_COREDNS_PORT_53_TCP_PORT=53\",\n \"KUBERNETES_PORT=tcp://10.96.0.1:443\",\n \"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\n \"COREDNS_COREDNS_SERVICE_PORT_TCP_53=53\"\n ],\n \"oomScoreAdj\": -997,\n \"user\": {\n \"additionalGids\": [\n 0\n ],\n \"gid\": 0,\n \"uid\": 0\n }\n },\n \"root\": {\n \"path\": \"rootfs\"\n }\n },\n \"runtimeType\": \"io.containerd.runc.v2\",\n \"sandboxID\": \"7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46\",\n \"snapshotKey\": 
\"d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b\",\n \"snapshotter\": \"overlayfs\"\n },\n \"status\": {\n \"annotations\": {\n \"io.kubernetes.container.hash\": \"30544dd1\",\n \"io.kubernetes.container.ports\": \"[{\\\"name\\\":\\\"udp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"tcp-53\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}]\",\n \"io.kubernetes.container.restartCount\": \"1\",\n \"io.kubernetes.container.terminationMessagePath\": \"/dev/termination-log\",\n \"io.kubernetes.container.terminationMessagePolicy\": \"File\",\n \"io.kubernetes.pod.terminationGracePeriod\": \"30\"\n },\n \"createdAt\": \"2025-06-12T23:15:59.160086629Z\",\n \"exitCode\": 0,\n \"finishedAt\": \"0001-01-01T00:00:00Z\",\n \"id\": \"d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b\",\n \"image\": {\n \"annotations\": {},\n \"image\": \"docker.io/coredns/coredns:1.7.1\",\n \"runtimeHandler\": \"\",\n \"userSpecifiedImage\": \"\"\n },\n \"imageId\": \"\",\n \"imageRef\": \"docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef\",\n \"labels\": {\n \"io.kubernetes.container.name\": \"coredns\",\n \"io.kubernetes.pod.name\": \"coredns-coredns-844775b496-pkwkj\",\n \"io.kubernetes.pod.namespace\": \"cnf-default\",\n \"io.kubernetes.pod.uid\": \"901ab6c0-a347-470b-9e7c-7803c9ca1d7b\"\n },\n \"logPath\": \"/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/1.log\",\n \"message\": \"\",\n \"metadata\": {\n \"attempt\": 1,\n \"name\": \"coredns\"\n },\n \"mounts\": [\n {\n \"containerPath\": \"/etc/coredns\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n 
\"containerPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": true,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/etc/hosts\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n },\n {\n \"containerPath\": \"/dev/termination-log\",\n \"gidMappings\": [],\n \"hostPath\": \"/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/da7136bd\",\n \"propagation\": \"PROPAGATION_PRIVATE\",\n \"readonly\": false,\n \"recursiveReadOnly\": false,\n \"selinuxRelabel\": false,\n \"uidMappings\": []\n }\n ],\n \"reason\": \"\",\n \"resources\": {\n \"linux\": {\n \"cpuPeriod\": \"100000\",\n \"cpuQuota\": \"10000\",\n \"cpuShares\": \"102\",\n \"cpusetCpus\": \"\",\n \"cpusetMems\": \"\",\n \"hugepageLimits\": [],\n \"memoryLimitInBytes\": \"134217728\",\n \"memorySwapLimitInBytes\": \"134217728\",\n \"oomScoreAdj\": \"-997\",\n \"unified\": {}\n }\n },\n \"startedAt\": \"2025-06-12T23:16:00.695444821Z\",\n \"state\": \"CONTAINER_RUNNING\",\n \"user\": {\n \"linux\": {\n \"gid\": \"0\",\n \"supplementalGroups\": [\n \"0\"\n ],\n \"uid\": \"0\"\n }\n }\n }\n}\n", error: "time=\"2025-06-12T23:16:25Z\" level=warning msg=\"Config \\\"/etc/crictl.yaml\\\" does not exist, trying next: \\\"/usr/local/bin/crictl.yaml\\\"\"\ntime=\"2025-06-12T23:16:25Z\" level=warning msg=\"runtime connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. 
As the default settings are now deprecated, you should set the endpoint instead.\"\n"} [2025-06-12 23:16:25] DEBUG -- CNTI: node_pid_by_container_id inspect: { "info": { "config": { "annotations": { "io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "1", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "args": [ "-conf", "/etc/coredns/Corefile" ], "envs": [ { "key": "COREDNS_COREDNS_SERVICE_PORT_UDP_53", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_TCP", "value": "tcp://10.96.19.182:53" }, { "key": "KUBERNETES_SERVICE_PORT_HTTPS", "value": "443" }, { "key": "COREDNS_COREDNS_SERVICE_HOST", "value": "10.96.19.182" }, { "key": "COREDNS_COREDNS_SERVICE_PORT", "value": "53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP", "value": "udp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_PROTO", "value": "udp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_ADDR", "value": "10.96.19.182" }, { "key": "KUBERNETES_PORT_443_TCP", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_PORT", "value": "443" }, { "key": "COREDNS_COREDNS_PORT", "value": "udp://10.96.19.182:53" }, { "key": "COREDNS_COREDNS_PORT_53_UDP_ADDR", "value": "10.96.19.182" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PROTO", "value": "tcp" }, { "key": "KUBERNETES_SERVICE_HOST", "value": "10.96.0.1" }, { "key": "KUBERNETES_SERVICE_PORT", "value": "443" }, { "key": "KUBERNETES_PORT_443_TCP_PROTO", "value": "tcp" }, { "key": "COREDNS_COREDNS_PORT_53_TCP_PORT", "value": "53" }, { "key": "KUBERNETES_PORT", "value": "tcp://10.96.0.1:443" }, { "key": "KUBERNETES_PORT_443_TCP_ADDR", "value": "10.96.0.1" }, { 
"key": "COREDNS_COREDNS_SERVICE_PORT_TCP_53", "value": "53" } ], "image": { "image": "sha256:0a6cfbf7b0b6606f404f703a3ce24f3f637437b2d06d38008c033c42a2860f5f", "user_specified_image": "coredns/coredns:1.7.1" }, "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "linux": { "resources": { "cpu_period": 100000, "cpu_quota": 10000, "cpu_shares": 102, "hugepage_limits": [ { "page_size": "2MB" }, { "page_size": "1GB" } ], "memory_limit_in_bytes": 134217728, "memory_swap_limit_in_bytes": 134217728, "oom_score_adj": -997 }, "security_context": { "masked_paths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespace_options": { "pid": 1 }, "readonly_paths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "run_as_user": {}, "seccomp": { "profile_type": 1 } } }, "log_path": "coredns/1.log", "metadata": { "attempt": 1, "name": "coredns" }, "mounts": [ { "container_path": "/etc/coredns", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "readonly": true }, { "container_path": "/var/run/secrets/kubernetes.io/serviceaccount", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "readonly": true }, { "container_path": "/etc/hosts", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts" }, { "container_path": "/dev/termination-log", "host_path": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/da7136bd" } ] }, "pid": 461657, "removing": false, "runtimeOptions": { "systemd_cgroup": true }, "runtimeSpec": { 
"annotations": { "io.kubernetes.cri.container-name": "coredns", "io.kubernetes.cri.container-type": "container", "io.kubernetes.cri.image-name": "coredns/coredns:1.7.1", "io.kubernetes.cri.sandbox-id": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "io.kubernetes.cri.sandbox-name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.cri.sandbox-namespace": "cnf-default", "io.kubernetes.cri.sandbox-uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "hooks": { "createContainer": [ { "path": "/kind/bin/mount-product-files.sh" } ] }, "linux": { "cgroupsPath": "kubelet-kubepods-pod901ab6c0_a347_470b_9e7c_7803c9ca1d7b.slice:cri-containerd:d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b", "maskedPaths": [ "/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware", "/sys/devices/virtual/powercap" ], "namespaces": [ { "type": "pid" }, { "path": "/proc/460724/ns/ipc", "type": "ipc" }, { "path": "/proc/460724/ns/uts", "type": "uts" }, { "type": "mount" }, { "path": "/proc/460724/ns/net", "type": "network" } ], "readonlyPaths": [ "/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger" ], "resources": { "cpu": { "period": 100000, "quota": 10000, "shares": 102 }, "devices": [ { "access": "rwm", "allow": false } ], "memory": { "limit": 134217728, "swap": 134217728 } } }, "mounts": [ { "destination": "/proc", "options": [ "nosuid", "noexec", "nodev" ], "source": "proc", "type": "proc" }, { "destination": "/dev", "options": [ "nosuid", "strictatime", "mode=755", "size=65536k" ], "source": "tmpfs", "type": "tmpfs" }, { "destination": "/dev/pts", "options": [ "nosuid", "noexec", "newinstance", "ptmxmode=0666", "mode=0620", "gid=5" ], "source": "devpts", "type": "devpts" }, { "destination": "/dev/mqueue", "options": [ "nosuid", "noexec", "nodev" ], "source": "mqueue", "type": "mqueue" }, { "destination": "/sys", 
"options": [ "nosuid", "noexec", "nodev", "ro" ], "source": "sysfs", "type": "sysfs" }, { "destination": "/sys/fs/cgroup", "options": [ "nosuid", "noexec", "nodev", "relatime", "ro" ], "source": "cgroup", "type": "cgroup" }, { "destination": "/etc/coredns", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "type": "bind" }, { "destination": "/etc/hosts", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "type": "bind" }, { "destination": "/dev/termination-log", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/da7136bd", "type": "bind" }, { "destination": "/etc/hostname", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/hostname", "type": "bind" }, { "destination": "/etc/resolv.conf", "options": [ "rbind", "rprivate", "rw" ], "source": "/var/lib/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/resolv.conf", "type": "bind" }, { "destination": "/dev/shm", "options": [ "rbind", "rprivate", "rw" ], "source": "/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46/shm", "type": "bind" }, { "destination": "/var/run/secrets/kubernetes.io/serviceaccount", "options": [ "rbind", "rprivate", "ro" ], "source": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "type": "bind" } ], "ociVersion": "1.2.1", "process": { "args": [ "/coredns", "-conf", "/etc/coredns/Corefile" ], "capabilities": { "bounding": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", 
"CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "effective": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ], "permitted": [ "CAP_CHOWN", "CAP_DAC_OVERRIDE", "CAP_FSETID", "CAP_FOWNER", "CAP_MKNOD", "CAP_NET_RAW", "CAP_SETGID", "CAP_SETUID", "CAP_SETFCAP", "CAP_SETPCAP", "CAP_NET_BIND_SERVICE", "CAP_SYS_CHROOT", "CAP_KILL", "CAP_AUDIT_WRITE" ] }, "cwd": "/", "env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "HOSTNAME=coredns-coredns-844775b496-pkwkj", "COREDNS_COREDNS_SERVICE_PORT_UDP_53=53", "COREDNS_COREDNS_PORT_53_UDP_PORT=53", "COREDNS_COREDNS_PORT_53_TCP=tcp://10.96.19.182:53", "KUBERNETES_SERVICE_PORT_HTTPS=443", "COREDNS_COREDNS_SERVICE_HOST=10.96.19.182", "COREDNS_COREDNS_SERVICE_PORT=53", "COREDNS_COREDNS_PORT_53_UDP=udp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_UDP_PROTO=udp", "COREDNS_COREDNS_PORT_53_TCP_ADDR=10.96.19.182", "KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_PORT=443", "COREDNS_COREDNS_PORT=udp://10.96.19.182:53", "COREDNS_COREDNS_PORT_53_UDP_ADDR=10.96.19.182", "COREDNS_COREDNS_PORT_53_TCP_PROTO=tcp", "KUBERNETES_SERVICE_HOST=10.96.0.1", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT_443_TCP_PROTO=tcp", "COREDNS_COREDNS_PORT_53_TCP_PORT=53", "KUBERNETES_PORT=tcp://10.96.0.1:443", "KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1", "COREDNS_COREDNS_SERVICE_PORT_TCP_53=53" ], "oomScoreAdj": -997, "user": { "additionalGids": [ 0 ], "gid": 0, "uid": 0 } }, "root": { "path": "rootfs" } }, "runtimeType": "io.containerd.runc.v2", "sandboxID": "7c562390a121dee9b5a54d792eb297c1347333c943fc2b1996598ac3d49a5c46", "snapshotKey": "d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b", "snapshotter": "overlayfs" }, "status": { "annotations": { 
"io.kubernetes.container.hash": "30544dd1", "io.kubernetes.container.ports": "[{\"name\":\"udp-53\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"tcp-53\",\"containerPort\":53,\"protocol\":\"TCP\"}]", "io.kubernetes.container.restartCount": "1", "io.kubernetes.container.terminationMessagePath": "/dev/termination-log", "io.kubernetes.container.terminationMessagePolicy": "File", "io.kubernetes.pod.terminationGracePeriod": "30" }, "createdAt": "2025-06-12T23:15:59.160086629Z", "exitCode": 0, "finishedAt": "0001-01-01T00:00:00Z", "id": "d56a9fde60d372727f9e36ddb103414de41713df1a21582ce65390b248f6487b", "image": { "annotations": {}, "image": "docker.io/coredns/coredns:1.7.1", "runtimeHandler": "", "userSpecifiedImage": "" }, "imageId": "", "imageRef": "docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef", "labels": { "io.kubernetes.container.name": "coredns", "io.kubernetes.pod.name": "coredns-coredns-844775b496-pkwkj", "io.kubernetes.pod.namespace": "cnf-default", "io.kubernetes.pod.uid": "901ab6c0-a347-470b-9e7c-7803c9ca1d7b" }, "logPath": "/var/log/pods/cnf-default_coredns-coredns-844775b496-pkwkj_901ab6c0-a347-470b-9e7c-7803c9ca1d7b/coredns/1.log", "message": "", "metadata": { "attempt": 1, "name": "coredns" }, "mounts": [ { "containerPath": "/etc/coredns", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~configmap/config-volume", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/var/run/secrets/kubernetes.io/serviceaccount", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/volumes/kubernetes.io~projected/kube-api-access-6jswd", "propagation": "PROPAGATION_PRIVATE", "readonly": true, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/etc/hosts", "gidMappings": [], 
"hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/etc-hosts", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] }, { "containerPath": "/dev/termination-log", "gidMappings": [], "hostPath": "/var/lib/kubelet/pods/901ab6c0-a347-470b-9e7c-7803c9ca1d7b/containers/coredns/da7136bd", "propagation": "PROPAGATION_PRIVATE", "readonly": false, "recursiveReadOnly": false, "selinuxRelabel": false, "uidMappings": [] } ], "reason": "", "resources": { "linux": { "cpuPeriod": "100000", "cpuQuota": "10000", "cpuShares": "102", "cpusetCpus": "", "cpusetMems": "", "hugepageLimits": [], "memoryLimitInBytes": "134217728", "memorySwapLimitInBytes": "134217728", "oomScoreAdj": "-997", "unified": {} } }, "startedAt": "2025-06-12T23:16:00.695444821Z", "state": "CONTAINER_RUNNING", "user": { "linux": { "gid": "0", "supplementalGroups": [ "0" ], "uid": "0" } } } } [2025-06-12 23:16:25] INFO -- CNTI: node_pid_by_container_id pid: 461657 [2025-06-12 23:16:25] INFO -- CNTI: pids [2025-06-12 23:16:25] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:25] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:26] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:26] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:26] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:26] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: 
"1\n1751\n180\n196\n306\n394\n398\n442\n449\n459066\n459091\n459330\n459856\n459880\n460026\n460699\n460724\n461657\n461846\n497\n689\n801\n826\n858\nacpi\nbootconfig\nbuddyinfo\nbus\ncgroups\ncmdline\nconsoles\ncpuinfo\ncrypto\ndevices\ndiskstats\ndma\ndriver\ndynamic_debug\nexecdomains\nfb\nfilesystems\nfs\ninterrupts\niomem\nioports\nirq\nkallsyms\nkcore\nkey-users\nkeys\nkmsg\nkpagecgroup\nkpagecount\nkpageflags\nloadavg\nlocks\nmdstat\nmeminfo\nmisc\nmodules\nmounts\nmtrr\nnet\npagetypeinfo\npartitions\npressure\nschedstat\nscsi\nself\nslabinfo\nsoftirqs\nstat\nswaps\nsys\nsysrq-trigger\nsysvipc\nthread-self\ntimer_list\ntty\nuptime\nversion\nversion_signature\nvmallocinfo\nvmstat\nzoneinfo\n", error: ""} [2025-06-12 23:16:26] INFO -- CNTI: pids ls_proc: {status: Process::Status[0], output: "1\n1751\n180\n196\n306\n394\n398\n442\n449\n459066\n459091\n459330\n459856\n459880\n460026\n460699\n460724\n461657\n461846\n497\n689\n801\n826\n858\nacpi\nbootconfig\nbuddyinfo\nbus\ncgroups\ncmdline\nconsoles\ncpuinfo\ncrypto\ndevices\ndiskstats\ndma\ndriver\ndynamic_debug\nexecdomains\nfb\nfilesystems\nfs\ninterrupts\niomem\nioports\nirq\nkallsyms\nkcore\nkey-users\nkeys\nkmsg\nkpagecgroup\nkpagecount\nkpageflags\nloadavg\nlocks\nmdstat\nmeminfo\nmisc\nmodules\nmounts\nmtrr\nnet\npagetypeinfo\npartitions\npressure\nschedstat\nscsi\nself\nslabinfo\nsoftirqs\nstat\nswaps\nsys\nsysrq-trigger\nsysvipc\nthread-self\ntimer_list\ntty\nuptime\nversion\nversion_signature\nvmallocinfo\nvmstat\nzoneinfo\n", error: ""} [2025-06-12 23:16:26] DEBUG -- CNTI: parse_ls ls: 1 1751 180 196 306 394 398 442 449 459066 459091 459330 459856 459880 460026 460699 460724 461657 461846 497 689 801 826 858 acpi bootconfig buddyinfo bus cgroups cmdline consoles cpuinfo crypto devices diskstats dma driver dynamic_debug execdomains fb filesystems fs interrupts iomem ioports irq kallsyms kcore key-users keys kmsg kpagecgroup kpagecount kpageflags loadavg locks mdstat meminfo misc modules mounts mtrr 
net pagetypeinfo partitions pressure schedstat scsi self slabinfo softirqs stat swaps sys sysrq-trigger sysvipc thread-self timer_list tty uptime version version_signature vmallocinfo vmstat zoneinfo [2025-06-12 23:16:26] DEBUG -- CNTI: parse_ls parsed: ["1", "1751", "180", "196", "306", "394", "398", "442", "449", "459066", "459091", "459330", "459856", "459880", "460026", "460699", "460724", "461657", "461846", "497", "689", "801", "826", "858", "acpi", "bootconfig", "buddyinfo", "bus", "cgroups", "cmdline", "consoles", "cpuinfo", "crypto", "devices", "diskstats", "dma", "driver", "dynamic_debug", "execdomains", "fb", "filesystems", "fs", "interrupts", "iomem", "ioports", "irq", "kallsyms", "kcore", "key-users", "keys", "kmsg", "kpagecgroup", "kpagecount", "kpageflags", "loadavg", "locks", "mdstat", "meminfo", "misc", "modules", "mounts", "mtrr", "net", "pagetypeinfo", "partitions", "pressure", "schedstat", "scsi", "self", "slabinfo", "softirqs", "stat", "swaps", "sys", "sysrq-trigger", "sysvipc", "thread-self", "timer_list", "tty", "uptime", "version", "version_signature", "vmallocinfo", "vmstat", "zoneinfo"] [2025-06-12 23:16:26] DEBUG -- CNTI: pids_from_ls_proc ls: ["1", "1751", "180", "196", "306", "394", "398", "442", "449", "459066", "459091", "459330", "459856", "459880", "460026", "460699", "460724", "461657", "461846", "497", "689", "801", "826", "858", "acpi", "bootconfig", "buddyinfo", "bus", "cgroups", "cmdline", "consoles", "cpuinfo", "crypto", "devices", "diskstats", "dma", "driver", "dynamic_debug", "execdomains", "fb", "filesystems", "fs", "interrupts", "iomem", "ioports", "irq", "kallsyms", "kcore", "key-users", "keys", "kmsg", "kpagecgroup", "kpagecount", "kpageflags", "loadavg", "locks", "mdstat", "meminfo", "misc", "modules", "mounts", "mtrr", "net", "pagetypeinfo", "partitions", "pressure", "schedstat", "scsi", "self", "slabinfo", "softirqs", "stat", "swaps", "sys", "sysrq-trigger", "sysvipc", "thread-self", "timer_list", "tty", "uptime", 
"version", "version_signature", "vmallocinfo", "vmstat", "zoneinfo"] [2025-06-12 23:16:26] DEBUG -- CNTI: pids_from_ls_proc pids: ["1", "1751", "180", "196", "306", "394", "398", "442", "449", "459066", "459091", "459330", "459856", "459880", "460026", "460699", "460724", "461657", "461846", "497", "689", "801", "826", "858"] [2025-06-12 23:16:26] INFO -- CNTI: all_statuses_by_pids [2025-06-12 23:16:26] INFO -- CNTI: all_statuses_by_pids pid: 1 [2025-06-12 23:16:26] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:26] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:26] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:26] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:26] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 32188 kB\nVmSize:\t 31300 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 23396 kB\nVmRSS:\t 22548 kB\nRssAnon:\t 13964 kB\nRssFile:\t 8584 kB\nRssShmem:\t 0 kB\nVmData:\t 13292 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 96 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t113273\nnonvoluntary_ctxt_switches:\t5668\n", error: ""} [2025-06-12 23:16:26] INFO -- CNTI: all_statuses_by_pids pid: 1751 [2025-06-12 23:16:26] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:26] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:26] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:26] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:26] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:26] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:27] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1751\nNgid:\t0\nPid:\t1751\nPPid:\t858\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1751\t888\nNSpid:\t1751\t888\nNSpgid:\t858\t1\nNSsid:\t858\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 
kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 48 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:27] INFO -- CNTI: all_statuses_by_pids pid: 180 [2025-06-12 23:16:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:27] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t180\nNgid:\t0\nPid:\t180\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t180\nNSpid:\t180\nNSpgid:\t180\nNSsid:\t180\nVmPeak:\t 147684 kB\nVmSize:\t 147684 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 113448 kB\nVmRSS:\t 113448 kB\nRssAnon:\t 1132 kB\nRssFile:\t 6836 kB\nRssShmem:\t 105480 kB\nVmData:\t 640 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 324 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000000000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t88566\nnonvoluntary_ctxt_switches:\t239\n", error: ""} [2025-06-12 23:16:27] INFO -- CNTI: all_statuses_by_pids pid: 196 [2025-06-12 23:16:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:27] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:27] INFO -- 
CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:27] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t196\nNgid:\t0\nPid:\t196\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t1024\nGroups:\t0 \nNStgid:\t196\nNSpid:\t196\nNSpgid:\t196\nNSsid:\t196\nVmPeak:\t 7992632 kB\nVmSize:\t 7760548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 149992 kB\nVmRSS:\t 94980 kB\nRssAnon:\t 57388 kB\nRssFile:\t 37592 kB\nRssShmem:\t 0 kB\nVmData:\t 752892 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1160 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t65\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t124\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:27] INFO -- CNTI: all_statuses_by_pids pid: 306 [2025-06-12 23:16:27] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:27] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:27] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:27] 
INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:27] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:27] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:28] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t306\nNgid:\t558684\nPid:\t306\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t306\nNSpid:\t306\nNSpgid:\t306\nNSsid:\t306\nVmPeak:\t 7363536 kB\nVmSize:\t 7363536 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 108068 kB\nVmRSS:\t 100244 kB\nRssAnon:\t 61408 kB\nRssFile:\t 38836 kB\nRssShmem:\t 0 kB\nVmData:\t 812016 kB\nVmStk:\t 132 kB\nVmExe:\t 35360 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1096 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t82\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t745048\nnonvoluntary_ctxt_switches:\t757\n", error: ""} [2025-06-12 23:16:28] INFO -- CNTI: all_statuses_by_pids pid: 394 [2025-06-12 23:16:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 
23:16:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:28] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t394\nNgid:\t0\nPid:\t394\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t394\nNSpid:\t394\nNSpgid:\t394\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10260 kB\nVmRSS:\t 10012 kB\nRssAnon:\t 3280 kB\nRssFile:\t 6732 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:28] INFO -- CNTI: all_statuses_by_pids pid: 398 [2025-06-12 23:16:28] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:28] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:28] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:28] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:28] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:28] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:29] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t398\nNgid:\t0\nPid:\t398\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t398\nNSpid:\t398\nNSpgid:\t398\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11268 kB\nVmRSS:\t 10920 kB\nRssAnon:\t 3420 kB\nRssFile:\t 7500 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:29] INFO -- CNTI: all_statuses_by_pids pid: 442 [2025-06-12 23:16:29] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:29] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:29] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:29] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:29] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:29] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t442\nNgid:\t0\nPid:\t442\nPPid:\t394\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t442\t1\nNSpid:\t442\t1\nNSpgid:\t442\t1\nNSsid:\t442\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 
kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t28\nnonvoluntary_ctxt_switches:\t8\n", error: ""} [2025-06-12 23:16:29] INFO -- CNTI: all_statuses_by_pids pid: 449 [2025-06-12 23:16:29] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:29] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:29] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:29] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:29] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:29] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t449\nNgid:\t0\nPid:\t449\nPPid:\t398\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t449\t1\nNSpid:\t449\t1\nNSpgid:\t449\t1\nNSsid:\t449\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t133\nnonvoluntary_ctxt_switches:\t10\n", error: ""} [2025-06-12 23:16:29] INFO -- CNTI: all_statuses_by_pids pid: 459066 [2025-06-12 23:16:29] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:29] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:29] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:29] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:29] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:29] INFO -- 
CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:30] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459066\nNgid:\t0\nPid:\t459066\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459066\nNSpid:\t459066\nNSpgid:\t459066\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11840 kB\nVmRSS:\t 11384 kB\nRssAnon:\t 3948 kB\nRssFile:\t 7436 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t38\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:30] INFO -- CNTI: all_statuses_by_pids pid: 459091 [2025-06-12 23:16:30] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:30] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 
23:16:30] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:30] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:30] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:30] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459091\nNgid:\t0\nPid:\t459091\nPPid:\t459066\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t459091\nNSpid:\t459091\nNSpgid:\t459091\nNSsid:\t459091\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t29\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-06-12 23:16:30] INFO -- CNTI: all_statuses_by_pids pid: 459330 [2025-06-12 23:16:30] INFO -- CNTI: exec_by_node: Called 
with JSON [2025-06-12 23:16:30] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:30] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:30] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:30] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:30] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459330\nNgid:\t0\nPid:\t459330\nPPid:\t459066\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459330\nNSpid:\t459330\nNSpgid:\t459330\nNSsid:\t459330\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 924 kB\nVmRSS:\t 924 kB\nRssAnon:\t 88 kB\nRssFile:\t 836 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t45\nnonvoluntary_ctxt_switches:\t8\n", error: ""} [2025-06-12 23:16:30] INFO -- CNTI: all_statuses_by_pids pid: 459856 [2025-06-12 23:16:30] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:30] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:30] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:31] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:31] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:31] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:31] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459856\nNgid:\t0\nPid:\t459856\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459856\nNSpid:\t459856\nNSpgid:\t459856\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10468 kB\nVmRSS:\t 10420 kB\nRssAnon:\t 3316 kB\nRssFile:\t 7104 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t11\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:31] INFO -- CNTI: all_statuses_by_pids pid: 459880 [2025-06-12 23:16:31] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:31] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:31] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:31] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:31] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:31] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459880\nNgid:\t0\nPid:\t459880\nPPid:\t459856\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t459880\t1\nNSpid:\t459880\t1\nNSpgid:\t459880\t1\nNSsid:\t459880\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 
0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t29\nnonvoluntary_ctxt_switches:\t8\n", error: ""} [2025-06-12 23:16:31] INFO -- CNTI: all_statuses_by_pids pid: 460026 [2025-06-12 23:16:31] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:31] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:31] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:31] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:31] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:31] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:32] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tchaos-operator\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t460026\nNgid:\t0\nPid:\t460026\nPPid:\t459856\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t460026\t1\nNSpid:\t460026\t1\nNSpgid:\t460026\t1\nNSsid:\t460026\t1\nVmPeak:\t 1262188 kB\nVmSize:\t 1262188 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 37776 kB\nVmRSS:\t 37776 kB\nRssAnon:\t 14556 kB\nRssFile:\t 23220 kB\nRssShmem:\t 0 kB\nVmData:\t 67012 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 192 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t312\nnonvoluntary_ctxt_switches:\t8\n", error: ""} [2025-06-12 23:16:32] INFO -- CNTI: all_statuses_by_pids pid: 460699 [2025-06-12 23:16:32] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:32] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:32] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:32] DEBUG -- CNTI: cluster_tools_pod_name: 
cluster-tools-m6zbj [2025-06-12 23:16:32] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:32] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460699\nNgid:\t0\nPid:\t460699\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460699\nNSpid:\t460699\nNSpgid:\t460699\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10688 kB\nVmRSS:\t 10440 kB\nRssAnon:\t 3388 kB\nRssFile:\t 7052 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t6/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:32] INFO -- CNTI: all_statuses_by_pids pid: 460724 [2025-06-12 23:16:32] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:32] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:32] DEBUG -- 
CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:32] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:32] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:32] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:32] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460724\nNgid:\t0\nPid:\t460724\nPPid:\t460699\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t460724\t1\nNSpid:\t460724\t1\nNSpgid:\t460724\t1\nNSsid:\t460724\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t33\nnonvoluntary_ctxt_switches:\t14\n", error: ""} [2025-06-12 23:16:32] INFO -- CNTI: 
all_statuses_by_pids pid: 461657 [2025-06-12 23:16:32] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:32] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:32] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:32] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:32] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:32] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:33] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t461657\nNgid:\t0\nPid:\t461657\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t461657\t1\nNSpid:\t461657\t1\nNSpgid:\t461657\t1\nNSsid:\t461657\t1\nVmPeak:\t 748236 kB\nVmSize:\t 748236 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38116 kB\nVmRSS:\t 38116 kB\nRssAnon:\t 10256 kB\nRssFile:\t 27860 kB\nRssShmem:\t 0 kB\nVmData:\t 108424 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 184 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t20\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t213\nnonvoluntary_ctxt_switches:\t15\n", error: ""} [2025-06-12 23:16:33] INFO -- CNTI: all_statuses_by_pids pid: 461846 [2025-06-12 23:16:33] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:33] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:33] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:33] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:33] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:33] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:33] WARN -- CNTI-KubectlClient.Utils.exec.cmd: stderr: cat: /proc/461846/status: No such file or directory command terminated with exit code 1 [2025-06-12 23:16:33] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[1], output: "", error: "cat: /proc/461846/status: No such file or directory\ncommand terminated with exit code 1\n"} [2025-06-12 23:16:33] INFO -- CNTI: all_statuses_by_pids pid: 497 [2025-06-12 23:16:33] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:33] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:33] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:33] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:33] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:33] INFO -- 
CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:34] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t497\nNgid:\t559558\nPid:\t497\nPPid:\t394\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t497\t1\nNSpid:\t497\t1\nNSpgid:\t497\t1\nNSsid:\t497\t1\nVmPeak:\t 1296940 kB\nVmSize:\t 1296940 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 57204 kB\nVmRSS:\t 26228 kB\nRssAnon:\t 15088 kB\nRssFile:\t 11140 kB\nRssShmem:\t 0 kB\nVmData:\t 70032 kB\nVmStk:\t 132 kB\nVmExe:\t 29500 kB\nVmLib:\t 8 kB\nVmPTE:\t 276 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t32\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t17876\nnonvoluntary_ctxt_switches:\t64\n", error: ""} [2025-06-12 23:16:34] INFO -- CNTI: all_statuses_by_pids pid: 689 [2025-06-12 23:16:34] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:34] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:34] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 
23:16:34] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:34] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:34] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:34] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t689\nNgid:\t0\nPid:\t689\nPPid:\t398\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t689\t1\nNSpid:\t689\t1\nNSpgid:\t689\t1\nNSsid:\t689\t1\nVmPeak:\t 1285960 kB\nVmSize:\t 1285960 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 50012 kB\nVmRSS:\t 23788 kB\nRssAnon:\t 13332 kB\nRssFile:\t 10456 kB\nRssShmem:\t 0 kB\nVmData:\t 64400 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 260 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t36\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba3a00\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t537\nnonvoluntary_ctxt_switches:\t13\n", error: ""} [2025-06-12 23:16:34] INFO -- CNTI: all_statuses_by_pids pid: 801 [2025-06-12 23:16:34] INFO -- CNTI: exec_by_node: Called with JSON 
[2025-06-12 23:16:34] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:34] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:34] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:34] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:34] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:34] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t801\nNgid:\t0\nPid:\t801\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t801\nNSpid:\t801\nNSpgid:\t801\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10272 kB\nVmRSS:\t 9948 kB\nRssAnon:\t 3112 kB\nRssFile:\t 6836 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", error: ""} [2025-06-12 23:16:34] INFO -- CNTI: all_statuses_by_pids pid: 826 [2025-06-12 23:16:34] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:34] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:34] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:34] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:34] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:34] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:35] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t826\nNgid:\t0\nPid:\t826\nPPid:\t801\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t826\t1\nNSpid:\t826\t1\nNSpgid:\t826\t1\nNSsid:\t826\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t26\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-06-12 23:16:35] INFO -- CNTI: all_statuses_by_pids pid: 858 [2025-06-12 23:16:35] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:35] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:35] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:35] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:35] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:35] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "Name:\tsh\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t858\nNgid:\t0\nPid:\t858\nPPid:\t801\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t858\t1\nNSpid:\t858\t1\nNSpgid:\t858\t1\nNSsid:\t858\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 972 
kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 48 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t916\nnonvoluntary_ctxt_switches:\t7\n", error: ""} [2025-06-12 23:16:35] DEBUG -- CNTI: proc process_statuses_by_node: ["Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 32188 kB\nVmSize:\t 31300 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 23396 kB\nVmRSS:\t 22548 kB\nRssAnon:\t 13964 kB\nRssFile:\t 8584 kB\nRssShmem:\t 0 kB\nVmData:\t 13292 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 96 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t113273\nnonvoluntary_ctxt_switches:\t5668\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1751\nNgid:\t0\nPid:\t1751\nPPid:\t858\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1751\t888\nNSpid:\t1751\t888\nNSpgid:\t858\t1\nNSsid:\t858\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 48 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t180\nNgid:\t0\nPid:\t180\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t180\nNSpid:\t180\nNSpgid:\t180\nNSsid:\t180\nVmPeak:\t 147684 kB\nVmSize:\t 147684 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 113448 kB\nVmRSS:\t 113448 kB\nRssAnon:\t 1132 kB\nRssFile:\t 6836 kB\nRssShmem:\t 105480 kB\nVmData:\t 640 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 324 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000000000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t88566\nnonvoluntary_ctxt_switches:\t239\n", 
"Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t196\nNgid:\t0\nPid:\t196\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t1024\nGroups:\t0 \nNStgid:\t196\nNSpid:\t196\nNSpgid:\t196\nNSsid:\t196\nVmPeak:\t 7992632 kB\nVmSize:\t 7760548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 149992 kB\nVmRSS:\t 94980 kB\nRssAnon:\t 57388 kB\nRssFile:\t 37592 kB\nRssShmem:\t 0 kB\nVmData:\t 752892 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1160 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t65\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t124\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t306\nNgid:\t558684\nPid:\t306\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t306\nNSpid:\t306\nNSpgid:\t306\nNSsid:\t306\nVmPeak:\t 7363536 kB\nVmSize:\t 7363536 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 108068 kB\nVmRSS:\t 100244 kB\nRssAnon:\t 61408 kB\nRssFile:\t 38836 kB\nRssShmem:\t 0 kB\nVmData:\t 812016 kB\nVmStk:\t 132 kB\nVmExe:\t 35360 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1096 kB\nVmSwap:\t 0 
kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t82\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t745048\nnonvoluntary_ctxt_switches:\t757\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t394\nNgid:\t0\nPid:\t394\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t394\nNSpid:\t394\nNSpgid:\t394\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10260 kB\nVmRSS:\t 10012 kB\nRssAnon:\t 3280 kB\nRssFile:\t 6732 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t398\nNgid:\t0\nPid:\t398\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t398\nNSpid:\t398\nNSpgid:\t398\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11268 kB\nVmRSS:\t 10920 kB\nRssAnon:\t 3420 kB\nRssFile:\t 7500 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t442\nNgid:\t0\nPid:\t442\nPPid:\t394\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t442\t1\nNSpid:\t442\t1\nNSpgid:\t442\t1\nNSsid:\t442\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t28\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t449\nNgid:\t0\nPid:\t449\nPPid:\t398\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t449\t1\nNSpid:\t449\t1\nNSpgid:\t449\t1\nNSsid:\t449\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t133\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459066\nNgid:\t0\nPid:\t459066\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459066\nNSpid:\t459066\nNSpgid:\t459066\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11840 kB\nVmRSS:\t 11384 kB\nRssAnon:\t 3948 kB\nRssFile:\t 7436 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t38\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459091\nNgid:\t0\nPid:\t459091\nPPid:\t459066\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t459091\nNSpid:\t459091\nNSpgid:\t459091\nNSsid:\t459091\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t29\nnonvoluntary_ctxt_switches:\t7\n", 
"Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459330\nNgid:\t0\nPid:\t459330\nPPid:\t459066\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459330\nNSpid:\t459330\nNSpgid:\t459330\nNSsid:\t459330\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 924 kB\nVmRSS:\t 924 kB\nRssAnon:\t 88 kB\nRssFile:\t 836 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t45\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459856\nNgid:\t0\nPid:\t459856\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459856\nNSpid:\t459856\nNSpgid:\t459856\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10468 kB\nVmRSS:\t 10420 kB\nRssAnon:\t 3316 kB\nRssFile:\t 7104 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 
kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t11\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459880\nNgid:\t0\nPid:\t459880\nPPid:\t459856\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t459880\t1\nNSpid:\t459880\t1\nNSpgid:\t459880\t1\nNSsid:\t459880\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread 
vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t29\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460026\nNgid:\t0\nPid:\t460026\nPPid:\t459856\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t460026\t1\nNSpid:\t460026\t1\nNSpgid:\t460026\t1\nNSsid:\t460026\t1\nVmPeak:\t 1262188 kB\nVmSize:\t 1262188 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 37776 kB\nVmRSS:\t 37776 kB\nRssAnon:\t 14556 kB\nRssFile:\t 23220 kB\nRssShmem:\t 0 kB\nVmData:\t 67012 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 192 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t312\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460699\nNgid:\t0\nPid:\t460699\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460699\nNSpid:\t460699\nNSpgid:\t460699\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10688 kB\nVmRSS:\t 10440 kB\nRssAnon:\t 3388 kB\nRssFile:\t 7052 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t6/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t460724\nNgid:\t0\nPid:\t460724\nPPid:\t460699\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t460724\t1\nNSpid:\t460724\t1\nNSpgid:\t460724\t1\nNSsid:\t460724\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t33\nnonvoluntary_ctxt_switches:\t14\n", "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t461657\nNgid:\t0\nPid:\t461657\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t461657\t1\nNSpid:\t461657\t1\nNSpgid:\t461657\t1\nNSsid:\t461657\t1\nVmPeak:\t 748236 kB\nVmSize:\t 748236 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38116 kB\nVmRSS:\t 38116 kB\nRssAnon:\t 10256 kB\nRssFile:\t 27860 kB\nRssShmem:\t 0 kB\nVmData:\t 108424 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 184 kB\nVmSwap:\t 
0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t20\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t213\nnonvoluntary_ctxt_switches:\t15\n", "Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t497\nNgid:\t559558\nPid:\t497\nPPid:\t394\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t497\t1\nNSpid:\t497\t1\nNSpgid:\t497\t1\nNSsid:\t497\t1\nVmPeak:\t 1296940 kB\nVmSize:\t 1296940 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 57204 kB\nVmRSS:\t 26228 kB\nRssAnon:\t 15088 kB\nRssFile:\t 11140 kB\nRssShmem:\t 0 kB\nVmData:\t 70032 kB\nVmStk:\t 132 kB\nVmExe:\t 29500 kB\nVmLib:\t 8 kB\nVmPTE:\t 276 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t32\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t17876\nnonvoluntary_ctxt_switches:\t64\n", "Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t689\nNgid:\t0\nPid:\t689\nPPid:\t398\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t689\t1\nNSpid:\t689\t1\nNSpgid:\t689\t1\nNSsid:\t689\t1\nVmPeak:\t 1285960 kB\nVmSize:\t 1285960 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 50012 kB\nVmRSS:\t 23788 kB\nRssAnon:\t 13332 kB\nRssFile:\t 10456 kB\nRssShmem:\t 0 kB\nVmData:\t 64400 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 260 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t36\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba3a00\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t537\nnonvoluntary_ctxt_switches:\t13\n", 
"Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t801\nNgid:\t0\nPid:\t801\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t801\nNSpid:\t801\nNSpgid:\t801\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10272 kB\nVmRSS:\t 9948 kB\nRssAnon:\t 3112 kB\nRssFile:\t 6836 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t826\nNgid:\t0\nPid:\t826\nPPid:\t801\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t826\t1\nNSpid:\t826\t1\nNSpgid:\t826\t1\nNSsid:\t826\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 
kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t26\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsh\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t858\nNgid:\t0\nPid:\t858\nPPid:\t801\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t858\t1\nNSpid:\t858\t1\nNSpgid:\t858\t1\nNSsid:\t858\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 972 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 48 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t916\nnonvoluntary_ctxt_switches:\t7\n"] [2025-06-12 23:16:35] INFO -- CNTI-proctree_by_pid: proctree_by_pid potential_parent_pid: 461657 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: proc_statuses: ["Name:\tsystemd\nUmask:\t0000\nState:\tS (sleeping)\nTgid:\t1\nNgid:\t0\nPid:\t1\nPPid:\t0\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t1\nNSpid:\t1\nNSpgid:\t1\nNSsid:\t1\nVmPeak:\t 32188 kB\nVmSize:\t 31300 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 23396 kB\nVmRSS:\t 22548 kB\nRssAnon:\t 13964 kB\nRssFile:\t 8584 kB\nRssShmem:\t 0 kB\nVmData:\t 13292 kB\nVmStk:\t 132 kB\nVmExe:\t 40 kB\nVmLib:\t 10688 kB\nVmPTE:\t 96 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t7fe3c0fe28014a03\nSigIgn:\t0000000000001000\nSigCgt:\t00000000000004ec\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t113273\nnonvoluntary_ctxt_switches:\t5668\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t1751\nNgid:\t0\nPid:\t1751\nPPid:\t858\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t1751\t888\nNSpid:\t1751\t888\nNSpgid:\t858\t1\nNSsid:\t858\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1532 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 20 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 48 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t1\nnonvoluntary_ctxt_switches:\t0\n", 
"Name:\tsystemd-journal\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t180\nNgid:\t0\nPid:\t180\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t180\nNSpid:\t180\nNSpgid:\t180\nNSsid:\t180\nVmPeak:\t 147684 kB\nVmSize:\t 147684 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 113448 kB\nVmRSS:\t 113448 kB\nRssAnon:\t 1132 kB\nRssFile:\t 6836 kB\nRssShmem:\t 105480 kB\nVmData:\t 640 kB\nVmStk:\t 132 kB\nVmExe:\t 92 kB\nVmLib:\t 9736 kB\nVmPTE:\t 324 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000400004a02\nSigIgn:\t0000000000001000\nSigCgt:\t0000000000000040\nCapInh:\t0000000000000000\nCapPrm:\t00000025402800cf\nCapEff:\t00000025402800cf\nCapBnd:\t00000025402800cf\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t20\nSpeculation_Store_Bypass:\tthread force mitigated\nSpeculationIndirectBranch:\tconditional force disabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t88566\nnonvoluntary_ctxt_switches:\t239\n", "Name:\tcontainerd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t196\nNgid:\t0\nPid:\t196\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t1024\nGroups:\t0 \nNStgid:\t196\nNSpid:\t196\nNSpgid:\t196\nNSsid:\t196\nVmPeak:\t 7992632 kB\nVmSize:\t 7760548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 149992 kB\nVmRSS:\t 94980 kB\nRssAnon:\t 57388 kB\nRssFile:\t 37592 kB\nRssShmem:\t 0 kB\nVmData:\t 752892 kB\nVmStk:\t 132 kB\nVmExe:\t 18236 kB\nVmLib:\t 1524 kB\nVmPTE:\t 1160 kB\nVmSwap:\t 
0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t65\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t124\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tkubelet\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t306\nNgid:\t558684\nPid:\t306\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t256\nGroups:\t0 \nNStgid:\t306\nNSpid:\t306\nNSpgid:\t306\nNSsid:\t306\nVmPeak:\t 7363536 kB\nVmSize:\t 7363536 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 108068 kB\nVmRSS:\t 100244 kB\nRssAnon:\t 61408 kB\nRssFile:\t 38836 kB\nRssShmem:\t 0 kB\nVmData:\t 812016 kB\nVmStk:\t 132 kB\nVmExe:\t 35360 kB\nVmLib:\t 1560 kB\nVmPTE:\t 1096 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t82\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t745048\nnonvoluntary_ctxt_switches:\t757\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t394\nNgid:\t0\nPid:\t394\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t394\nNSpid:\t394\nNSpgid:\t394\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10260 kB\nVmRSS:\t 10012 kB\nRssAnon:\t 3280 kB\nRssFile:\t 6732 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t8\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t398\nNgid:\t0\nPid:\t398\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t398\nNSpid:\t398\nNSpgid:\t398\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11268 kB\nVmRSS:\t 10920 kB\nRssAnon:\t 3420 kB\nRssFile:\t 7500 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 104 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t442\nNgid:\t0\nPid:\t442\nPPid:\t394\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t442\t1\nNSpid:\t442\t1\nNSpgid:\t442\t1\nNSsid:\t442\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t28\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t449\nNgid:\t0\nPid:\t449\nPPid:\t398\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t449\t1\nNSpid:\t449\t1\nNSpgid:\t449\t1\nNSsid:\t449\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t133\nnonvoluntary_ctxt_switches:\t10\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459066\nNgid:\t0\nPid:\t459066\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459066\nNSpid:\t459066\nNSpgid:\t459066\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 11840 kB\nVmRSS:\t 11384 kB\nRssAnon:\t 3948 kB\nRssFile:\t 7436 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t38\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS 
(sleeping)\nTgid:\t459091\nNgid:\t0\nPid:\t459091\nPPid:\t459066\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t459091\nNSpid:\t459091\nNSpgid:\t459091\nNSsid:\t459091\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t29\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsleep\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459330\nNgid:\t0\nPid:\t459330\nPPid:\t459066\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459330\nNSpid:\t459330\nNSpgid:\t459330\nNSsid:\t459330\nVmPeak:\t 2488 kB\nVmSize:\t 2488 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 924 kB\nVmRSS:\t 924 kB\nRssAnon:\t 88 kB\nRssFile:\t 836 kB\nRssShmem:\t 0 kB\nVmData:\t 224 kB\nVmStk:\t 132 kB\nVmExe:\t 20 kB\nVmLib:\t 1524 kB\nVmPTE:\t 44 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000000000\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t45\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459856\nNgid:\t0\nPid:\t459856\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t459856\nNSpid:\t459856\nNSpgid:\t459856\nNSsid:\t196\nVmPeak:\t 1233548 kB\nVmSize:\t 1233548 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10468 kB\nVmRSS:\t 10420 kB\nRssAnon:\t 3316 kB\nRssFile:\t 7104 kB\nRssShmem:\t 0 kB\nVmData:\t 40760 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 112 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t11\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t9\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t459880\nNgid:\t0\nPid:\t459880\nPPid:\t459856\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t459880\t1\nNSpid:\t459880\t1\nNSpgid:\t459880\t1\nNSsid:\t459880\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t29\nnonvoluntary_ctxt_switches:\t8\n", 
"Name:\tchaos-operator\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460026\nNgid:\t0\nPid:\t460026\nPPid:\t459856\nTracerPid:\t0\nUid:\t1000\t1000\t1000\t1000\nGid:\t1000\t1000\t1000\t1000\nFDSize:\t64\nGroups:\t1000 \nNStgid:\t460026\t1\nNSpid:\t460026\t1\nNSpgid:\t460026\t1\nNSsid:\t460026\t1\nVmPeak:\t 1262188 kB\nVmSize:\t 1262188 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 37776 kB\nVmRSS:\t 37776 kB\nRssAnon:\t 14556 kB\nRssFile:\t 23220 kB\nRssShmem:\t 0 kB\nVmData:\t 67012 kB\nVmStk:\t 132 kB\nVmExe:\t 15232 kB\nVmLib:\t 8 kB\nVmPTE:\t 192 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t34\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t312\nnonvoluntary_ctxt_switches:\t8\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460699\nNgid:\t0\nPid:\t460699\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t460699\nNSpid:\t460699\nNSpgid:\t460699\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10688 kB\nVmRSS:\t 10440 kB\nRssAnon:\t 3388 kB\nRssFile:\t 7052 kB\nRssShmem:\t 0 kB\nVmData:\t 41016 kB\nVmStk:\t 132 
kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 108 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t6/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t10\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t460724\nNgid:\t0\nPid:\t460724\nPPid:\t460699\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t460724\t1\nNSpid:\t460724\t1\nNSpgid:\t460724\t1\nNSsid:\t460724\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 
kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t2\nSeccomp_filters:\t1\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t33\nnonvoluntary_ctxt_switches:\t14\n", "Name:\tcoredns\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t461657\nNgid:\t0\nPid:\t461657\nPPid:\t460699\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t461657\t1\nNSpid:\t461657\t1\nNSpgid:\t461657\t1\nNSsid:\t461657\t1\nVmPeak:\t 748236 kB\nVmSize:\t 748236 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 38116 kB\nVmRSS:\t 38116 kB\nRssAnon:\t 10256 kB\nRssFile:\t 27860 kB\nRssShmem:\t 0 kB\nVmData:\t 108424 kB\nVmStk:\t 132 kB\nVmExe:\t 22032 kB\nVmLib:\t 8 kB\nVmPTE:\t 184 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t20\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffe7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80425fb\nCapEff:\t00000000a80425fb\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t213\nnonvoluntary_ctxt_switches:\t15\n", "Name:\tkube-proxy\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t497\nNgid:\t559558\nPid:\t497\nPPid:\t394\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t497\t1\nNSpid:\t497\t1\nNSpgid:\t497\t1\nNSsid:\t497\t1\nVmPeak:\t 1296940 kB\nVmSize:\t 1296940 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 57204 kB\nVmRSS:\t 26228 kB\nRssAnon:\t 15088 kB\nRssFile:\t 11140 kB\nRssShmem:\t 0 kB\nVmData:\t 70032 kB\nVmStk:\t 132 kB\nVmExe:\t 29500 kB\nVmLib:\t 8 kB\nVmPTE:\t 276 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t32\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t17876\nnonvoluntary_ctxt_switches:\t64\n", 
"Name:\tkindnetd\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t689\nNgid:\t0\nPid:\t689\nPPid:\t398\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t689\t1\nNSpid:\t689\t1\nNSpgid:\t689\t1\nNSsid:\t689\t1\nVmPeak:\t 1285960 kB\nVmSize:\t 1285960 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 50012 kB\nVmRSS:\t 23788 kB\nRssAnon:\t 13332 kB\nRssFile:\t 10456 kB\nRssShmem:\t 0 kB\nVmData:\t 64400 kB\nVmStk:\t 132 kB\nVmExe:\t 25108 kB\nVmLib:\t 8 kB\nVmPTE:\t 260 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t36\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba3a00\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t00000000a80435fb\nCapEff:\t00000000a80435fb\nCapBnd:\t00000000a80435fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t537\nnonvoluntary_ctxt_switches:\t13\n", "Name:\tcontainerd-shim\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t801\nNgid:\t0\nPid:\t801\nPPid:\t1\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t801\nNSpid:\t801\nNSpgid:\t801\nNSsid:\t196\nVmPeak:\t 1233804 kB\nVmSize:\t 1233804 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 10272 kB\nVmRSS:\t 9948 kB\nRssAnon:\t 3112 kB\nRssFile:\t 6836 kB\nRssShmem:\t 0 kB\nVmData:\t 45112 kB\nVmStk:\t 132 kB\nVmExe:\t 3632 kB\nVmLib:\t 8 kB\nVmPTE:\t 116 kB\nVmSwap:\t 0 
kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t12\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\tfffffffc3bba2800\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffd7fc1feff\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t7\nnonvoluntary_ctxt_switches:\t0\n", "Name:\tpause\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t826\nNgid:\t0\nPid:\t826\nPPid:\t801\nTracerPid:\t0\nUid:\t65535\t65535\t65535\t65535\nGid:\t65535\t65535\t65535\t65535\nFDSize:\t64\nGroups:\t65535 \nNStgid:\t826\t1\nNSpid:\t826\t1\nNSpgid:\t826\t1\nNSsid:\t826\t1\nVmPeak:\t 1020 kB\nVmSize:\t 1020 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 4 kB\nVmRSS:\t 4 kB\nRssAnon:\t 4 kB\nRssFile:\t 0 kB\nRssShmem:\t 0 kB\nVmData:\t 152 kB\nVmStk:\t 132 kB\nVmExe:\t 536 kB\nVmLib:\t 8 kB\nVmPTE:\t 28 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t0/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\t0000000000014002\nCapInh:\t0000000000000000\nCapPrm:\t0000000000000000\nCapEff:\t0000000000000000\nCapBnd:\t00000000a80425fb\nCapAmb:\t0000000000000000\nNoNewPrivs:\t1\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional 
enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t26\nnonvoluntary_ctxt_switches:\t7\n", "Name:\tsh\nUmask:\t0022\nState:\tS (sleeping)\nTgid:\t858\nNgid:\t0\nPid:\t858\nPPid:\t801\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 1 2 3 4 6 10 11 20 26 27 \nNStgid:\t858\t1\nNSpid:\t858\t1\nNSpgid:\t858\t1\nNSsid:\t858\t1\nVmPeak:\t 3552 kB\nVmSize:\t 1564 kB\nVmLck:\t 0 kB\nVmPin:\t 0 kB\nVmHWM:\t 972 kB\nVmRSS:\t 84 kB\nRssAnon:\t 80 kB\nRssFile:\t 4 kB\nRssShmem:\t 0 kB\nVmData:\t 52 kB\nVmStk:\t 132 kB\nVmExe:\t 788 kB\nVmLib:\t 556 kB\nVmPTE:\t 48 kB\nVmSwap:\t 0 kB\nHugetlbPages:\t 0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t1\nSigQ:\t4/256660\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000004\nSigCgt:\t0000000000010002\nCapInh:\t0000000000000000\nCapPrm:\t000001ffffffffff\nCapEff:\t000001ffffffffff\nCapBnd:\t000001ffffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSeccomp_filters:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nSpeculationIndirectBranch:\tconditional enabled\nCpus_allowed:\tffffff,ffffffff,ffffffff\nCpus_allowed_list:\t0-87\nMems_allowed:\t00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003\nMems_allowed_list:\t0-1\nvoluntary_ctxt_switches:\t916\nnonvoluntary_ctxt_switches:\t7\n"] [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status 
status_output: Name: systemd Umask: 0000 State: S (sleeping) Tgid: 1 Ngid: 0 Pid: 1 PPid: 0 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 NStgid: 1 NSpid: 1 NSpgid: 1 NSsid: 1 VmPeak: 32188 kB VmSize: 31300 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 23396 kB VmRSS: 22548 kB RssAnon: 13964 kB RssFile: 8584 kB RssShmem: 0 kB VmData: 13292 kB VmStk: 132 kB VmExe: 40 kB VmLib: 10688 kB VmPTE: 96 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 7fe3c0fe28014a03 SigIgn: 0000000000001000 SigCgt: 00000000000004ec CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 113273 nonvoluntary_ctxt_switches: 5668 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "systemd", "Umask" => "0000", "State" => "S (sleeping)", "Tgid" => "1", "Ngid" => "0", "Pid" => "1", "PPid" => "0", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "256", "Groups" => "0", "NStgid" => "1", "NSpid" => "1", "NSpgid" => "1", "NSsid" => "1", "VmPeak" => "32188 kB", "VmSize" => "31300 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "23396 kB", "VmRSS" => "22548 kB", "RssAnon" => "13964 kB", "RssFile" => "8584 kB", "RssShmem" => "0 kB", "VmData" => "13292 kB", "VmStk" => "132 kB", "VmExe" => "40 kB", "VmLib" => "10688 kB", "VmPTE" => "96 
kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "7fe3c0fe28014a03", "SigIgn" => "0000000000001000", "SigCgt" => "00000000000004ec", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "113273", "nonvoluntary_ctxt_switches" => "5668"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: sleep Umask: 0022 State: S (sleeping) Tgid: 1751 Ngid: 0 Pid: 1751 PPid: 858 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 1 2 3 4 6 10 11 20 26 27 NStgid: 1751 888 NSpid: 1751 888 NSpgid: 858 1 NSsid: 858 1 VmPeak: 3552 kB VmSize: 1532 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 20 kB VmStk: 132 kB VmExe: 788 kB VmLib: 556 kB VmPTE: 48 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 
NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 1 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "1751", "Ngid" => "0", "Pid" => "1751", "PPid" => "858", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0 1 2 3 4 6 10 11 20 26 27", "NStgid" => "1751\t888", "NSpid" => "1751\t888", "NSpgid" => "858\t1", "NSsid" => "858\t1", "VmPeak" => "3552 kB", "VmSize" => "1532 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "20 kB", "VmStk" => "132 kB", "VmExe" => "788 kB", "VmLib" => "556 kB", "VmPTE" => "48 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", 
"Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "1", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: systemd-journal Umask: 0022 State: S (sleeping) Tgid: 180 Ngid: 0 Pid: 180 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 180 NSpid: 180 NSpgid: 180 NSsid: 180 VmPeak: 147684 kB VmSize: 147684 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 113448 kB VmRSS: 113448 kB RssAnon: 1132 kB RssFile: 6836 kB RssShmem: 105480 kB VmData: 640 kB VmStk: 132 kB VmExe: 92 kB VmLib: 9736 kB VmPTE: 324 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000400004a02 SigIgn: 0000000000001000 SigCgt: 0000000000000040 CapInh: 0000000000000000 CapPrm: 00000025402800cf CapEff: 00000025402800cf CapBnd: 00000025402800cf CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 20 Speculation_Store_Bypass: thread force mitigated SpeculationIndirectBranch: conditional force disabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 88566 nonvoluntary_ctxt_switches: 239 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "systemd-journal", "Umask" => "0022", 
"State" => "S (sleeping)", "Tgid" => "180", "Ngid" => "0", "Pid" => "180", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "180", "NSpid" => "180", "NSpgid" => "180", "NSsid" => "180", "VmPeak" => "147684 kB", "VmSize" => "147684 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "113448 kB", "VmRSS" => "113448 kB", "RssAnon" => "1132 kB", "RssFile" => "6836 kB", "RssShmem" => "105480 kB", "VmData" => "640 kB", "VmStk" => "132 kB", "VmExe" => "92 kB", "VmLib" => "9736 kB", "VmPTE" => "324 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000400004a02", "SigIgn" => "0000000000001000", "SigCgt" => "0000000000000040", "CapInh" => "0000000000000000", "CapPrm" => "00000025402800cf", "CapEff" => "00000025402800cf", "CapBnd" => "00000025402800cf", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "20", "Speculation_Store_Bypass" => "thread force mitigated", "SpeculationIndirectBranch" => "conditional force disabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "88566", "nonvoluntary_ctxt_switches" => "239"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: containerd Umask: 0022 State: S (sleeping) Tgid: 196 Ngid: 0 Pid: 196 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 1024 Groups: 0 NStgid: 196 NSpid: 196 
NSpgid: 196 NSsid: 196 VmPeak: 7992632 kB VmSize: 7760548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 149992 kB VmRSS: 94980 kB RssAnon: 57388 kB RssFile: 37592 kB RssShmem: 0 kB VmData: 752892 kB VmStk: 132 kB VmExe: 18236 kB VmLib: 1524 kB VmPTE: 1160 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 65 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 124 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "196", "Ngid" => "0", "Pid" => "196", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "1024", "Groups" => "0", "NStgid" => "196", "NSpid" => "196", "NSpgid" => "196", "NSsid" => "196", "VmPeak" => "7992632 kB", "VmSize" => "7760548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "149992 kB", "VmRSS" => "94980 kB", "RssAnon" => "57388 kB", "RssFile" => "37592 kB", "RssShmem" => "0 kB", "VmData" => "752892 kB", "VmStk" => "132 kB", "VmExe" => "18236 kB", "VmLib" => "1524 kB", "VmPTE" => "1160 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "65", "SigQ" => "4/256660", 
"SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "124", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: kubelet Umask: 0022 State: S (sleeping) Tgid: 306 Ngid: 558684 Pid: 306 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 256 Groups: 0 NStgid: 306 NSpid: 306 NSpgid: 306 NSsid: 306 VmPeak: 7363536 kB VmSize: 7363536 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 108068 kB VmRSS: 100244 kB RssAnon: 61408 kB RssFile: 38836 kB RssShmem: 0 kB VmData: 812016 kB VmStk: 132 kB VmExe: 35360 kB VmLib: 1560 kB VmPTE: 1096 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 82 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled 
Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 745048 nonvoluntary_ctxt_switches: 757 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kubelet", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "306", "Ngid" => "558684", "Pid" => "306", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "256", "Groups" => "0", "NStgid" => "306", "NSpid" => "306", "NSpgid" => "306", "NSsid" => "306", "VmPeak" => "7363536 kB", "VmSize" => "7363536 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "108068 kB", "VmRSS" => "100244 kB", "RssAnon" => "61408 kB", "RssFile" => "38836 kB", "RssShmem" => "0 kB", "VmData" => "812016 kB", "VmStk" => "132 kB", "VmExe" => "35360 kB", "VmLib" => "1560 kB", "VmPTE" => "1096 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "82", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "745048", "nonvoluntary_ctxt_switches" => "757"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 394 Ngid: 0 Pid: 394 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 394 NSpid: 394 NSpgid: 394 NSsid: 196 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10260 kB VmRSS: 10012 kB RssAnon: 3280 kB RssFile: 6732 kB RssShmem: 0 kB VmData: 41016 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 8 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => 
"394", "Ngid" => "0", "Pid" => "394", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "394", "NSpid" => "394", "NSpgid" => "394", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10260 kB", "VmRSS" => "10012 kB", "RssAnon" => "3280 kB", "RssFile" => "6732 kB", "RssShmem" => "0 kB", "VmData" => "41016 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "8", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 398 Ngid: 0 Pid: 398 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 398 NSpid: 398 NSpgid: 398 NSsid: 196 VmPeak: 1233548 kB VmSize: 1233548 kB 
VmLck: 0 kB VmPin: 0 kB VmHWM: 11268 kB VmRSS: 10920 kB RssAnon: 3420 kB RssFile: 7500 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 104 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 9 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "398", "Ngid" => "0", "Pid" => "398", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "398", "NSpid" => "398", "NSpgid" => "398", "NSsid" => "196", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11268 kB", "VmRSS" => "10920 kB", "RssAnon" => "3420 kB", "RssFile" => "7500 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "104 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => 
"fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "9", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 442 Ngid: 0 Pid: 442 PPid: 394 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 442 1 NSpid: 442 1 NSpgid: 442 1 NSsid: 442 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 28 nonvoluntary_ctxt_switches: 8 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "442", "Ngid" => "0", "Pid" => "442", "PPid" => "394", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "442\t1", "NSpid" => "442\t1", "NSpgid" => "442\t1", "NSsid" => "442\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "28", "nonvoluntary_ctxt_switches" => "8"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 449 Ngid: 0 Pid: 449 PPid: 398 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 449 1 NSpid: 449 1 NSpgid: 449 1 NSsid: 449 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 133 nonvoluntary_ctxt_switches: 10 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => 
"449", "Ngid" => "0", "Pid" => "449", "PPid" => "398", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "449\t1", "NSpid" => "449\t1", "NSpgid" => "449\t1", "NSsid" => "449\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "133", "nonvoluntary_ctxt_switches" => "10"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 459066 Ngid: 0 Pid: 459066 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 459066 NSpid: 459066 NSpgid: 459066 
NSsid: 196 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 11840 kB VmRSS: 11384 kB RssAnon: 3948 kB RssFile: 7436 kB RssShmem: 0 kB VmData: 45112 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 38 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "459066", "Ngid" => "0", "Pid" => "459066", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "459066", "NSpid" => "459066", "NSpgid" => "459066", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "11840 kB", "VmRSS" => "11384 kB", "RssAnon" => "3948 kB", "RssFile" => "7436 kB", "RssShmem" => "0 kB", "VmData" => "45112 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => 
"0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "38", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 459091 Ngid: 0 Pid: 459091 PPid: 459066 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 459091 NSpid: 459091 NSpgid: 459091 NSsid: 459091 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled 
Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 29 nonvoluntary_ctxt_switches: 7 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "459091", "Ngid" => "0", "Pid" => "459091", "PPid" => "459066", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "459091", "NSpid" => "459091", "NSpgid" => "459091", "NSsid" => "459091", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "29", "nonvoluntary_ctxt_switches" => "7"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: sleep Umask: 0022 State: S (sleeping) Tgid: 459330 Ngid: 0 Pid: 459330 PPid: 459066 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 459330 NSpid: 459330 NSpgid: 459330 NSsid: 459330 VmPeak: 2488 kB VmSize: 2488 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 924 kB VmRSS: 924 kB RssAnon: 88 kB RssFile: 836 kB RssShmem: 0 kB VmData: 224 kB VmStk: 132 kB VmExe: 20 kB VmLib: 1524 kB VmPTE: 44 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000000000 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 45 nonvoluntary_ctxt_switches: 8 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sleep", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "459330", "Ngid" 
=> "0", "Pid" => "459330", "PPid" => "459066", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "459330", "NSpid" => "459330", "NSpgid" => "459330", "NSsid" => "459330", "VmPeak" => "2488 kB", "VmSize" => "2488 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "924 kB", "VmRSS" => "924 kB", "RssAnon" => "88 kB", "RssFile" => "836 kB", "RssShmem" => "0 kB", "VmData" => "224 kB", "VmStk" => "132 kB", "VmExe" => "20 kB", "VmLib" => "1524 kB", "VmPTE" => "44 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000000000", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "45", "nonvoluntary_ctxt_switches" => "8"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 459856 Ngid: 0 Pid: 459856 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 459856 NSpid: 459856 NSpgid: 459856 NSsid: 196 VmPeak: 1233548 kB VmSize: 
1233548 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10468 kB VmRSS: 10420 kB RssAnon: 3316 kB RssFile: 7104 kB RssShmem: 0 kB VmData: 40760 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 112 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 11 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 9 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "459856", "Ngid" => "0", "Pid" => "459856", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "459856", "NSpid" => "459856", "NSpgid" => "459856", "NSsid" => "196", "VmPeak" => "1233548 kB", "VmSize" => "1233548 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10468 kB", "VmRSS" => "10420 kB", "RssAnon" => "3316 kB", "RssFile" => "7104 kB", "RssShmem" => "0 kB", "VmData" => "40760 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "112 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "11", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => 
"0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "9", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 459880 Ngid: 0 Pid: 459880 PPid: 459856 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 459880 1 NSpid: 459880 1 NSpgid: 459880 1 NSsid: 459880 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: 
ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 29 nonvoluntary_ctxt_switches: 8 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "459880", "Ngid" => "0", "Pid" => "459880", "PPid" => "459856", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "459880\t1", "NSpid" => "459880\t1", "NSpgid" => "459880\t1", "NSsid" => "459880\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "29", "nonvoluntary_ctxt_switches" => "8"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: chaos-operator Umask: 0022 State: S (sleeping) Tgid: 460026 Ngid: 0 Pid: 460026 PPid: 459856 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 1000 NStgid: 460026 1 NSpid: 460026 1 NSpgid: 460026 1 NSsid: 460026 1 VmPeak: 1262188 kB VmSize: 1262188 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 37776 kB VmRSS: 37776 kB RssAnon: 14556 kB RssFile: 23220 kB RssShmem: 0 kB VmData: 67012 kB VmStk: 132 kB VmExe: 15232 kB VmLib: 8 kB VmPTE: 192 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 34 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 312 nonvoluntary_ctxt_switches: 8 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "chaos-operator", 
"Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460026", "Ngid" => "0", "Pid" => "460026", "PPid" => "459856", "TracerPid" => "0", "Uid" => "1000\t1000\t1000\t1000", "Gid" => "1000\t1000\t1000\t1000", "FDSize" => "64", "Groups" => "1000", "NStgid" => "460026\t1", "NSpid" => "460026\t1", "NSpgid" => "460026\t1", "NSsid" => "460026\t1", "VmPeak" => "1262188 kB", "VmSize" => "1262188 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "37776 kB", "VmRSS" => "37776 kB", "RssAnon" => "14556 kB", "RssFile" => "23220 kB", "RssShmem" => "0 kB", "VmData" => "67012 kB", "VmStk" => "132 kB", "VmExe" => "15232 kB", "VmLib" => "8 kB", "VmPTE" => "192 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "34", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "312", "nonvoluntary_ctxt_switches" => "8"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 460699 Ngid: 0 Pid: 460699 PPid: 1 TracerPid: 0 
Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 460699 NSpid: 460699 NSpgid: 460699 NSsid: 196 VmPeak: 1233804 kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10688 kB VmRSS: 10440 kB RssAnon: 3388 kB RssFile: 7052 kB RssShmem: 0 kB VmData: 41016 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 108 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 6/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 10 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460699", "Ngid" => "0", "Pid" => "460699", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "460699", "NSpid" => "460699", "NSpgid" => "460699", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10688 kB", "VmRSS" => "10440 kB", "RssAnon" => "3388 kB", "RssFile" => "7052 kB", "RssShmem" => "0 kB", "VmData" => "41016 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "108 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", 
"CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "6/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "10", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 460724 Ngid: 0 Pid: 460724 PPid: 460699 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 460724 1 NSpid: 460724 1 NSpgid: 460724 1 NSsid: 460724 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 2 
Seccomp_filters: 1 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 33 nonvoluntary_ctxt_switches: 14 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "460724", "Ngid" => "0", "Pid" => "460724", "PPid" => "460699", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "460724\t1", "NSpid" => "460724\t1", "NSpgid" => "460724\t1", "NSsid" => "460724\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "2", "Seccomp_filters" => "1", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", 
"Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "33", "nonvoluntary_ctxt_switches" => "14"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] DEBUG -- CNTI: parse_status status_output: Name: coredns Umask: 0022 State: S (sleeping) Tgid: 461657 Ngid: 0 Pid: 461657 PPid: 460699 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 461657 1 NSpid: 461657 1 NSpgid: 461657 1 NSsid: 461657 1 VmPeak: 748236 kB VmSize: 748236 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 38116 kB VmRSS: 38116 kB RssAnon: 10256 kB RssFile: 27860 kB RssShmem: 0 kB VmData: 108424 kB VmStk: 132 kB VmExe: 22032 kB VmLib: 8 kB VmPTE: 184 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 20 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffe7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80425fb CapEff: 00000000a80425fb CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 213 nonvoluntary_ctxt_switches: 15 [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "coredns", "Umask" => "0022", 
"State" => "S (sleeping)", "Tgid" => "461657", "Ngid" => "0", "Pid" => "461657", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "461657\t1", "NSpid" => "461657\t1", "NSpgid" => "461657\t1", "NSsid" => "461657\t1", "VmPeak" => "748236 kB", "VmSize" => "748236 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "38116 kB", "VmRSS" => "38116 kB", "RssAnon" => "10256 kB", "RssFile" => "27860 kB", "RssShmem" => "0 kB", "VmData" => "108424 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "184 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "20", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "213", "nonvoluntary_ctxt_switches" => "15"} [2025-06-12 23:16:35] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:35] INFO -- CNTI: cmdline_by_pid [2025-06-12 23:16:35] INFO -- CNTI: exec_by_node: Called with JSON [2025-06-12 23:16:35] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on 
nodes [2025-06-12 23:16:35] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:35] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:35] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj [2025-06-12 23:16:35] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj [2025-06-12 23:16:36] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:16:36] INFO -- CNTI: cmdline_by_node cmdline: {status: Process::Status[0], output: "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000", error: ""} [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: current_pid == potential_parent_pid [2025-06-12 23:16:36] DEBUG -- CNTI: parse_status status_output: Name: kube-proxy Umask: 0022 State: S (sleeping) Tgid: 497 Ngid: 559558 Pid: 497 PPid: 394 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 497 1 NSpid: 497 1 NSpgid: 497 1 NSsid: 497 1 VmPeak: 1296940 kB VmSize: 1296940 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 57204 kB VmRSS: 26228 kB RssAnon: 15088 kB RssFile: 11140 kB RssShmem: 0 kB VmData: 70032 kB VmStk: 132 kB VmExe: 29500 kB VmLib: 8 kB VmPTE: 276 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 32 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 17876 nonvoluntary_ctxt_switches: 64 [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kube-proxy", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "497", "Ngid" => "559558", "Pid" => "497", "PPid" => "394", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "497\t1", "NSpid" => "497\t1", "NSpgid" => "497\t1", "NSsid" => "497\t1", "VmPeak" => "1296940 kB", "VmSize" => "1296940 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "57204 kB", "VmRSS" => "26228 kB", "RssAnon" => "15088 kB", "RssFile" => "11140 kB", "RssShmem" => "0 kB", "VmData" => "70032 kB", "VmStk" => "132 kB", "VmExe" => "29500 kB", "VmLib" => "8 kB", "VmPTE" => "276 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "32", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "17876", "nonvoluntary_ctxt_switches" => "64"} [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:36] DEBUG -- CNTI: parse_status status_output: Name: kindnetd Umask: 0022 State: S (sleeping) Tgid: 689 Ngid: 0 Pid: 689 PPid: 398 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 689 1 NSpid: 689 1 NSpgid: 689 1 NSsid: 689 1 VmPeak: 1285960 kB VmSize: 1285960 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 50012 kB VmRSS: 23788 kB RssAnon: 13332 kB RssFile: 10456 kB RssShmem: 0 kB VmData: 64400 kB VmStk: 132 kB VmExe: 25108 kB VmLib: 8 kB VmPTE: 260 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 36 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba3a00 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 00000000a80435fb CapEff: 00000000a80435fb CapBnd: 00000000a80435fb CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 537 nonvoluntary_ctxt_switches: 13 [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "kindnetd", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => 
"689", "Ngid" => "0", "Pid" => "689", "PPid" => "398", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "689\t1", "NSpid" => "689\t1", "NSpgid" => "689\t1", "NSsid" => "689\t1", "VmPeak" => "1285960 kB", "VmSize" => "1285960 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "50012 kB", "VmRSS" => "23788 kB", "RssAnon" => "13332 kB", "RssFile" => "10456 kB", "RssShmem" => "0 kB", "VmData" => "64400 kB", "VmStk" => "132 kB", "VmExe" => "25108 kB", "VmLib" => "8 kB", "VmPTE" => "260 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "36", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "fffffffc3bba3a00", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80435fb", "CapEff" => "00000000a80435fb", "CapBnd" => "00000000a80435fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "537", "nonvoluntary_ctxt_switches" => "13"} [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:36] DEBUG -- CNTI: parse_status status_output: Name: containerd-shim Umask: 0022 State: S (sleeping) Tgid: 801 Ngid: 0 Pid: 801 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 NStgid: 801 NSpid: 801 NSpgid: 801 NSsid: 196 VmPeak: 1233804 
kB VmSize: 1233804 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 10272 kB VmRSS: 9948 kB RssAnon: 3112 kB RssFile: 6836 kB RssShmem: 0 kB VmData: 45112 kB VmStk: 132 kB VmExe: 3632 kB VmLib: 8 kB VmPTE: 116 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 12 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: fffffffc3bba2800 SigIgn: 0000000000000000 SigCgt: fffffffd7fc1feff CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 7 nonvoluntary_ctxt_switches: 0 [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "containerd-shim", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "801", "Ngid" => "0", "Pid" => "801", "PPid" => "1", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0", "NStgid" => "801", "NSpid" => "801", "NSpgid" => "801", "NSsid" => "196", "VmPeak" => "1233804 kB", "VmSize" => "1233804 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "10272 kB", "VmRSS" => "9948 kB", "RssAnon" => "3112 kB", "RssFile" => "6836 kB", "RssShmem" => "0 kB", "VmData" => "45112 kB", "VmStk" => "132 kB", "VmExe" => "3632 kB", "VmLib" => "8 kB", "VmPTE" => "116 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "12", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => 
"0000000000000000", "SigBlk" => "fffffffc3bba2800", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffd7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "7", "nonvoluntary_ctxt_switches" => "0"} [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:36] DEBUG -- CNTI: parse_status status_output: Name: pause Umask: 0022 State: S (sleeping) Tgid: 826 Ngid: 0 Pid: 826 PPid: 801 TracerPid: 0 Uid: 65535 65535 65535 65535 Gid: 65535 65535 65535 65535 FDSize: 64 Groups: 65535 NStgid: 826 1 NSpid: 826 1 NSpgid: 826 1 NSsid: 826 1 VmPeak: 1020 kB VmSize: 1020 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 4 kB VmRSS: 4 kB RssAnon: 4 kB RssFile: 0 kB RssShmem: 0 kB VmData: 152 kB VmStk: 132 kB VmExe: 536 kB VmLib: 8 kB VmPTE: 28 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 0/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000000 SigCgt: 0000000000014002 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 00000000a80425fb CapAmb: 0000000000000000 NoNewPrivs: 1 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff 
Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 26 nonvoluntary_ctxt_switches: 7 [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "pause", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "826", "Ngid" => "0", "Pid" => "826", "PPid" => "801", "TracerPid" => "0", "Uid" => "65535\t65535\t65535\t65535", "Gid" => "65535\t65535\t65535\t65535", "FDSize" => "64", "Groups" => "65535", "NStgid" => "826\t1", "NSpid" => "826\t1", "NSpgid" => "826\t1", "NSsid" => "826\t1", "VmPeak" => "1020 kB", "VmSize" => "1020 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "4 kB", "VmRSS" => "4 kB", "RssAnon" => "4 kB", "RssFile" => "0 kB", "RssShmem" => "0 kB", "VmData" => "152 kB", "VmStk" => "132 kB", "VmExe" => "536 kB", "VmLib" => "8 kB", "VmPTE" => "28 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "0/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "0000000000014002", "CapInh" => "0000000000000000", "CapPrm" => "0000000000000000", "CapEff" => "0000000000000000", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "1", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => 
"00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "26", "nonvoluntary_ctxt_switches" => "7"} [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:36] DEBUG -- CNTI: parse_status status_output: Name: sh Umask: 0022 State: S (sleeping) Tgid: 858 Ngid: 0 Pid: 858 PPid: 801 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 64 Groups: 0 1 2 3 4 6 10 11 20 26 27 NStgid: 858 1 NSpid: 858 1 NSpgid: 858 1 NSsid: 858 1 VmPeak: 3552 kB VmSize: 1564 kB VmLck: 0 kB VmPin: 0 kB VmHWM: 972 kB VmRSS: 84 kB RssAnon: 80 kB RssFile: 4 kB RssShmem: 0 kB VmData: 52 kB VmStk: 132 kB VmExe: 788 kB VmLib: 556 kB VmPTE: 48 kB VmSwap: 0 kB HugetlbPages: 0 kB CoreDumping: 0 THP_enabled: 1 Threads: 1 SigQ: 4/256660 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000000000004 SigCgt: 0000000000010002 CapInh: 0000000000000000 CapPrm: 000001ffffffffff CapEff: 000001ffffffffff CapBnd: 000001ffffffffff CapAmb: 0000000000000000 NoNewPrivs: 0 Seccomp: 0 Seccomp_filters: 0 Speculation_Store_Bypass: thread vulnerable SpeculationIndirectBranch: conditional enabled Cpus_allowed: ffffff,ffffffff,ffffffff Cpus_allowed_list: 0-87 Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003 Mems_allowed_list: 0-1 voluntary_ctxt_switches: 916 nonvoluntary_ctxt_switches: 7 [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: parsed_status: {"Name" => "sh", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "858", "Ngid" 
=> "0", "Pid" => "858", "PPid" => "801", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", "Groups" => "0 1 2 3 4 6 10 11 20 26 27", "NStgid" => "858\t1", "NSpid" => "858\t1", "NSpgid" => "858\t1", "NSsid" => "858\t1", "VmPeak" => "3552 kB", "VmSize" => "1564 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "972 kB", "VmRSS" => "84 kB", "RssAnon" => "80 kB", "RssFile" => "4 kB", "RssShmem" => "0 kB", "VmData" => "52 kB", "VmStk" => "132 kB", "VmExe" => "788 kB", "VmLib" => "556 kB", "VmPTE" => "48 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "1", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000004", "SigCgt" => "0000000000010002", "CapInh" => "0000000000000000", "CapPrm" => "000001ffffffffff", "CapEff" => "000001ffffffffff", "CapBnd" => "000001ffffffffff", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "916", "nonvoluntary_ctxt_switches" => "7"} [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: proctree: [{"Name" => "coredns", "Umask" => "0022", "State" => "S (sleeping)", "Tgid" => "461657", "Ngid" => "0", "Pid" => "461657", "PPid" => "460699", "TracerPid" => "0", "Uid" => "0\t0\t0\t0", "Gid" => "0\t0\t0\t0", "FDSize" => "64", 
"Groups" => "0", "NStgid" => "461657\t1", "NSpid" => "461657\t1", "NSpgid" => "461657\t1", "NSsid" => "461657\t1", "VmPeak" => "748236 kB", "VmSize" => "748236 kB", "VmLck" => "0 kB", "VmPin" => "0 kB", "VmHWM" => "38116 kB", "VmRSS" => "38116 kB", "RssAnon" => "10256 kB", "RssFile" => "27860 kB", "RssShmem" => "0 kB", "VmData" => "108424 kB", "VmStk" => "132 kB", "VmExe" => "22032 kB", "VmLib" => "8 kB", "VmPTE" => "184 kB", "VmSwap" => "0 kB", "HugetlbPages" => "0 kB", "CoreDumping" => "0", "THP_enabled" => "1", "Threads" => "20", "SigQ" => "4/256660", "SigPnd" => "0000000000000000", "ShdPnd" => "0000000000000000", "SigBlk" => "0000000000000000", "SigIgn" => "0000000000000000", "SigCgt" => "fffffffe7fc1feff", "CapInh" => "0000000000000000", "CapPrm" => "00000000a80425fb", "CapEff" => "00000000a80425fb", "CapBnd" => "00000000a80425fb", "CapAmb" => "0000000000000000", "NoNewPrivs" => "0", "Seccomp" => "0", "Seccomp_filters" => "0", "Speculation_Store_Bypass" => "thread vulnerable", "SpeculationIndirectBranch" => "conditional enabled", "Cpus_allowed" => "ffffff,ffffffff,ffffffff", "Cpus_allowed_list" => "0-87", "Mems_allowed" => "00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003", "Mems_allowed_list" => "0-1", "voluntary_ctxt_switches" => "213", "nonvoluntary_ctxt_switches" => "15", "cmdline" => "/coredns\u0000-conf\u0000/etc/coredns/Corefile\u0000"}] [2025-06-12 23:16:36] DEBUG -- CNTI-proctree_by_pid: [2025-06-12 23:16:36] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes [2025-06-12 23:16:36] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods [2025-06-12 23:16:36] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj [2025-06-12 23:16:36] DEBUG -- CNTI: 
cluster_tools_pod_name: cluster-tools-m6zbj
[2025-06-12 23:16:36] INFO -- CNTI-KubectlClient.Utils.exec_bg: Exec background command in pod cluster-tools-m6zbj
[2025-06-12 23:16:36] DEBUG -- CNTI: ClusterTools exec: {process: #), @wait_count=2, @channel=#>, output: "", error: ""}
[2025-06-12 23:16:37] DEBUG -- CNTI: Time left: 9 seconds
[2025-06-12 23:16:37] INFO -- CNTI-sig_term_handled: Attached strace to PIDs: 461657
[2025-06-12 23:16:40] INFO -- CNTI: exec_by_node: Called with JSON
[2025-06-12 23:16:40] DEBUG -- CNTI-KubectlClient.Get.pods_by_nodes: Creating list of pods found on nodes
[2025-06-12 23:16:40] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource pods
[2025-06-12 23:16:40] INFO -- CNTI-KubectlClient.Get.pods_by_nodes: Found 1 pods: cluster-tools-m6zbj
[2025-06-12 23:16:40] DEBUG -- CNTI: cluster_tools_pod_name: cluster-tools-m6zbj
[2025-06-12 23:16:40] INFO -- CNTI-KubectlClient.Utils.exec: Exec command in pod cluster-tools-m6zbj
[2025-06-12 23:16:45] DEBUG -- CNTI: ClusterTools exec: {status: Process::Status[0], output: "", error: ""}
✔️ 🏆PASSED: [sig_term_handled] Sig Term handled
⚖👀 Microservice results: 2 of 4 tests passed

Reliability, Resilience, and Availability Tests
[2025-06-12 23:16:48] INFO -- CNTI-sig_term_handled: PID 461657 => SIGTERM captured?
true
[2025-06-12 23:16:48] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns
[2025-06-12 23:16:48] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: true, test passed: true
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'sig_term_handled' emoji: ⚖👀
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'sig_term_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points: Task: 'sig_term_handled' type: essential
[2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'sig_term_handled' tags: ["microservice", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points: Task: 'sig_term_handled' type: essential
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.upsert_task-sig_term_handled: Task start time: 2025-06-12 23:16:21 UTC, end time: 2025-06-12 23:16:48 UTC
[2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.upsert_task-sig_term_handled: Task: 'sig_term_handled' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:26.883626118
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice
[2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits",
"hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled"] for tags: ["microservice", "cert"] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["microservice", "cert"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:48] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 
23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 400, max tasks passed: 4 for tags: ["microservice", "cert"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled"] for tags: ["microservice", "cert"] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["microservice", "cert"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["reasonable_image_size", "specialized_init_system", "reasonable_startup_time", "single_process_type", "zombie_handled", "service_discovery", "shared_database", "sig_term_handled"] for tag: microservice [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", 
"node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:48] INFO -- 
CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 400, max tasks passed: 4 for tags: ["microservice", "cert"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1300, total tasks passed: 13 for tags: ["essential"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> 
failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: 
false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: 
false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for 
task: container_sock_mounts [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-06-12 23:16:48] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 100, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 
100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:16:48] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", 
"status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:16:48] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" 
=> 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:16:48] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 400} [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:16:48] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" [2025-06-12 23:16:48] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true [2025-06-12 
23:16:48] INFO -- CNTI: check_cnf_config args: # [2025-06-12 23:16:48] INFO -- CNTI: check_cnf_config cnf: [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file [2025-06-12 23:16:48] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml" 🎬 Testing: [liveness] [2025-06-12 23:16:48] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"] [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}> [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.Task.task_runner.liveness: Starting test [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources [2025-06-12 23:16:48] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml [2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment [2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}]
[2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:16:48] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => 
{"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:16:48] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:16:48] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:16:48] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:16:48] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource 
Deployment/coredns-coredns
[2025-06-12 23:16:48] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns
[2025-06-12 23:16:48] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns
✔️ 🏆PASSED: [liveness] Helm liveness probe found ⎈🧫
[2025-06-12 23:16:49] INFO -- CNTI-liveness: Resource Deployment/coredns-coredns passed liveness?: true
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test intialized: true, test passed: true
[2025-06-12 23:16:49] INFO -- CNTI-liveness: Workload resource task response: true
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'liveness' emoji: ⎈🧫
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'liveness' tags: ["resilience", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points: Task: 'liveness' type: essential
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'liveness' tags: ["resilience", "dynamic", "workload", "cert", "essential"]
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points: Task: 'liveness' type: essential
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.upsert_task-liveness: Task start time: 2025-06-12 23:16:48 UTC, end time: 2025-06-12 23:16:49 UTC
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.upsert_task-liveness: Task: 'liveness' has status: 'passed' and is awarded: 100 points.Runtime: 00:00:00.239961017
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:16:49] DEBUG -- CNTI: find command: find installed_cnf_files/* -name 
"cnf-testsuite.yml"
[2025-06-12 23:16:49] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Task.ensure_cnf_installed!: Is CNF installed: true
[2025-06-12 23:16:49] INFO -- CNTI: check_cnf_config args: #
[2025-06-12 23:16:49] INFO -- CNTI: check_cnf_config cnf:
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.cnf_config_list: Retrieve CNF config file
[2025-06-12 23:16:49] DEBUG -- CNTI: find command: find installed_cnf_files/* -name "cnf-testsuite.yml"
🎬 Testing: [readiness]
[2025-06-12 23:16:49] DEBUG -- CNTI: find output: installed_cnf_files/cnf-testsuite.yml
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.cnf_config_list: Found CNF config file: ["installed_cnf_files/cnf-testsuite.yml"]
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Task.task_runner: Run task with args # "installed_cnf_files/cnf-testsuite.yml"}>
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Task.task_runner.readiness: Starting test
[2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Start resources test
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.cnf_workload_resources: Map block to CNF workload resources
[2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.cnf_resource_ymls: Load YAMLs from manifest: installed_cnf_files/common_manifest.yml
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Deployment
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... identical five-manifest list (ConfigMap, ClusterRole, ClusterRoleBinding, Service, Deployment) as dumped above; duplicate output elided ...]
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Service
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... duplicate output elided ...]
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: Pod
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... duplicate output elided ...]
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ReplicaSet
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... duplicate output elided ...]
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: StatefulSet
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [... duplicate output elided ...]
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: DaemonSet
[2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . 
/etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => {"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => 
"coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" 
=> 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: kind: ServiceAccount [2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_by_kind: ymls: [{"apiVersion" => "v1", "kind" => "ConfigMap", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "data" => {"Corefile" => ".:53 {\n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus 0.0.0.0:9153\n forward . /etc/resolv.conf\n cache 30\n loop\n reload\n loadbalance\n}"}}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRole", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "rules" => [{"apiGroups" => [""], "resources" => ["endpoints", "services", "pods", "namespaces"], "verbs" => ["list", "watch"]}]}, {"apiVersion" => "rbac.authorization.k8s.io/v1", "kind" => "ClusterRoleBinding", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}}, "roleRef" => 
{"apiGroup" => "rbac.authorization.k8s.io", "kind" => "ClusterRole", "name" => "coredns-coredns"}, "subjects" => [{"kind" => "ServiceAccount", "name" => "default", "namespace" => "cnf-default"}]}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}, {"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", 
\"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}] [2025-06-12 23:16:49] DEBUG -- CNTI-Helm.all_workload_resources: [{"apiVersion" => "apps/v1", "kind" => "Deployment", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "namespace" => "cnf-default"}, "spec" => {"replicas" => 1, "strategy" => {"type" => "RollingUpdate", "rollingUpdate" => {"maxUnavailable" => 1, "maxSurge" => "25%"}}, "selector" => {"matchLabels" => {"app.kubernetes.io/instance" => "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}}, "template" => {"metadata" => {"labels" => {"k8s-app" => 
"coredns", "app.kubernetes.io/name" => "coredns", "app.kubernetes.io/instance" => "coredns"}, "annotations" => {"checksum/config" => "473c46ef33ae3e2811a84fd13c39de8f09ee48ee3de2e6a155431c511d7838f9", "scheduler.alpha.kubernetes.io/critical-pod" => "", "scheduler.alpha.kubernetes.io/tolerations" => "[{\"key\":\"CriticalAddonsOnly\", \"operator\":\"Exists\"}]"}}, "spec" => {"terminationGracePeriodSeconds" => 30, "serviceAccountName" => "default", "dnsPolicy" => "Default", "containers" => [{"name" => "coredns", "image" => "coredns/coredns:1.7.1", "imagePullPolicy" => "IfNotPresent", "args" => ["-conf", "/etc/coredns/Corefile"], "volumeMounts" => [{"name" => "config-volume", "mountPath" => "/etc/coredns"}], "resources" => {"limits" => {"cpu" => "100m", "memory" => "128Mi"}, "requests" => {"cpu" => "100m", "memory" => "128Mi"}}, "ports" => [{"containerPort" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"containerPort" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "livenessProbe" => {"httpGet" => {"path" => "/health", "port" => 8080, "scheme" => "HTTP"}, "initialDelaySeconds" => 60, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}, "readinessProbe" => {"httpGet" => {"path" => "/ready", "port" => 8181, "scheme" => "HTTP"}, "initialDelaySeconds" => 10, "timeoutSeconds" => 5, "successThreshold" => 1, "failureThreshold" => 5}}], "volumes" => [{"name" => "config-volume", "configMap" => {"name" => "coredns-coredns", "items" => [{"key" => "Corefile", "path" => "Corefile"}]}}]}}}}, {"apiVersion" => "v1", "kind" => "Service", "metadata" => {"name" => "coredns-coredns", "labels" => {"app.kubernetes.io/managed-by" => "Helm", "app.kubernetes.io/instance" => "coredns", "helm.sh/chart" => "coredns-1.13.8", "k8s-app" => "coredns", "kubernetes.io/cluster-service" => "true", "kubernetes.io/name" => "CoreDNS", "app.kubernetes.io/name" => "coredns"}, "annotations" => {}, "namespace" => "cnf-default"}, "spec" => {"selector" => {"app.kubernetes.io/instance" 
=> "coredns", "k8s-app" => "coredns", "app.kubernetes.io/name" => "coredns"}, "ports" => [{"port" => 53, "protocol" => "UDP", "name" => "udp-53"}, {"port" => 53, "protocol" => "TCP", "name" => "tcp-53"}], "type" => "ClusterIP"}}] [2025-06-12 23:16:49] DEBUG -- CNTI-Helm.workload_resource_kind_names: resource names: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Found 2 resources to test: [{kind: "Deployment", name: "coredns-coredns", namespace: "cnf-default"}, {kind: "Service", name: "coredns-coredns", namespace: "cnf-default"}] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Testing Deployment/coredns-coredns [2025-06-12 23:16:49] DEBUG -- CNTI-KubectlClient.Get.resource_volumes: Get volumes of Deployment/coredns-coredns [2025-06-12 23:16:49] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns [2025-06-12 23:16:49] DEBUG -- CNTI-KubectlClient.Get.resource_containers: Get containers of Deployment/coredns-coredns [2025-06-12 23:16:49] DEBUG -- CNTI-KubectlClient.Get.resource: Get resource Deployment/coredns-coredns ✔️ 🏆PASSED: [readiness] Helm readiness probe found ⎈🧫 Reliability, resilience, and availability results: 2 of 2 tests passed  RESULTS SUMMARY  - 15 of 18 total tests passed  - 15 of 18 essential tests passed Results have been saved to results/cnf-testsuite-results-20250612-231205-834.yml [2025-06-12 23:16:49] DEBUG -- CNTI-readiness: coredns [2025-06-12 23:16:49] INFO -- CNTI-readiness: Resource Deployment/coredns-coredns passed liveness?: true [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.workload_resource_test: Container result: true [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Testing Service/coredns-coredns [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.workload_resource_test: Workload resource test initialized: 
true, test passed: true [2025-06-12 23:16:49] INFO -- CNTI-readiness: Workload resource task response: true [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.emoji_by_task: Task: 'readiness' emoji: ⎈🧫 [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'readiness' tags: ["resilience", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points: Task: 'readiness' type: essential [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tags_by_task: Task: 'readiness' tags: ["resilience", "dynamic", "workload", "cert", "essential"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points: Task: 'readiness' type: essential [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.upsert_task-readiness: Task start time: 2025-06-12 23:16:49 UTC, end time: 2025-06-12 23:16:49 UTC [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.upsert_task-readiness: Task: 'readiness' has status: 'passed' and is awarded: 100 points. Runtime: 00:00:00.240473661 [2025-06-12 23:16:49] DEBUG -- CNTI: resilience [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] DEBUG -- 
CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["liveness", "readiness"] for tags: ["resilience", "cert"] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 200, total tasks passed: 2 for tags: ["resilience", "cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA 
status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 200, max tasks passed: 2 for tags: ["resilience", "cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["liveness", "readiness"] for tags: ["resilience", "cert"] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total 
points scored: 200, total tasks passed: 2 for tags: ["resilience", "cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_delete", "pod_io_stress", "pod_memory_hog", "disk_fill", "pod_dns_error", "liveness", "readiness"] for tag: resilience [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG 
-- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 200, max tasks passed: 2 for tags: ["resilience", "cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["essential"] [2025-06-12 23:16:49] DEBUG -- 
CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO 
-- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 
23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: cpu_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: 
memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => 
"selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => 
"essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", 
"status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}]} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" 
=> 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-06-12 23:16:49] DEBUG -- CNTI: cert [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", 
"selinux_options", "latest_tag"] for tags: ["cert"] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: 
cpu_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", 
"selinux_options", "latest_tag"] for tags: ["cert"] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: cert [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: 
cpu_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["cert"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.total_tasks_points: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", 
"selinux_options", "latest_tag"] for tags: ["essential"] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_tasks_points: Total points scored: 1500, total tasks passed: 15 for tags: ["essential"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["specialized_init_system", "single_process_type", "zombie_handled", "sig_term_handled", "increase_decrease_capacity", "liveness", "readiness", "hostport_not_used", "hardcoded_ip_addresses_in_k8s_runtime_configuration", "node_drain", "privileged_containers", "non_root_containers", "cpu_limits", "memory_limits", "hostpath_mounts", "log_output", "container_sock_mounts", "selinux_options", "latest_tag"] for tag: essential [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Skipped tests: [] [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Failed tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.tasks_by_tag: Found tasks: ["service_discovery", "pod_network_latency", "pod_network_corruption", "pod_network_duplication", "pod_io_stress", "operator_installed", "secrets_used", "immutable_configmap", "no_local_volume_configuration", "elastic_volumes", "linux_hardening", "immutable_file_systems", "ingress_egress_blocked", "prometheus_traffic", "open_metrics", "routed_logs", "tracing"] for tag: bonus [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Bonus tests: ["non_root_containers", "specialized_init_system", "zombie_handled"] [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: specialized_init_system -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: specialized_init_system [2025-06-12 23:16:49] 
INFO -- CNTI-CNFManager.Points.task_points: Task: specialized_init_system is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: single_process_type -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: single_process_type [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: single_process_type is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: zombie_handled -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: zombie_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: zombie_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: sig_term_handled -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: sig_term_handled [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: sig_term_handled is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: increase_decrease_capacity -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: increase_decrease_capacity [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: increase_decrease_capacity is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: liveness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: liveness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: liveness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: readiness -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: readiness [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: readiness is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostport_not_used -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostport_not_used [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostport_not_used is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hardcoded_ip_addresses_in_k8s_runtime_configuration [2025-06-12 23:16:49] INFO -- 
CNTI-CNFManager.Points.task_points: Task: hardcoded_ip_addresses_in_k8s_runtime_configuration is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: node_drain -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: node_drain [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: node_drain is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: privileged_containers -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: privileged_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: privileged_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: non_root_containers -> failed: true, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: non_root_containers [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: non_root_containers is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: cpu_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: cpu_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: 
cpu_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: memory_limits -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: memory_limits [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: memory_limits is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: hostpath_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: hostpath_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: hostpath_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: log_output -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: log_output [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: log_output is worth: 100 points [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: container_sock_mounts -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: container_sock_mounts [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: container_sock_mounts is worth: 100 points [2025-06-12 23:16:49] DEBUG -- 
CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: selinux_options -> failed: false, skipped: NA: false, bonus: {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0} [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: selinux_options [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Task: latest_tag -> failed: false, skipped: NA: false, bonus: [2025-06-12 23:16:49] DEBUG -- CNTI-CNFManager.Points.na_assigned?: NA status assigned for task: latest_tag [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.task_points: Task: latest_tag is worth: 100 points [2025-06-12 23:16:49] INFO -- CNTI-CNFManager.Points.total_max_tasks_points: Max points scored: 1800, max tasks passed: 18 for tags: ["essential"] [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 200, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => 
"essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, 
{"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", 
"status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 200} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" 
=> "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, 
{"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => 
"cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18"} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml results: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18"} [2025-06-12 23:16:49] DEBUG -- CNTI: update_yml parsed_new_yml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => 
"privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18", "essential_passed" => "15 of 18"} [2025-06-12 23:16:49] INFO -- CNTI: results yaml: {"name" => "cnf testsuite", "testsuite_version" => "v1.4.4", "status" => nil, "command" => "/usr/local/bin/cnf-testsuite cert", "points" => 1500, "exit_code" => 0, "items" => [{"name" => "increase_decrease_capacity", "status" => "passed", "type" => "essential", 
"points" => 100}, {"name" => "node_drain", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "privileged_containers", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "non_root_containers", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "cpu_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "memory_limits", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hostpath_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "container_sock_mounts", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "selinux_options", "status" => "na", "type" => "essential", "points" => 0}, {"name" => "hostport_not_used", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "hardcoded_ip_addresses_in_k8s_runtime_configuration", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "latest_tag", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "log_output", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "specialized_init_system", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "single_process_type", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "zombie_handled", "status" => "failed", "type" => "essential", "points" => 0}, {"name" => "sig_term_handled", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "liveness", "status" => "passed", "type" => "essential", "points" => 100}, {"name" => "readiness", "status" => "passed", "type" => "essential", "points" => 100}], "maximum_points" => 1800, "total_passed" => "15 of 18", "essential_passed" => "15 of 18"} 2025-06-12 23:16:49,750 - functest_kubernetes.cnf_conformance.conformance - WARNING - non_root_containers failed 2025-06-12 23:16:49,751 - functest_kubernetes.cnf_conformance.conformance - WARNING 
- specialized_init_system failed 2025-06-12 23:16:49,751 - functest_kubernetes.cnf_conformance.conformance - WARNING - zombie_handled failed 2025-06-12 23:16:49,753 - functest_kubernetes.cnf_conformance.conformance - INFO - +-------------------------------------------------------------+----------------+ | NAME | STATUS | +-------------------------------------------------------------+----------------+ | increase_decrease_capacity | passed | | node_drain | passed | | privileged_containers | passed | | non_root_containers | failed | | cpu_limits | passed | | memory_limits | passed | | hostpath_mounts | passed | | container_sock_mounts | passed | | selinux_options | na | | hostport_not_used | passed | | hardcoded_ip_addresses_in_k8s_runtime_configuration | passed | | latest_tag | passed | | log_output | passed | | specialized_init_system | failed | | single_process_type | passed | | zombie_handled | failed | | sig_term_handled | passed | | liveness | passed | | readiness | passed | +-------------------------------------------------------------+----------------+ 2025-06-12 23:16:49,872 - xtesting.ci.run_tests - INFO - Test result: +-----------------------+------------------+------------------+----------------+ | TEST CASE | PROJECT | DURATION | RESULT | +-----------------------+------------------+------------------+----------------+ | cnf_testsuite | functest | 05:47 | PASS | +-----------------------+------------------+------------------+----------------+ 2025-06-12 23:16:50,174 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite cnf_uninstall cnf-config=example-cnfs/coredns/cnf-testsuite.yml Successfully uninstalled helm deployment "coredns". All CNF deployments were uninstalled, some time might be needed for all resources to be down. 2025-06-12 23:17:03,108 - functest_kubernetes.cnf_conformance.conformance - INFO - cnf-testsuite uninstall_all cnf-config=example-cnfs/coredns/cnf-testsuite.yml CNF uninstallation skipped. 
No CNF config found in installed_cnf_files directory.  Uninstalling testsuite helper tools. Testsuite helper tools uninstalled.
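The scoring arithmetic running through the log entries above can be sketched as follows. This is an illustrative Python sketch only, under the assumptions the log itself exposes: each essential task is worth 100 points, tasks with status "na" are excluded from both the maximum and the pass count, which is how the run arrives at points 1500, maximum_points 1800, and total_passed "15 of 18". It is not the actual cnf-testsuite implementation (which is written in Crystal), and `summarize` is a hypothetical helper, not part of either tool.

```python
# Illustrative sketch only: cnf-testsuite itself is written in Crystal and
# `summarize` is a hypothetical helper, not part of the real tool.
POINTS_PER_TASK = 100  # every essential task in this run is worth 100 points

# (task, status) pairs copied from the results table in the log above
RESULTS = [
    ("increase_decrease_capacity", "passed"),
    ("node_drain", "passed"),
    ("privileged_containers", "passed"),
    ("non_root_containers", "failed"),
    ("cpu_limits", "passed"),
    ("memory_limits", "passed"),
    ("hostpath_mounts", "passed"),
    ("container_sock_mounts", "passed"),
    ("selinux_options", "na"),
    ("hostport_not_used", "passed"),
    ("hardcoded_ip_addresses_in_k8s_runtime_configuration", "passed"),
    ("latest_tag", "passed"),
    ("log_output", "passed"),
    ("specialized_init_system", "failed"),
    ("single_process_type", "passed"),
    ("zombie_handled", "failed"),
    ("sig_term_handled", "passed"),
    ("liveness", "passed"),
    ("readiness", "passed"),
]

def summarize(results, points_per_task=POINTS_PER_TASK):
    """Aggregate per-task results the way the log's totals line up:
    'na' tasks are dropped from the maximum, and each pass earns
    the full per-task points."""
    gradable = [(name, status) for name, status in results if status != "na"]
    passed = [name for name, status in gradable if status == "passed"]
    return {
        "points": len(passed) * points_per_task,
        "maximum_points": len(gradable) * points_per_task,
        "total_passed": f"{len(passed)} of {len(gradable)}",
        "failed": [name for name, status in gradable if status == "failed"],
    }

summary = summarize(RESULTS)
# summary["points"] == 1500, summary["maximum_points"] == 1800,
# summary["total_passed"] == "15 of 18"
```

With the 19 tasks of this run, one "na" task (selinux_options) leaves 18 gradable tasks, 15 of which passed, reproducing the `points => 1500`, `maximum_points => 1800`, and `total_passed => "15 of 18"` fields of the final results yaml.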