top of page
microchip-ai (1).png
AI Security

AI
Security

When companies deploy LLM/MMLM and RAG-based AI assistants and agents, they face various security issues, especially in on-premises or hybrid environments. Operating AI agents on-premises can reduce the risk of data leakage. However, there remains the risk that sensitive information entered by internal users into LLM could be exposed to other users through model responses, or that incorrect configurations could result in internal secrets being stored and learned within the model, potentially leading to prompt injection. Daewon CTS proposes guidelines for LLM utilization and measures to strengthen security controls through its AI security partner ecosystem.

Contact Us
cyber-security-concept-digital-art.jpg

Security tailored to the unique characteristics of AI is required.

Traditional security operating systems and control measures are primarily focused on policy-based access control, log monitoring, and structured asset-centric protection strategies. However, these techniques have limitations when applied directly to LLM/RAG-centric AI environments.

Moreover, existing security operations and security control systems are insufficient to accommodate the characteristics of AI system environments and workloads.

microchip-ai (1).png
Challenge

Key challenges
in AI security.

Limitations of policy-based
access control
  • Traditional enterprise security operates by allowing or blocking resource access based on predefined roles and rules.

  • In AI assistant environments, resources are accessed through natural language queries and generated responses, creating pathways where users—despite lacking direct permissions—can indirectly query LLMs to obtain summaries or inferred information derived from sensitive data.

  • Existing IAM policies are insufficient to precisely define which information a model may reference based on who asks what type of question.

  • This creates gray areas not covered by traditional authorization models, which attackers can exploit to prompt LLMs into revealing unauthorized information.

Limitations of security strategies focused on structured assets.
  • Traditional security focuses on protecting asset boundaries such as servers, endpoints, databases, and network equipment.

  • LLMs learn from a large volume of internal enterprise documents and encapsulate the organization’s overall knowledge base, making it difficult to control access to knowledge using per-document file ACLs alone and blurring traditional asset boundaries.

  • LLMs can perform a wide range of tasks depending on input prompts, which may introduce new and unforeseen vulnerabilities.

  • Existing security policies struggle to account for the diverse execution contexts and behaviors of LLMs.

Limitations of
log-based monitoring
  • Interactions in LLM environments are composed of unstructured text, allowing sensitive information to be exchanged in ways that traditional SIEM rules or DLP solutions cannot easily detect.

  • Logs typically capture only API calls or conversation IDs, without understanding the sensitivity of the actual content. Even when full conversation logs are recorded, it is difficult to determine whether the information constitutes enterprise secrets, and the volume of logs makes manual monitoring impractical.

  • Even when indicators of compromise are logged, they are difficult to detect using rule-based approaches.

  • Without content monitoring and filtering techniques specialized for LLMs, traditional log-based detection is highly likely to miss LLM-related security incidents.

Uncertainty in model behavior and management challenges
  • The non-deterministic nature of LLMs makes it difficult to apply consistent security policies.

  • LLM behavior can change with version updates or fine-tuning, meaning that even previously security-validated models may introduce new vulnerabilities after updates.

  • While traditional software patching relies on CVEs to assess security impact, changes in LLMs are embedded within internal model weights, making it difficult to analyze the security implications of such changes.

  • When dealing with non-deterministic generative AI services and assets, traditional security operations centered on fixed policies and structured assets reveal clear limitations.

microchip-ai (1).png
Service

Optimization services
DIA NEXUS focuses on.

time-forward 1

AI infrastructure
platform security

Proposing secure measures such as building an isolated internal AI infrastructure, processing all data internally, hosting models in isolated containers, blocking external connectivity, and containing system impact when malicious code is detected.

time-forward 1

Trustworthy
prompt design

Proposing approaches to incorporate security and trustworthiness from the prompt engineering stage, verify the integrity of prompt flows, and detect privilege-escalation attempts when LLM agents invoke plugins/tools.

time-forward 1

Filtering
policies

Proposing automated filtering layers that enforce strict input validation and normalization, detect prohibited or hidden instructions, and prevent sensitive information exposure in model inputs and outputs.

bottom of page