AI Model


We provide services aligned with customers’ AI transformation (AX) roadmaps, from LLM-based AI chatbots to AI agents built on LLMs/MMLMs and SLMs, as well as agent workflow implementation. To support this, we offer consulting on major models such as Llama and Gemini, and are in discussions to validate and support additional LLMs, including LG EXAONE and Saltlux Luxia.

Contact Us
Business Area

Advanced AI agents,
together with DIA NEXUS.

Daewon CTS DIA NEXUS provides consulting services for implementing LLMs, MMLMs, and domain-specific SLMs through single prompts, prompt chaining, AI agents, and agent workflows. Based on each customer’s strategy and the maturity of its current AI environment, Daewon CTS establishes a roadmap and delivers services for leveraging AI effectively in phases, ultimately building increasingly advanced AI agent environments.


AI Model


Evaluation and provision of major models, including Meta Llama, Google Gemini, LG EXAONE, and Saltlux Luxia.


RAG


Proposing solutions for building RAG systems and optimizing performance in LLM/RAG environments.


Optimization


Proposing optimization strategies such as model lightweighting and quantization for inference environments.
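To illustrate the quantization idea mentioned above, the following sketch (plain Python, not any specific framework’s API; the helper names are hypothetical) maps float weights to int8 codes plus a single scale factor, which is the basic trade-off behind model lightweighting: roughly a quarter of the memory of float32 at the cost of a small rounding error.

```python
def quantize_int8(weights):
    """Quantize a list of floats to int8 codes with a single scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0                  # symmetric int8 range [-127, 127]
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights for inference."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.33, 1.27]
codes, scale = quantize_int8(weights)
restored = dequantize(codes, scale)
# Each restored weight deviates from the original by at most one step (scale).
assert all(abs(r - w) <= scale for r, w in zip(restored, weights))
```

Real deployments use per-channel scales and calibration data, but the memory-for-accuracy trade shown here is the same.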


AI Agent


Proposing implementations of full automation (AI agents) and partial automation (agent workflows) based on business workflow characteristics.

Partner Ecosystem

DIA NEXUS
Partner Ecosystem

Daewon CTS is expanding partnerships with industry leaders that provide AI chips (xPUs) of various architectures, along with development tools and platforms for rapid intelligent edge service development, building a technology ecosystem for intelligent control businesses. Through this, we deliver optimal solutions and service packages tailored to customer needs.


Exceptional performance and efficiency

The EXAONE 3.5 model is designed, through lightweighting and optimization, for efficient training and inference even on low-spec GPUs or in on-device environments, delivering superior performance with lower computational requirements than competing models. Additionally, the EXAONE Deep model, based on the EXAONE 3.5 series, further enhances reasoning capabilities and is particularly strong in mathematics and coding tasks.

Models of various scales

The EXAONE 3.5 and EXAONE Deep models are available in three sizes tailored to different use cases: ultra-lightweight 2.4B for on-device applications, versatile 7.8B for general purposes, and high-performance 32B for frontier AI workloads. Additionally, EXAONE offers multimodal capabilities, enabling simultaneous understanding and processing of language and visual information, supporting analysis and utilization of various data types such as images and videos.

Industry-specific platforms

EXAONE Universe supports reliable expert knowledge reasoning for enterprise and professional tasks. EXAONE Discovery assists in exploring new scientific findings, such as in materials and drug development. EXAONE Atelier provides creative ideas and inspiration in art and design. Each platform is tailored with specialized capabilities for use across diverse domains.


EXAONE, a large-scale AI model developed by LG AI Research under the vision of “Expert AI for Everyone,” can be applied across a wide range of industries.

Our Offering

Service

DIA NEXUS’s
unique and differentiated services


API

When pursuing proof-of-concept (PoC) projects or small-scale AI initiatives using LLMs and SLMs, organizations can consider leveraging APIs such as Google Cloud’s Gemini or Naver’s HyperCLOVA X. Daewon CTS can implement prompt-based chatbots or AI agents optimized with organizational data by combining public cloud API services with technologies such as RAG and vector/graph databases.
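The RAG pattern described above can be sketched as follows. This is a toy illustration, not a production system: the keyword-overlap scorer stands in for a vector/graph database lookup, and the final prompt would be sent to a hosted LLM API (the call itself is omitted).

```python
def score(query, doc):
    """Toy relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_rag_prompt(query, documents, top_k=2):
    """Rank documents against the query and assemble a grounded prompt."""
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

# Placeholder internal documents standing in for organizational data.
docs = [
    "Warehouse stock levels are synced nightly from WMS.",
    "Employee onboarding is handled by the HR portal.",
    "Stock alerts trigger when warehouse levels fall below threshold.",
]
prompt = build_rag_prompt("How are warehouse stock levels updated?", docs)
assert "WMS" in prompt  # the most relevant document made it into the context
```

In a real deployment, the ranking step is replaced by embedding similarity search over the organization’s knowledge base.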


Serverless

Using public cloud PaaS–based serverless services allows organizations to start LLM and SLM projects without building AI infrastructure or MLOps tools. Serverless services provide environments where users can train, fine-tune, and run inference on AI models without managing servers directly. Daewon CTS also recommends leveraging serverless environments when high-performance computing is not required for initial training or fine-tuning, or when rapid response is needed for simple inference tasks.


On-premises

Daewon CTS provides guidance on AI infrastructure based on scalable architectures, as well as the use of MLOps and LLMOps platforms and tools, to support customers’ mid- to long-term AI transformation (AX) vision. This enables organizations to smoothly execute the full AI workload lifecycle—from data preparation and model training to fine-tuning and inference—within their security, management, regulatory, and governance frameworks.

Service

DIA NEXUS’s
one-stop services

Case Study


Daewon CTS DIA NEXUS has implemented an AI chatbot using large language models (LLMs). To enhance its intelligence, Daewon CTS built a knowledge base (KB) on vector/graph databases using organizational data. For a better user experience, the system was integrated, via RAG, with key enterprise systems and databases. Additionally, RPA was employed to automatically collect and process data, ensuring smooth interaction between the AI system and users.
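The RPA-style collection step in this case study can be sketched like this. Everything here is illustrative: the connector, system names, and record shapes are placeholders standing in for real CRM/ERP exports feeding the chatbot’s knowledge base.

```python
def fetch_crm_records():
    """Stand-in for an RPA connector pulling rows from a CRM export."""
    return [{"customer": "ACME", "note": " Renewal due in Q3 "}]

def normalize(record):
    """Trim stray whitespace so entries are consistent before indexing."""
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

# Collected and cleaned records are appended to the knowledge base, where a
# later step would embed them for vector/graph retrieval.
knowledge_base = [normalize(r) for r in fetch_crm_records()]
assert knowledge_base[0]["note"] == "Renewal due in Q3"
```

The value of automating this step is freshness: the chatbot answers from data that is collected and normalized on a schedule rather than loaded once.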

Why Choose Us

Why DIA NEXUS
is unique.

LLM and SLM evaluation

At this stage, LLMs (Large Language Models) and SLMs (Small Language Models) are evaluated for their suitability to actual business requirements. Key tasks include testing model accuracy, response time, and memory and compute resource requirements, as well as developing strategies for tuning the models to achieve optimal performance.
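A minimal evaluation harness along these lines might look as follows. This is a sketch under stated assumptions: `model_fn` stands in for any LLM/SLM call, and the stand-in model and test cases are placeholders, not real benchmark data.

```python
import time

def evaluate(model_fn, test_cases):
    """Measure accuracy and mean latency over (prompt, expected) pairs."""
    correct, latencies = 0, []
    for prompt, expected in test_cases:
        start = time.perf_counter()
        answer = model_fn(prompt)
        latencies.append(time.perf_counter() - start)
        correct += (answer == expected)
    return {
        "accuracy": correct / len(test_cases),
        "mean_latency_s": sum(latencies) / len(latencies),
    }

echo_model = lambda p: p.upper()  # stand-in for a real model call
report = evaluate(echo_model, [("hi", "HI"), ("ok", "no")])
assert report["accuracy"] == 0.5
```

A real harness would add memory profiling and per-model tuning runs, but the shape—shared test set, per-call timing, aggregate metrics—stays the same.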

Data processing

If the data required for model training or fine-tuning is scattered across multiple systems, we analyze the structure and characteristics of each dataset to propose collection strategies. We provide methods for preprocessing the collected data into formats that models can understand, ensure the collection of high-quality data for meaningful LLM/SLM responses, and store it in vector format for real-time retrieval. Additionally, we guide the use of RPA to automate real-time processing by integrating various data sources such as CRM, ERP, WMS, and BI systems.
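The preprocessing step above—turning collected documents into pieces a model can embed and retrieve—can be sketched as fixed-size, overlapping chunking. The sizes here are illustrative; production systems typically chunk by tokens or sentences rather than characters.

```python
def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping character chunks for embedding.

    Overlap keeps context that straddles a boundary visible in both chunks.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
    return chunks

chunks = chunk_text("x" * 120)
assert all(len(c) <= 50 for c in chunks)  # every chunk fits the size budget
```

Each chunk would then be embedded and stored as one vector-database entry for real-time retrieval.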

AI agent implementation

We provide consulting for building RAG (Retrieval-Augmented Generation) systems and selecting frameworks suitable for projects. Additionally, we connect LLMs and SLMs with vector databases and propose methods to seamlessly integrate queries and search results.
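The connection between a language model and a vector database described above reduces to one operation: ranking stored vectors by similarity to an embedded query. The sketch below (plain Python, not any specific vector-database API) shows that retrieval step; the vectors and passages are placeholders.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec, index, top_k=1):
    """index: list of (vector, passage) pairs, e.g. loaded from a vector DB."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [passage for _, passage in ranked[:top_k]]

index = [
    ([1.0, 0.0, 0.2], "Invoice approval flow"),
    ([0.1, 0.9, 0.3], "Shipping schedule"),
]
assert search([0.9, 0.1, 0.2], index) == ["Invoice approval flow"]
```

The retrieved passages are what get integrated into the prompt, which is the “seamless integration of queries and search results” the consulting addresses.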

Management system implementation

We provide guidance on building and operating systems to continuously manage and improve AI agent performance. Using this system, customers can monitor model performance, identify areas for improvement through periodic evaluations, and update models regularly by incorporating new data or feedback. Recommendations also include methods for retraining or fine-tuning models as needed to enhance accuracy.

bottom of page