AI Chip

AI Semiconductor

Daewon CTS is expanding its partner ecosystem to encompass a variety of AI accelerators, including CPUs, NPUs, and VPUs in addition to GPUs, and presents an integrated architecture for deploying heterogeneous chips across data centers, servers, and the edge. By responding quickly to the market shift from training to inference, we offer customers chip-based systems and operational solutions optimized for their model types and inference methods.

Contact Us
Market Trend

AI Chip Market Trends

Training large models requires massive computing resources, energy, time, and specialized personnel, posing a significant burden for most businesses. For this reason, many companies are adopting an inference-centric investment strategy, leveraging pre-trained or smaller models to apply AI models to real-world services, rather than training large models themselves. In practice, companies are maximizing performance and efficiency by selecting optimal hardware suited to their specific needs and scale. Semiconductor companies are responding to this demand by introducing innovative AI accelerators.

NPU
Neural Processing Unit
A processor optimized for artificial neural network computations, such as matrix multiplication.

LPU
LLM Processing Unit
A processor optimized for LLM training and inference workloads.

VPU
Vision Processing Unit
A processor specialized for image and video processing and recognition tasks.

DPU
Data Processing Unit
A processor specialized for offloading data-centric tasks such as networking, storage, and security from the CPU.

Challenge

Key challenges
DIA NEXUS focuses on.

The complexity of chip selection
  • A widening range of AI inference hardware options (GPU, CPU, TPU, NPU, VPU, FPGA, ASIC) makes optimal selection increasingly complex

  • Each chip trades off performance, power efficiency, cost, and development difficulty differently; no single solution fits every case, so the optimal combination must be found per use case (see the sketch after this list)

  • Platform-specific SDKs, framework and model compatibility, and heterogeneous hardware interoperability make it difficult to establish investment and operating strategies that account for all these factors
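
Since no single chip leads on every axis, one way to make the trade-off concrete is to score candidate chips against use-case-specific weights. The sketch below is a minimal illustration of that framing; the chip names, criteria, and scores are hypothetical placeholders, not vendor benchmarks.

```python
# Hypothetical criterion scores normalized to 0..1; illustrative only,
# not measured benchmarks.
CANDIDATES = {
    "gpu":  {"performance": 0.9, "power_efficiency": 0.5, "cost": 0.4, "dev_ease": 0.9},
    "npu":  {"performance": 0.7, "power_efficiency": 0.9, "cost": 0.7, "dev_ease": 0.5},
    "fpga": {"performance": 0.6, "power_efficiency": 0.8, "cost": 0.6, "dev_ease": 0.3},
}

def rank_chips(weights: dict) -> list:
    """Rank candidates by the weighted sum of their criterion scores."""
    scored = {
        chip: sum(weights[criterion] * score for criterion, score in scores.items())
        for chip, scores in CANDIDATES.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

# A power-constrained edge use case weights efficiency most heavily;
# a different weighting yields a different "best" chip.
edge_weights = {"performance": 0.2, "power_efficiency": 0.5, "cost": 0.2, "dev_ease": 0.1}
print(rank_chips(edge_weights))
```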

Operational and deployment complexity
  • Workload distribution and scheduling complexity in hardware infrastructure environments based on diverse types of AI accelerators

  • The burden of integrating multiple heterogeneous accelerators into a single architecture

  • A shortage of organizations experienced in building and operating AI training and inference platforms in which models automatically run on the optimal chip, in line with developer intent

Service

Optimization services
DIA NEXUS focuses on.


Elastic architecture design and infrastructure deployment.

Proposing an approach to flexibly deploy and reliably manage various types of AI accelerators from data centers to the edge through a consistent architecture.
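
As a concrete illustration of a consistent architecture, the sketch below (a simplified example of ours, not DIA NEXUS's actual tooling) describes every deployment site, from data center rack to edge box, with one schema, so the same rollout procedure can target all of them. The fields and fleet values are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentTarget:
    site: str               # e.g. "datacenter" or "edge"
    accelerator: str        # e.g. "gpu", "npu", "vpu"
    replicas: int           # inference instances to run at this site
    max_power_watts: float  # power envelope available on site

# Hypothetical fleet: a GPU pool in the data center and an NPU box at the edge.
FLEET = [
    DeploymentTarget("datacenter", "gpu", replicas=8, max_power_watts=3000.0),
    DeploymentTarget("edge", "npu", replicas=1, max_power_watts=15.0),
]

def rollout(model_name: str, fleet: list) -> None:
    """Run the same deployment procedure against every target, large or small."""
    for target in fleet:
        print(f"deploy {model_name} -> {target.site} "
              f"({target.accelerator} x{target.replicas}, <= {target.max_power_watts} W)")

rollout("vision-model-v2", FLEET)
```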


Performance and cost optimization by scenario.

Proposing approaches to balance performance and cost by inference scenario—utilizing dedicated chips for low-latency real-time inference, GPU clusters or large CPU servers for large-scale batch inference, and low-power chip-based systems for on-device inference.
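
A minimal sketch of that routing logic follows; the latency and batch-size thresholds are arbitrary illustrative values, not recommendations.

```python
def choose_target(latency_budget_ms: float, batch_size: int, on_device: bool) -> str:
    """Map an inference scenario to the hardware class described above."""
    if on_device:
        return "low-power embedded chip"          # on-device inference
    if latency_budget_ms <= 50:                   # threshold is illustrative
        return "dedicated inference accelerator"  # low-latency real-time serving
    if batch_size >= 1024:                        # threshold is illustrative
        return "GPU cluster or large CPU server"  # large-scale batch inference
    return "general-purpose GPU server"           # reasonable default otherwise

print(choose_target(latency_budget_ms=20, batch_size=1, on_device=False))
print(choose_target(latency_budget_ms=5000, batch_size=10000, on_device=False))
```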


Model optimization and performance tuning.

Proposing approaches to efficiently perform AI model inference by lightweighting models themselves or optimizing them for specific hardware.
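
As one concrete example of lightweighting, the sketch below applies post-training dynamic quantization in PyTorch, which stores Linear-layer weights as int8. It stands in for the broader toolbox (pruning, distillation, hardware-specific compilers) and uses a toy model rather than a production one.

```python
import torch
import torch.nn as nn

# Toy stand-in model; any module with nn.Linear layers is handled the same way.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Post-training dynamic quantization: weights become int8, activations are
# quantized on the fly, typically shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model,
    {nn.Linear},        # layer types to quantize
    dtype=torch.qint8,  # 8-bit integer weights
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface as the original model
```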


Through a wide range of chip options, DAEWON CTS delivers flexible, optimized infrastructure tailored to model size, use cases, operational requirements, and deployment environments.

Case Study

Case Study

  • Applying NPUs to KAYTUS servers to rapidly and efficiently process large volumes of video and sensor data in intelligent control centers and data center environments

  • Deploying NPUs in edge environments to enable on-site inference execution

  • Maintaining software stack consistency across both servers and edge devices by adopting the same NPU architecture, simplifying development and operations

  • Enabling easy scalability and upgrades, even when expanding monitoring coverage or upgrading models, by leveraging a unified NPU-based framework

KAYTUS GPU server