Business Area
From data centers
to edge devices.
We support AI training and inference from data centers to edge devices using AI accelerators from leading global companies such as HyperAccel, DEEPX, NVIDIA, and AMD.

Central Processing Unit
CPU
Offering GPU servers equipped with the latest Intel and AMD CPUs, giving customers a wider choice of host processors.

Neural Processing Unit
NPU
Offering NPUs optimized for diverse edge environments with ultra-low power and high performance using the DX-M1, DX-V3, and DX-H1 series.

Video Processing Unit
VPU
Providing VPUs suited to services such as live streaming, security surveillance, and cloud gaming.

Graphics Processing Unit
GPU
Providing solutions to optimize AI model performance using Blackwell- and Hopper-based GPU systems.

LLM Processing Unit
LPU
Providing LLM Processing Units (LPU) that can dramatically reduce generative AI inference costs in both data center and edge environments.
Challenge
DIA NEXUS
AI Accelerator Portfolio

Recommendations tailored for training and inference purposes
Training and inference workloads place different demands on accelerators; Daewon CTS clearly understands these characteristics and provides accelerator solutions optimized for each specific use case.

Proposals suited to specific operational environments and conditions
Daewon CTS provides customized solutions tailored to each environment, from data center rack space and facility requirements to low-power constraints and installation space conditions at the edge.

AI infrastructure
architecture design
With expertise in designing AI infrastructure for heterogeneous accelerator environments, Daewon CTS offers optimal system architectures and management strategies aligned with customer needs.

Partner Ecosystem
DIA NEXUS
Daewon CTS develops adoption and implementation strategies from solution selection to integration with existing systems, establishing build and execution plans from an AI full-stack perspective.

NPU
Blocks unauthorized AI applications so that only approved apps are used and security policies are enforced, and detects and immediately blocks malicious URLs, files, and content both in data sent to generative AI applications and in the responses they return.
Module
We offer DX-M1 and DX-H1 modules designed to integrate AI capabilities easily into a variety of edge devices, enabling seamless addition of AI functionality across diverse edge environments.
DXNN
DEEPX supports custom NPU development through the DXNN SDK; its proprietary NPUs achieve higher power efficiency than traditional GPUs and incorporate unique patented technologies.

DEEPX, an AI semiconductor specialist, develops NPUs and solutions for servers, edge devices, and IoT devices, and has recently expanded into the LLM inference–focused semiconductor market.

LPU
The LPU is the world’s first semiconductor optimized for generative AI inference, offering higher memory bandwidth efficiency than GPUs. It reduces cost and improves throughput by using LPDDR instead of HBM, and supports efficient inter-chip communication through scalable synchronized link technology.
ORION Appliance
The HyperAccel Orion server, powered by LPU chips, supports large LLMs with up to 132 billion parameters and delivers real-time performance of 175 tokens per second. It offers a user-friendly interface and, through FPGA-based pre-validation followed by ASIC mass production, achieves superior energy and cost efficiency compared to existing GPU systems.
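As a back-of-envelope sketch of why memory bandwidth efficiency matters here (illustrative arithmetic only; the weight precision is an assumption, not a vendor specification), LLM decoding is typically memory-bandwidth-bound: generating each token streams roughly all model weights from memory, so tokens/s ≈ aggregate bandwidth ÷ model size in bytes.

```python
def required_bandwidth_gbs(params_billion: float, bytes_per_param: float,
                           tokens_per_s: float) -> float:
    """Aggregate memory bandwidth (GB/s) needed if each generated token
    streams all model weights from memory once (bandwidth-bound decoding)."""
    model_size_gb = params_billion * bytes_per_param  # 1e9 params * bytes -> GB
    return model_size_gb * tokens_per_s

# The workload quoted above: 132B parameters at 175 tokens/s.
# Assuming 8-bit weights (1 byte/param) -- an assumption for illustration:
bw = required_bandwidth_gbs(132, 1.0, 175)
print(f"~{bw:,.0f} GB/s aggregate bandwidth")  # ~23,100 GB/s across the system
```

The estimate shows the aggregate bandwidth must come from many memory channels spread over multiple chips, which is where per-chip bandwidth *efficiency* (extracting usable bandwidth from cheaper LPDDR rather than HBM) and efficient inter-chip links become the deciding factors.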
HyperDex
HyperDex is a modular full-stack software framework that integrates hardware and AI applications, providing automatic parallelization, memory optimization, model mapping, and latency minimization. It supports the ONNX format and is offered as SaaS/PaaS, enabling customized model deployment for users.

HyperAccel is a company specializing in generative AI accelerator semiconductors and server solutions. Having been the first in the world to develop chips optimized for transformer-based LLM inference, it aims to supply innovative semiconductors that can replace GPUs in the generative AI market.

T432
A PCIe form-factor transcoder equipped with four Codensity G4 ASICs, delivering four times the processing performance of the T408 while supporting the same codecs and HDR features.
Quadra T2A
An AIC (add-in card) form-factor VPU equipped with two Codensity G5 ASICs, supporting real-time video encoding at up to 8K resolution with performance optimized for high-density streaming environments.

NETINT Technologies is a Canadian semiconductor company founded in 2015 that develops ASIC-based VPU SoC solutions for video encoding and AI image processing, delivering high-density, low-power, low-latency performance for cloud gaming, streaming, AR/VR, and remote desktop applications.
T408
A U.2 form-factor video transcoder powered by the Codensity G4 ASIC, supporting the HEVC and H.264 codecs and HDR features such as HDR10 and HDR10+.
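To make the codec and HDR10 support concrete, the sketch below assembles the standard colour-signaling options for an HDR10 HEVC encode (shown with ffmpeg's software libx265 encoder purely for illustration; a T408-class transcoder performs the equivalent encode in hardware, and its actual ffmpeg integration flags are not shown here).

```python
def hdr10_hevc_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command line for an HEVC encode that signals
    HDR10 colour metadata. Illustrative software encode via libx265."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",
        # HDR10 = BT.2020 primaries + SMPTE ST 2084 (PQ) transfer function
        "-color_primaries", "bt2020",
        "-color_trc", "smpte2084",
        "-colorspace", "bt2020nc",
        dst,
    ]

print(" ".join(hdr10_hevc_cmd("in.mp4", "out_hdr10.mp4")))
```

The point of a hardware transcoder is that it carries this same BT.2020/PQ signaling through the encode while offloading the computationally expensive HEVC compression from the host CPU.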
