AI Accelerator

Daewon CTS DIA NEXUS proposes AI accelerator–based systems optimized for diverse AI use cases, operational environments, and conditions.

Contact Us
Business Area

From data centers
to edge devices.

We support AI training and inference from data centers to edge devices using AI accelerators from leading global companies such as HyperAccel, DEEPX, NVIDIA, and AMD.

Central Processing Unit

CPU

We offer a range of GPU servers built on the latest Intel and AMD CPUs, giving customers a wider choice of configurations.

Neural Processing Unit

NPU

Offering NPUs optimized for diverse edge environments with ultra-low power and high performance using the DX-M1, DX-V3, and DX-H1 series.

Video Processing Unit

VPU

Providing VPUs (Video Processing Units) suitable for services such as live streaming, security surveillance, and cloud gaming.

Graphics Processing Unit

GPU

Providing solutions to optimize AI model performance using Blackwell- and Hopper-based GPU systems.

LLM Processing Unit

LPU

Providing LLM Processing Units (LPU) that can dramatically reduce generative AI inference costs in both data center and edge environments.

Challenge

DIA NEXUS
AI Accelerator Portfolio

Recommendations tailored for training and inference purposes

Training and inference workloads place different demands on hardware. Daewon CTS clearly understands these characteristics and provides accelerator solutions optimized for each specific use case.

Proposals suited to specific operational environments and conditions

Daewon CTS provides customized solutions tailored to each environment, from data center rack space and facility requirements to low-power constraints and installation space conditions at the edge.

AI infrastructure
architecture design

With expertise in designing AI infrastructure for heterogeneous accelerator environments, Daewon CTS offers optimal system architectures and management strategies aligned with customer needs.

Partner Ecosystem

DIA NEXUS
Partner Ecosystem

Daewon CTS develops adoption and implementation strategies from solution selection to integration with existing systems, establishing build and execution plans from an AI full-stack perspective.

NPU

DEEPX NPUs, including the DX-M1, DX-V3, and DX-H1 series, deliver ultra-low-power, high-performance AI inference optimized for diverse edge environments.

Module

We offer DX-M1 and DX-H1 modules, designed to easily integrate AI capabilities into a variety of edge devices. This enables seamless addition of AI functionality across diverse edge environments.

DXNN

DEEPX supports AI application development on its NPUs through the DXNN SDK, and its proprietary NPUs achieve higher power efficiency than traditional GPUs while incorporating unique patented technologies.

DEEPX, an AI semiconductor specialist, develops NPUs and solutions for servers, edge devices, and IoT devices, and has recently expanded into the LLM inference–focused semiconductor market.

LPU

The LPU is the world’s first semiconductor optimized for generative AI inference, offering higher memory bandwidth efficiency than GPUs. It reduces cost and improves throughput by using LPDDR instead of HBM, and supports efficient inter-chip communication through scalable synchronized link technology.
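
The bandwidth claim reflects a general property of generative-AI decoding: at small batch sizes, each generated token requires streaming the model weights from memory, so effective memory bandwidth, not raw compute, bounds throughput. A rough back-of-the-envelope sketch (the model size and bandwidth figures below are illustrative assumptions, not HyperAccel specifications):

```python
# Back-of-the-envelope estimate of bandwidth-bound token generation.
# All numeric figures below are illustrative assumptions, not vendor specs.

def tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode throughput when generating each token
    requires streaming all model weights from memory once."""
    return bandwidth_gb_s / model_size_gb

# e.g. a 7B-parameter model in FP16 occupies roughly 14 GB of weights
fp16_7b_gb = 14.0

# hypothetical effective memory bandwidths (GB/s) for two memory types
for name, bw in [("multi-channel LPDDR", 400.0),
                 ("single-stack HBM", 460.0)]:
    tps = tokens_per_second(fp16_7b_gb, bw)
    print(f"{name}: <= {tps:.0f} tokens/s")
```

The sketch shows why memory-bandwidth efficiency per dollar matters: if LPDDR delivers bandwidth comparable to HBM at lower cost, the achievable tokens-per-second ceiling is similar while the memory bill is smaller.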

ORION Appliance

The HyperAccel ORION server, powered by LPU chips, supports large LLMs with up to 132 billion parameters, delivers real-time performance of 175 tokens per second, and offers a user-friendly interface. Validated first on FPGA and then mass-produced as an ASIC, it achieves superior energy and cost efficiency compared to existing GPU systems.
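
The quoted throughput maps directly to user-perceived latency. A minimal arithmetic sketch (the 300-token response length is an illustrative assumption):

```python
# Convert the quoted 175 tokens/s throughput into user-facing latency.

RATE_TPS = 175.0  # real-time figure quoted for the ORION server

per_token_ms = 1000.0 / RATE_TPS
print(f"per-token latency: {per_token_ms:.1f} ms")  # prints "5.7 ms"

# Time to stream a hypothetical 300-token response:
response_tokens = 300
print(f"300-token response: {response_tokens / RATE_TPS:.1f} s")  # prints "1.7 s"
```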

HyperDex

HyperDex is a modular full-stack software framework that integrates hardware and AI applications, providing automatic parallelization, memory optimization, model mapping, and latency minimization. It supports the ONNX format and is offered as SaaS/PaaS, enabling customized model deployment for users.

HyperAccel is a company specializing in generative-AI accelerator semiconductors and server solutions. Having been the first in the world to develop chips optimized for transformer-based LLM inference, it aims to supply innovative semiconductors that can replace GPUs in the generative AI market.

T432

The T432 is a PCIe form-factor transcoder equipped with four Codensity G4 ASICs, delivering four times the processing performance of the T408 while supporting the same codecs and HDR features.

Quadra T2A

The Quadra T2A is an AIC form-factor VPU equipped with two Codensity G5 ASICs, supporting real-time video encoding at up to 8K resolution and delivering performance optimized for high-density streaming environments.

NETINT

NETINT Technologies is a Canadian semiconductor company founded in 2015 that develops ASIC-based VPU SoC solutions for video encoding and AI image processing, delivering high-density, low-power, low-latency performance for cloud gaming, streaming, AR/VR, and remote desktop applications.

T408

The T408 is a U.2 form-factor video transcoder powered by the Codensity G4 ASIC, supporting the HEVC and H.264 codecs and HDR formats such as HDR10 and HDR10+.
