Compute

Daewon CTS provides high-performance servers, intelligent edge solutions, large-scale clusters, and AI data center networking solutions for AI workloads.

Contact Us
Business Area

From AI servers
to hybrid servers.

We offer a wide range of options: AI servers from leading global companies, edge and inference servers equipped with NPUs and LPUs, and custom hybrid servers that combine GPUs, NPUs, and LPUs to match specific workload characteristics.


GPU Server

We supply GPU servers from KAYTUS, Supermicro, Dell, and Lenovo.


LPU Server

Providing servers equipped with HyperAccel LPUs.


NPU Server

Providing servers equipped with DEEPX NPUs.


Hybrid Server

Providing hybrid servers, such as GPU + NPU or GPU + LPU configurations, tailored to customer requirements.

Why Choose Us

Why DIA NEXUS's
AI Server Proposal is unique.

AI infrastructure design
  • We propose customized AI infrastructure design plans by comprehensively considering a customer’s AI transformation (AX) strategy, organizational size, budget, and current computing environment. For small and medium-sized enterprises, we recommend scalable infrastructure to minimize initial costs. For large enterprises, we offer flexible hybrid architectures that can expand computing power and storage capacity. Additionally, we incorporate platforms with user-friendly interfaces and automation features from the infrastructure design stage.

Performance & Capacity
  • With the rise of the generative AI era, many customers are showing interest not only in AI/ML models but also in LLMs (large language models), SLMs (small language models) optimized for specific industries or tasks, and ultra-lightweight models for edge devices. Daewon CTS evaluates the size and intended use of the selected models to propose appropriate performance and capacity specifications for large clusters, high-performance servers, workstations, edge servers, and edge devices.

Integrated technical support
  • AI infrastructure represents a high-investment, critical computing environment, making it essential to maintain optimal resource utilization and efficiency. Daewon CTS offers all the solutions needed to build the AI infrastructure stack, including servers, storage, networking, and MLOps platforms. Technical support for AI infrastructure and equipment designed and deployed by Daewon CTS is provided through a single channel, ensuring fast and reliable service.

DIA NEXUS Summit 2024

DIA NEXUS
Summit 2024

Partner Ecosystem

DIA NEXUS
Partner Ecosystem

DAEWON CTS is expanding partnerships with leading global server, network equipment, and storage companies to propose tailored solutions that enhance AI infrastructure investment efficiency and resource utilization in line with diverse customer needs.

AI GPU Servers

High-performance AI servers

KAYTUS offers a variety of server options for high-performance AI training and inference. Servers combining NVIDIA GPUs with Intel or AMD processors are optimized for deep learning, large-scale model training, and AI inference, providing high flexibility and scalability.

Intelligent edge solutions

KAYTUS edge servers are optimized for high-performance AI computing, suitable for various industries and complex application scenarios. With strong storage scalability and configurable GPU options, they can be flexibly adapted to on-site environments.

Large-scale clusters and cooling solutions

KAYTUS collaborates with partners to support the design and deployment of large-scale clusters tailored to customer specifications. In this process, it customizes optimal hardware and network infrastructure for AI and data analytics to ensure cluster stability and performance.


KAYTUS is a leading IT solutions provider in the AI era, offering a variety of AI servers and cloud solutions tailored to customer needs. As a global leader in AI computing solutions, KAYTUS supports the optimization of IT resources and the successful adoption of AI.


Diverse
product lines

Supermicro offers a broad range of server products supporting AI, deep learning, cloud data centers, and edge computing. Customers can choose from high-density SuperBlade systems, 2U and 4U server form factors, SuperEdge systems for edge deployments, and H100 AI superclusters, providing flexible options to meet diverse requirements.


Supermicro is a global company providing high-performance computing and data center infrastructure solutions. By leveraging its diverse infrastructure offerings, organizations can flexibly meet the demands of AI/ML, LLMs, HPC, and other workloads.

Plug-and-play
rack scale

Supermicro’s rack-scale solutions are turnkey offerings that cover design, testing, and installation. They provide optimized designs for detailed aspects such as rack layout, power budget, and network configuration, and ensure high-quality systems through testing and validation tailored to diverse workloads.

Liquid cooling system

To efficiently manage the high heat generated in AI and high-performance computing environments, Supermicro provides a variety of liquid cooling solutions. Direct liquid-to-liquid cooling and immersion cooling technologies maintain CPU and GPU performance while achieving high energy efficiency.


AI-optimized servers

Dell’s PowerEdge servers combine NVIDIA GPUs with the latest Intel Xeon and AMD EPYC processors, delivering exceptional performance for AI model training and inference. Flagship models like the PowerEdge XE9680 are optimized for compute-intensive workloads, including large-scale deep learning and generative AI, and feature proprietary Smart Flow technology to enhance cooling efficiency and reduce power consumption.

Servers for
Edge AI

Dell supports real-time AI inference and analytics at the edge, where data is generated, through its rugged PowerEdge XR servers and the Dell NativeEdge remote management platform. Additionally, APEX services deliver AI infrastructure as a service, enabling rapid deployment and flexible scaling of consistent AI environments from the edge to the cloud without significant upfront costs.

High-density
rack solutions

Dell supports large-scale GPU cluster deployment even in limited spaces through its ultra-dense rack-integrated solutions (IR5000 series). It offers both air and liquid cooling options to maximize GPU density, and innovative cooling technologies like Smart Flow significantly enhance data center energy efficiency and operational convenience.


Dell Technologies, a global IT leader, meets diverse AI adoption and deployment needs by offering high-performance AI servers, flexible edge solutions, and high-density, efficient cooling technologies.


Future-oriented
AI servers

The Lenovo ThinkSystem server platform, equipped with the latest Intel Xeon processors and NVIDIA GPUs, maximizes AI computing performance even in compact form factors. The ThinkSystem V4 series supports configurations with up to eight GPUs, delivering exceptional compute density and energy efficiency, and notably supports NVIDIA Grace Blackwell superchips, enabling the deployment of future-oriented AI infrastructure.

Servers
for edge AI

The ThinkEdge product line, including the Lenovo ThinkEdge SE455 V3, delivers optimal edge AI solutions for real-time data processing and analytics in on-site environments. With robust performance and durability, it enables rapid AI inference across various settings such as manufacturing sites, smart cities, and retail, while TruScale pay-as-you-go services support flexible and cost-effective IT infrastructure scaling.

Large-scale
AI clusters

Lenovo’s Neptune liquid cooling technology dramatically alleviates the heat and power-consumption challenges of large-scale AI cluster deployments. It efficiently cools critical components such as CPUs and GPUs, reducing data center operating costs by up to 40% and enabling sustainable, high-performance AI infrastructure.


Lenovo leads AI innovation and sustainable technology adoption by offering high-density AI server platforms with the latest technologies, on-site edge AI solutions, flexible pay-as-you-go services, and high-performance computing environments utilizing the industry-leading Neptune liquid cooling technology.
