Business Area
Solve problems
with optimized solutions.
To effectively address the complexity of developing and deploying AI models consistently across accelerator environments with different architectures—such as GPU, NPU, LPU, and VPU—it is essential to integrate various tools and data platforms into a comprehensive toolchain. This approach resolves issues caused by differing optimization methods and compatibility requirements across accelerators, including management complexity, high operational costs, and long deployment times.

MLOps / LLMOps
MLOps
Leveraging an MLOps platform, we provide a consistent environment for AI model development, deployment, and management across diverse accelerators such as GPU, NPU, and LPU.

xPU Dev
Model
Optimization
We provide customized AI model optimization considering the diverse xPU architectures of edge devices.

GPU / NPU / LPU
SDK
We provide tools that enable easy deployment of AI models across various AI accelerators, allowing customers to quickly optimize and operate existing models on NPUs, GPUs, LPUs, and more.

Performance
Performance
Optimization
We provide advisory services for stable and trustworthy AI operations by continuously monitoring service performance and conducting security vulnerability assessments and management.
DIA NEXUS
Summit 2024

Partner Ecosystem
DIA NEXUS
Partner Ecosystem
Daewon CTS is expanding partnerships with industry leaders that provide AI chips (xPUs) of various architectures, along with development tools and platforms for rapid intelligent edge service development, building a technology ecosystem for intelligent control businesses. Through this, we deliver optimal solutions and service packages tailored to customer needs.
Partner Ecosystem
TEN
MLOps
AI Pub, TEN’s MLOps platform, is an integrated solution that efficiently manages the entire lifecycle of AI models from development to operation. By precisely allocating GPU resources, it maximizes resource efficiency and reduces costs, while enabling clear management of AI infrastructure per user and team. Its intuitive UI allows anyone to easily create and manage AI services, with support for seamless service updates and version control. Additionally, it provides real-time service monitoring and security vulnerability checks, ensuring a stable and secure AI operational environment.

Partner Ecosystem
SDK
Drop & Play
DeepX’s DXNN compiler is a tool that enables AI models to run quickly and efficiently on NPU chips. DXNN is compatible with widely used AI frameworks like PyTorch and TensorFlow, allowing developers to easily transfer pre-built AI models to NPU chips—almost like snapping together LEGO blocks. This development convenience enables rapid AI-powered product creation, such as adding intelligence to robots, without the need for additional development or complex configuration.

Partner Ecosystem
NetsPresso
Platform
NetsPresso is a solution for AI model development and optimization. It optimizes AI models by considering the characteristics of various xPU architectures, including NPU, Arm, x86, and Jetson, improving model performance while reducing memory usage and power consumption. Additionally, NetsPresso supports a wide range of AI models, flexibly meeting edge AI deployment requirements for enterprises and public institutions.

Partner Ecosystem
Tarantula Lakehouse
Tarantula DB
Partner Ecosystem
Data Platform
VAST Data
In increasingly complex data environments involving AI/ML, deep learning, generative AI, and HPC, the VAST Data platform offers innovative data processing and management solutions. This enables enterprises to build and operate data pipelines for AI and advanced analytics across on-premises, hybrid, and multi-cloud environments, aligned with the latest technologies and requirements.
Leader in the era of
AI data platforms.
VAST DataEngine
VAST DataEngine is a serverless, event-driven data platform. It is designed to automatically execute specific tasks whenever events occur that create, modify, or delete data.
VAST DataBase
VAST DataBase is a high-speed database system capable of handling large-scale data and ultra-fast queries. It is designed to search and analyze data much more quickly than conventional databases.
VAST DataStore
VAST DataStore is a storage system designed to securely and efficiently store large volumes of data. It delivers superior data protection and management performance compared to traditional storage systems.
VAST DataSpace
VAST DataSpace is a unified data management environment that enables fast access to data from anywhere in the world. It delivers local-like access speeds while ensuring consistent performance even when accessing physically distributed data.
