
A next-generation, high-efficiency edge AI chip designed for LLM inference. Its compute-in-memory (CIM) architecture enables privacy-focused local processing, its ultra-compact design suits space-constrained devices, and multi-chip scalability supports text-to-image generation, real-time transcription, and AI-agent workloads across tablets, PCs, and edge devices.

Specifications

Data Type: INT8 / INT16 / FP16 / FP32 / bFP16 / bFP24
Physical Computing Power: 160 TOPS @ INT8, 100 TFLOPS @ bFP16
Typical Power Consumption: 10 W
Memory Capacity: Up to 48 GB LPDDR5
Memory Bit Width: Up to 192-bit
Memory Bandwidth: Up to 153.6 GB/s
Host Channel: PCIe Gen4 x4
Inter-Chip Interconnect: HM-Link, 16 GB/s
Power Management: Low-power MCU
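The memory figures above are self-consistent, and they bound LLM decode speed. A minimal sketch, assuming the 153.6 GB/s figure corresponds to LPDDR5-6400 transfers on the full 192-bit bus, and assuming a hypothetical 7B-parameter model quantized to INT8 (roughly 7 GB of weights) as the workload:

```python
# Back-of-envelope numbers derived from the spec sheet above.
# Assumption (not stated in the source): 153.6 GB/s = LPDDR5-6400 x 192-bit bus.

def lpddr_bandwidth_gbps(bus_bits: int, transfer_rate_mtps: int) -> float:
    """Peak DRAM bandwidth in GB/s: bus width in bytes x transfers/s."""
    return (bus_bits / 8) * transfer_rate_mtps / 1000

def decode_tokens_per_s(bandwidth_gbps: float, model_gbytes: float) -> float:
    """Upper bound on decode speed when each generated token must stream
    all model weights once (memory-bandwidth-bound regime, batch size 1)."""
    return bandwidth_gbps / model_gbytes

bw = lpddr_bandwidth_gbps(192, 6400)
print(f"peak bandwidth: {bw:.1f} GB/s")        # matches the 153.6 GB/s spec

# Hypothetical 7B-parameter INT8 model (~7 GB of weights)
print(f"decode bound:   {decode_tokens_per_s(bw, 7.0):.1f} tokens/s")
```

This is why a bandwidth-oriented (rather than purely TOPS-oriented) design matters for local LLM inference: at batch size 1, decode throughput is limited by how fast weights stream from memory, not by the 160 TOPS peak.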