2026 CES Observation Igniting a New Era of AI Computing Power
The 2026 Consumer Electronics Show (CES) opened in Las Vegas on January 6, with new artificial intelligence applications and robotics technology emerging as the biggest highlights of this year's exhibition. The defining trend this year is that AI has been deeply integrated into hardware, evolving from a "showcase feature" into the "core engine" driving product innovation.
I. NVIDIA Vera Rubin: Paradigm Shift in AI Computing Power from "Training" to "Inference"
NVIDIA's launch of the Vera Rubin AI computing platform at CES 2026 marks its strategic transformation from a traditional GPU architecture to full-stack co-design. The platform consists of six chips, enabling end-to-end optimization covering computing, networking, storage, and security.
- Vera CPU: An 88-core Arm-based server chip supporting third-generation confidential computing, with doubled I/O bandwidth, designed specifically to drive inference workloads for AI agents.
- Rubin GPU: Equipped with a third-generation Transformer Engine, it delivers 50 PFLOPS of NVFP4 inference compute (a 5x increase over the Blackwell architecture) from only a 60% rise in transistor count. Its NVFP4 tensor cores dynamically adjust compute paths to balance precision and efficiency.
- 6th-Generation NVLink Switch Chip: A single chip provides 400Gb/s bandwidth, enabling 240TB/s interconnection bandwidth for 144 GPUs within a rack—surpassing the total cross-section bandwidth of the global Internet.
- Full-Stack Collaborative Components: The BlueField-4 DPU manages AI context memory, the Spectrum-X switch adopts silicon photonics to reduce energy consumption, and the ConnectX-9 network card supports 1.6 Tb/s RDMA acceleration.
The platform has entered mass production and will be deployed by cloud providers such as Microsoft Azure and AWS in the second half of 2026, with the goal of cutting inference costs to one-tenth of the Blackwell platform's.
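The headline figures above lend themselves to a quick back-of-envelope check. The sketch below derives the per-GPU interconnect share and the performance gained per unit of added silicon from the article's own numbers; the derived values are simple arithmetic, not NVIDIA specifications.

```python
# Back-of-envelope checks on the Vera Rubin figures quoted above.
# Inputs are the article's numbers; outputs are derived, not official specs.

RACK_GPUS = 144
RACK_NVLINK_TB_S = 240          # aggregate intra-rack NVLink bandwidth, TB/s

# Average NVLink share per GPU in a 144-GPU rack
per_gpu_tb_s = RACK_NVLINK_TB_S / RACK_GPUS

RUBIN_SPEEDUP = 5.0             # NVFP4 inference vs. Blackwell (article's claim)
TRANSISTOR_GROWTH = 1.6         # "only a 60% rise in transistor count"

# Performance gained per unit of added transistor budget
perf_per_transistor = RUBIN_SPEEDUP / TRANSISTOR_GROWTH

print(f"NVLink share per GPU: {per_gpu_tb_s:.2f} TB/s")
print(f"Perf per transistor vs. Blackwell: {perf_per_transistor:.2f}x")
```

The second figure shows why the claimed 5x uplift is notable: most of the gain would have to come from the NVFP4 data path and architecture, not from simply adding silicon.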
II. AMD: Full-Stack AI Coverage, Dual Breakthroughs in "Performance-Energy Efficiency" from Gaming to the Edge
AMD has built an AI ecosystem spanning cloud to endpoint through its Ryzen AI 400 series and the Ryzen 7 9850X3D processor.
- Ryzen AI 400 Series: Integrates an XDNA 2 NPU delivering 60 TOPS and supports 8533 MT/s memory, a 9% NPU performance increase over the previous generation. Mobile parts use a unified memory architecture, and the 12-core Zen 5 CPU plus RDNA 3.5 GPU can run 100-billion-parameter models locally.
- Ryzen 7 9850X3D: Built on the Zen 5 architecture with 104 MB of cache and a 5.6 GHz boost clock, it leads the Intel Core Ultra 9 285K by 27% in gaming performance and delivers an 80% improvement in multitasking efficiency.
- FSR Redstone Technology: Integrates AI ray reconstruction and frame generation, boosting 4K ray-traced gaming frame rates by 4.7x and supporting over 200 games including Cyberpunk 2077.
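The claim that a 100-billion-parameter model can run locally is easier to evaluate with a rough memory-footprint estimate. The quantization levels below are illustrative assumptions, not AMD-stated configurations; the point is that unified memory only becomes sufficient at low-bit weights.

```python
# Rough weight-storage footprint for a 100B-parameter model at several
# quantization levels. Illustrative assumptions, not vendor specs.

PARAMS = 100e9          # 100 billion parameters (the article's figure)

def model_footprint_gb(params: float, bits_per_param: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: ~{model_footprint_gb(PARAMS, bits):.0f} GB")
```

At 16-bit precision the weights alone need around 200 GB, so local execution of a model this size plausibly depends on aggressive quantization plus the unified memory architecture the article mentions.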
AMD concurrently launched ROCm 7.2, unifying the development experience across Windows and Linux and improving AI inference efficiency 5x over the previous generation.
III. Qualcomm Flexion IQ10: Defining the "Robot Brain" to Drive the Commercialization of Physical AI
Qualcomm unveiled the Flexion IQ10 series of robot processors, specifically designed for industrial Autonomous Mobile Robots (AMRs) and humanoid robots.
- Hardware Architecture: Features an 18-core Oryon CPU (5x performance increase over the previous generation) + Adreno GPU + Hexagon NPU, with 700 TOPS of AI computing power and support for concurrent processing of 20 camera feeds.
- Software Ecosystem: Integrates a Vision-Language-Action (VLA) model, supports motion planning and a real-time safety subsystem, and complies with the SIL3 industrial safety standard.
- Partnerships: Qualcomm is working with Figure on general-purpose humanoid robots and with KUKA Robotics on next-generation industrial solutions, and Chinese customers have built service robots on the platform.
On the show floor, Qualcomm demonstrated the VinMotion humanoid robot powered by the IQ9 series, whose edge AI capabilities enable autonomous obstacle avoidance and task execution in complex environments.
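Dividing the Flexion IQ10's quoted compute across its 20 concurrent camera feeds gives a sense of the per-stream budget. The 30 fps frame rate below is an illustrative assumption, not a Qualcomm specification.

```python
# Per-stream compute budget implied by the Flexion IQ10 figures above.
# TOTAL_TOPS and CAMERAS are the article's numbers; FPS is assumed.

TOTAL_TOPS = 700        # total AI compute, TOPS (article's figure)
CAMERAS = 20            # concurrent camera feeds (article's figure)
FPS = 30                # assumed frame rate per camera (illustrative)

tops_per_camera = TOTAL_TOPS / CAMERAS
frames_per_second = CAMERAS * FPS
tera_ops_per_frame = TOTAL_TOPS / frames_per_second  # 1 TOPS = 1e12 ops/s

print(f"Compute per camera: {tops_per_camera:.0f} TOPS")
print(f"Ops budget per frame: ~{tera_ops_per_frame:.2f} trillion")
```

Roughly a trillion operations per frame is ample headroom for running perception and a VLA policy concurrently, which is consistent with the multi-camera workloads the platform targets.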
IV. Intel 18A Process Core Ultra 300 Series: A New Paradigm for "Battery Life-Performance" in AI PCs
Intel's Core Ultra 300 series based on the 18A process redefines the technological roadmap for AI PCs.
- Architectural Innovation: Combines a 16-core compute tile (4P + 8E + 4LPE) with a 12-core GPU tile, supports LPDDR5X-9600 memory, and features a 180-TOPS NPU, doubling AI performance over the previous generation.
- Graphics Performance: Integrates Arc B390 graphics with 12 Xe cores. XeSS frame generation brings gaming frame rates close to those of the RTX 4050, and at 2x upscaling, ray-tracing efficiency improves by 77%.
- Battery Life Breakthrough: Devices with a 99 Wh battery achieve 27 hours of runtime, and multi-core performance at 90 W TDP surpasses the AMD Ryzen AI 9 HX 370 by 10%.
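The battery-life claim can be sanity-checked with simple division: a 99 Wh pack lasting 27 hours implies the average platform power draw computed below.

```python
# Average platform power implied by the battery-life claim above.
# Both inputs are the article's figures.

BATTERY_WH = 99         # battery capacity, watt-hours
RUNTIME_H = 27          # claimed runtime, hours

avg_power_w = BATTERY_WH / RUNTIME_H
print(f"Implied average draw: {avg_power_w:.2f} W")
```

An average draw under 4 W for the whole platform, display included, is the kind of figure usually reached only in light workloads such as local video playback, so the claim likely refers to a best-case test rather than sustained AI use.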
The series has entered mass production, and manufacturers such as ASUS and Lenovo will launch the first products in Q1 2026.
V. Industry Implications: AI Computing Power Competition Enters a New Stage of "System-Level Innovation"
- NVIDIA's "Physical AI" strategy, through full-stack restructuring of chips, networking, and storage, compresses AI inference costs to one-tenth and drives a 4x improvement in Mixture-of-Experts (MoE) model training efficiency.
- AMD's "edge-cloud collaboration" creates a differentiated advantage through the Ryzen AI 400 series and ROCm ecosystem, with NPU computing power density leading Intel by 12%.
- Qualcomm's "edge intelligence" breakthrough: the Flexion IQ10, with 700 TOPS of computing power and automotive-grade energy efficiency, seizes the first-mover advantage in the industrial robot market.
- Intel's return to "process technology": the 18A process achieves a transistor density 3.8x that of TSMC's N3E, providing a new paradigm for balancing battery life and performance in AI PCs.
The global AI server market is projected to exceed $300 billion in 2026, with inference workloads accounting for over 60% of the total. NVIDIA's Vera Rubin platform and AMD's Ryzen AI series are likely to be the biggest beneficiaries.

