Nvidia to License NVLink Fusion Tech to Boost AI Chip Communication

20 May 2025
4 min read

News Synopsis

Nvidia announced on Monday that it will start licensing its cutting-edge chip interconnect technology, NVLink Fusion, to third-party companies. This move aims to improve chip-to-chip communication, a critical element in the development of advanced AI systems.

Tech Giants Onboard: Marvell and MediaTek Join In

The new iteration of NVLink, called NVLink Fusion, enables multiple chips to communicate at high speed within custom-built AI systems. Marvell Technology and MediaTek have already pledged to incorporate NVLink Fusion into their chip development projects.

From Graphics to AI Powerhouse

First introduced nearly a decade ago, NVLink was developed to enable high-speed data transfers between chips. It is currently featured in Nvidia’s GB200 system, which pairs two Blackwell GPUs with a Grace CPU.

Nvidia CEO Showcases Vision at Computex 2025

Nvidia CEO Jensen Huang unveiled NVLink Fusion during his Computex 2025 keynote at the Taipei Music Center. Computex, which runs from May 20 to 23, is one of the largest global tech expos, spotlighting innovation in computing and chip design.

“There was a time when 90% of my presentations focused on graphics chips,” Huang noted.

He emphasized the company’s evolution from a maker of gaming GPUs into a dominant force in AI chip development, spurred by the success of AI tools such as ChatGPT, which debuted in 2022.

Nvidia to Build New Headquarters in Taiwan

In addition to the tech announcement, Huang revealed Nvidia’s plans to establish a new headquarters in northern Taipei, strengthening the company’s footprint in Asia.

DGX Spark and Next-Gen Chip Plans

At the Computex event, Huang confirmed that Nvidia's desktop AI system, DGX Spark, is in full production and will be available within weeks. This system targets AI researchers and is expected to bring desktop-scale AI development into the mainstream.

Roadmap for Nvidia’s Future AI Chips

In March, during Nvidia’s annual developer conference, Huang shared an ambitious roadmap. The upcoming Blackwell Ultra chips are slated for launch later this year. Following that, the company will roll out:

  • the Rubin chip series, and

  • the Feynman processors, due by 2028.

These next-gen chips signal Nvidia's shift from just powering large-scale AI models to enabling AI-driven applications across industries.  

Nvidia's Strategic Focus Amid Global Trade Moves

Computex 2025, featuring 1,400 exhibitors, also holds significance as the first major tech event in Asia since the US floated sweeping tariffs to promote domestic semiconductor manufacturing. Nvidia’s announcements at the event reflect its adaptive global strategy.

About Nvidia

NVIDIA Corporation is an American multinational technology company headquartered in Santa Clara, California. Founded in 1993, it has grown to become a global leader, particularly renowned for its innovation in Graphics Processing Units (GPUs) and its pivotal role in the rise of artificial intelligence (AI).

Here's a comprehensive overview of NVIDIA:

1. History and Founding of NVIDIA:

  • Founders: NVIDIA was founded on April 5, 1993, by Jensen Huang (who remains its President and CEO as of 2025), Chris Malachowsky, and Curtis Priem. The idea was famously conceived at a Denny's diner in San Jose.

  • Initial Focus: The company's initial vision was to bring 3D graphics to the gaming and multimedia markets.

  • Key Milestones:

    • 1995: Released NV1, one of the first 3D accelerator processors.

    • 1999: Invented the GPU (Graphics Processing Unit) with the launch of the GeForce 256, which fundamentally reshaped the computing industry and sparked the growth of the PC gaming market. NVIDIA also went public on NASDAQ in the same year.

    • 2006: Unveiled CUDA (Compute Unified Device Architecture), a parallel computing platform and programming model that allowed developers to use GPUs for general-purpose computing. This was a seminal step toward the AI and deep learning technologies that followed.

    • 2012: Sparked the era of modern AI by powering the breakthrough AlexNet neural network.

    • 2018: Reinvented computer graphics with NVIDIA RTX™, the first GPU platform capable of real-time ray tracing.

    • 2020: Introduced the Ampere architecture for GPUs, significantly boosting AI and gaming performance.

    • 2022: Announced the Hopper architecture (e.g., H100 GPU), specifically designed for AI and data center advancements.

    • 2023: Became the seventh public U.S. company to be valued at over $1 trillion, driven by the immense demand for its data center chips amidst the AI boom.

    • 2024: Introduced the Blackwell architecture (e.g., the GB200 superchip), which Nvidia positions as the engine of a new industrial revolution, specifically targeting enterprise AI factories.

    • May 2025 (Computex): CEO Jensen Huang made significant announcements, including the launch of NVLink Fusion (allowing other chip designers to build powerful custom AI systems using NVIDIA’s interconnect technology), DGX Spark (a desktop AI PC), and plans to build a new Taiwan headquarters.

2. Products and Solutions: NVIDIA's product portfolio is diverse and spans several high-growth markets:

  • Graphics Processing Units (GPUs):

    • GeForce Series: Primarily for consumer gaming and content creation (RTX series with ray tracing and AI capabilities like DLSS).

    • Quadro / RTX A/PRO Series: Professional GPUs for design, engineering, scientific visualization, media & entertainment, and AI workstations.

  • Data Center & AI: This is currently NVIDIA's fastest-growing and most strategic segment.

    • NVIDIA H100/H200 and Blackwell Series (GB200/GB300) GPUs: The core of AI training and inference.

    • NVIDIA DGX Systems: Integrated AI supercomputers (e.g., DGX Cloud, DGX Station, DGX SuperPOD) for enterprise AI development and deployment.

    • NVIDIA Grace CPU & Grace Hopper Superchips: Combining high-performance CPUs with GPUs for demanding AI and HPC workloads.

    • NVIDIA Mellanox (Networking): High-speed networking solutions (InfiniBand, Ethernet) critical for connecting massive GPU clusters in data centers.

    • NVIDIA BlueField DPUs (Data Processing Units): Offloading data processing tasks to free up CPUs and accelerate data center operations.

  • Software Platforms:

    • CUDA: The foundational parallel computing platform and programming model for NVIDIA GPUs; a brief illustrative code sketch appears at the end of this section.

    • NVIDIA AI Enterprise: An end-to-end software suite for AI development and deployment.

    • NVIDIA NIM: Microservices for deploying optimized AI inference and improving agent accuracy.

    • NVIDIA NeMo: Framework for building, customizing, and deploying generative AI models.

    • NVIDIA Omniverse™: A platform for 3D design collaboration, digital twin creation, and building metaverse applications.

    • NVIDIA DRIVE: A comprehensive AI platform for autonomous vehicles, encompassing hardware, software, and simulation environments.

    • NVIDIA Isaac: A platform for developing, simulating, and deploying AI-powered robots.

  • Embedded Systems: Tegra series SoCs for automotive infotainment, robotics, and other embedded applications.
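
To make the CUDA programming model mentioned above more concrete, below is a minimal, illustrative sketch (not drawn from NVIDIA documentation) of a CUDA C++ program that adds two arrays on the GPU. The kernel name vecAdd and the array sizes are arbitrary choices for this example; the point is the general pattern of copying data to the GPU, launching many lightweight threads in parallel, and copying the result back.

    // Minimal illustrative CUDA C++ example: element-wise vector addition.
    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    int main() {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *ha = new float[n], *hb = new float[n], *hc = new float[n];
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device (GPU) buffers, filled by copying the inputs over.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and spot-check one value.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        std::printf("c[0] = %f (expected 3.0)\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        delete[] ha; delete[] hb; delete[] hc;
        return 0;
    }

That same pattern of massively parallel execution is what later made GPUs, and CUDA, the workhorse of deep learning training and inference.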

Conclusion 

Nvidia’s decision to offer its NVLink Fusion interconnect technology to external chipmakers like Marvell and MediaTek marks a significant step toward democratizing high-speed chip-to-chip AI communication. Announced during CEO Jensen Huang’s keynote at Computex 2025, this move aligns with Nvidia's vision of powering the future of AI beyond graphics.

The company, now at the heart of AI hardware innovation, continues to push forward with developments such as the DGX Spark desktop AI system and the upcoming Blackwell Ultra, Rubin, and Feynman chip series. Huang also announced a new Nvidia headquarters in Taipei, underscoring the company’s commitment to global expansion.

With AI-driven demand surging post-ChatGPT, and Nvidia expanding its presence in both software and hardware, the company remains a cornerstone of the evolving AI landscape. As the industry braces for geopolitical shifts, Nvidia’s approach to open innovation and collaboration is poised to shape the next era of AI computing.
