TL;DR
Nvidia has deepened its push into next-generation artificial intelligence infrastructure by backing SiFive in a $400 million funding round.
The investment values SiFive at approximately $3.65 billion and highlights Nvidia’s broader ambition to expand its influence beyond GPUs into the wider semiconductor ecosystem.
The round, led by Atreides Management, represents a significant step up from SiFive’s previous fundraising efforts and comes at a time when demand for AI computing power is accelerating globally. Rather than manufacturing chips, SiFive licenses CPU designs, a business model comparable to Arm’s, letting partners build customized silicon.
For Nvidia, the move reflects a strategic alignment with emerging chip architectures that could complement its dominance in AI accelerators.
SiFive targets AI performance limits
SiFive’s expansion into AI data center processors is driven by a critical industry challenge often referred to as the “memory wall.” This bottleneck occurs when processors are forced to wait for data transfers, limiting overall performance in complex AI workloads.
In some large-scale AI tasks, such as those involving large language models, GPUs can operate far below their full capacity due to delays in memory access. SiFive aims to address this inefficiency through architectural innovations, including selective cache bypassing. This approach allows certain data loads to skip Level 1 cache, ensuring that critical control operations are not slowed down by large volumes of model data.
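The intuition behind selective cache bypassing can be shown with a toy simulation. This is purely illustrative and makes no claim about SiFive's actual microarchitecture: it models a tiny LRU cache shared by a small, frequently reused "control" working set and a large, one-pass stream of "model data." When the streaming accesses are allowed to allocate cache lines, they evict the control data; when they bypass the cache, control accesses stay fast.

```python
from collections import OrderedDict

class L1Cache:
    """Toy fully associative LRU cache model (line-granular, illustrative only)."""
    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()   # insertion order tracks recency
        self.hits = 0
        self.misses = 0

    def access(self, addr, bypass=False):
        if bypass:
            # Bypassed (non-temporal) access: data is served without
            # allocating a cache line, so it cannot evict anything.
            self.misses += 1
            return
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)          # mark most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.num_lines:
                self.lines.popitem(last=False)    # evict least recently used
            self.lines[addr] = True

def control_hit_rate(bypass_streaming):
    """Hit rate of the small control working set, with or without
    the large streaming workload bypassing the cache."""
    cache = L1Cache(num_lines=64)
    control_set = list(range(16))          # small, hot control working set
    stream = iter(range(1000, 1_000_000))  # large one-pass model-data stream
    hits = total = 0
    for i in range(2000):
        # One control access, then a burst of streaming accesses.
        before = cache.hits
        cache.access(control_set[i % len(control_set)])
        hits += cache.hits - before
        total += 1
        for _ in range(8):
            cache.access(next(stream), bypass=bypass_streaming)
    return hits / total

print(f"control hit rate, streaming bypasses cache: {control_hit_rate(True):.2f}")
print(f"control hit rate, streaming fills cache:    {control_hit_rate(False):.2f}")
```

In this toy model the control working set fits comfortably in the cache, so with bypassing its hit rate is near 1.0, while without bypassing the streaming data flushes it out and the hit rate collapses. Real hardware uses mechanisms such as non-temporal load hints to achieve the same effect.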
The result is expected to be more consistent and efficient throughput, particularly for hyperscale cloud providers running increasingly complex AI systems. By tackling this bottleneck, SiFive positions itself as a key player in improving the real-world performance of AI infrastructure.
NVLink ecosystem expands reach
Nvidia’s investment is not just financial; it also represents a strategic effort to integrate SiFive’s RISC-V-based CPU designs into its broader AI ecosystem. Central to this vision is Nvidia’s NVLink Fusion platform, which enables seamless communication between CPUs, GPUs, and other accelerator chips.
Through this platform, partners can combine custom processors with Nvidia’s interconnect technology, networking hardware, and software stack. This creates a modular yet tightly integrated system optimized for large-scale AI workloads.
By supporting SiFive, Nvidia effectively strengthens its ecosystem while allowing third-party chip designs to operate within its infrastructure. Even when external silicon is used for computation, Nvidia can still capture value through its networking, interconnect, and management technologies.
This approach has led some analysts to describe Nvidia’s model as creating a form of “infrastructure layer monetization,” where the company benefits regardless of which chips power the core computations.
Rising competition in chip architectures
The partnership also reflects a broader shift in the semiconductor industry toward more open and flexible architectures. SiFive’s foundation in RISC-V, an open-standard instruction set, offers an alternative to proprietary systems traditionally dominated by companies like Arm.
As AI workloads become more specialized, companies are increasingly exploring custom chip designs tailored to specific applications. This trend is intensifying competition not only among chipmakers but also between entire ecosystems.
While Nvidia is building a vertically integrated AI platform, rival approaches are emerging around open standards such as UALink, supported by competitors like AMD and Intel. The outcome of this competition could shape the future of AI infrastructure, influencing how data centers are built and optimized.
Ultimately, Nvidia’s investment in SiFive underscores a key industry reality: the race for AI dominance is no longer just about faster chips, but about building comprehensive ecosystems that connect hardware, software, and data at scale.