TLDR
- Nvidia has released Nemotron 3 Nano, the first in its Nemotron 3 family of open-source AI models.
- Nemotron 3 Nano delivers 3.3x faster inference than Qwen3-30B-A3B and 2.2x faster than GPT-OSS-20B on a single H200 chip.
- The Nemotron 3 family includes three models — Nano, Super, and Ultra — all supporting up to 1 million token context length.
- All models are open-source and come with weights, training recipes, and datasets, including a 2.5-trillion-token English corpus.
- China has launched an antitrust investigation into Nvidia, focusing on its 2020 acquisition of Mellanox.
Nvidia has introduced its new Nemotron 3 family of open-source AI models, beginning with the release of Nemotron 3 Nano. The models are designed for agentic tasks such as writing, reasoning, and programming, and aim to deliver stronger performance at lower cost. The launch follows growing momentum from Chinese tech companies in the open-source AI space. The larger models, Nemotron 3 Super and Ultra, are slated for release in the first half of 2026.
Nvidia Expands Open-Source AI Portfolio
Nemotron 3 Nano delivers improved efficiency compared with earlier Nemotron releases: it activates fewer parameters per forward pass while improving benchmark accuracy. Nvidia said the model supports context lengths of up to one million tokens. In Nvidia's performance testing, Nemotron 3 Nano surpassed GPT-OSS-20B and Qwen3-30B-A3B, delivering 3.3 times higher inference throughput than Qwen3-30B-A3B on a single H200 chip.
It also runs 2.2 times faster than GPT-OSS-20B under similar conditions. The Nemotron 3 family includes Nano, Super, and Ultra models, each built on a hybrid Mamba-Transformer mixture-of-experts (MoE) architecture. Super and Ultra will add Latent MoE, Multi-Token Prediction layers, and NVFP4 training.
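For readers who want to try the model, here is a minimal sketch of running it through Hugging Face transformers. The model ID, prompt, and generation settings are assumptions for illustration; the announcement does not specify repository names, so check Nvidia's Hugging Face page for the actual path.

```python
# Minimal sketch (not an official example): running Nemotron 3 Nano through
# Hugging Face transformers. The model ID below is an assumed placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Nemotron-3-Nano"  # hypothetical ID, not confirmed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory relative to fp32
    device_map="auto",           # place layers on available GPUs automatically
    trust_remote_code=True,      # hybrid Mamba-Transformer models often ship custom code
)

# A chat-style prompt for one of the agentic tasks the model targets.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```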
Nvidia said the upcoming models will also support budget control at inference time. Releases will include weights, training recipes, and documentation, and all models will be open-source with redistribution rights. Published datasets include Nemotron-CC-v2.1, a corpus of 2.5 trillion English tokens; code-focused sets such as Nemotron-CC-Code-v1 and GitHub-sourced data; and synthetic datasets for math, STEM, and reinforcement learning tasks.
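As a quick illustration of what the data release makes possible, the sketch below streams a few records from the pretraining corpus with the Hugging Face datasets library. The repository ID and field layout are assumptions, since the announcement does not give exact paths.

```python
# Minimal sketch: streaming the released pretraining corpus with the
# Hugging Face `datasets` library. The repository ID is an assumed
# placeholder; consult Nvidia's dataset cards for the exact path.
from datasets import load_dataset

# streaming=True reads records lazily instead of downloading the full
# multi-trillion-token corpus up front.
ds = load_dataset("nvidia/Nemotron-CC-v2.1", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example)  # field names depend on the actual dataset card
    if i >= 2:
        break
```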
Chinese Models Gain Ground as Scrutiny Grows
Chinese open-source models continue to grow in usage across global markets. Companies such as DeepSeek, Moonshot AI, and Alibaba have released widely adopted systems. Airbnb confirmed it uses Alibaba’s Qwen model in production.
Meanwhile, Meta is shifting away from open-source development; it is now testing an internal model known as Avocado, expected to launch in early 2026. US regulators have raised concerns about Chinese AI tools, and federal and state agencies have banned their use over security risks. These restrictions have added pressure to global competition in the AI space.
China has responded with an antitrust investigation into Nvidia, focused on the company's 2020 acquisition of Mellanox. Regulators are reviewing whether Nvidia met the conditions attached to the original approval of the deal. The case remains under review by Chinese authorities, Nvidia has not publicly commented on the investigation, and no final outcome has been announced.


