DeepSeek's $1 Trillion Disruption
ALSO: Zuckerberg's $65B Bet

Recent Insights
DeepSeek: R1 Sends Shockwaves
DeepSeek: Janus-Pro-7B Unveiled
Meta: Zuckerberg’s $65B Bet on AI
Alibaba: Qwen2.5-1M Updates
DeepSeek’s R1 Shockwave
Just over a week ago, Chinese AI lab DeepSeek released DeepSeek-R1, an open-source reasoning model that reportedly matches or surpasses OpenAI's o1 on several benchmarks, at just 5-10% of o1's API price for developers.
The impact is now hitting home. Wall Street reacted sharply, with Nvidia taking the biggest hit: nearly $600 billion wiped from its market cap in a single morning, a drop NYU's Gary Marcus described as “a Stargate’s worth.”
It was the largest single-day loss in U.S. stock market history, more than double the $279 billion in market cap Nvidia shed on September 3, 2024.
Nvidia CEO Jensen Huang saw a staggering $21 billion erased from his net worth, and the sell-off echoed JPMorgan CEO Jamie Dimon's recent caution that the U.S. stock market was overinflated.

Source: DeepSeek
R1’s Key Details:
Innovative Approach: Unlike traditional GPT models, R1 uses an inference-time reasoning method similar to OpenAI’s o1. Responses take longer to produce, but they are more reliable in domains such as physics, math, and other sciences.
Model Variants: The full model has 671 billion parameters, but smaller "distilled" versions, down to 1.5 billion parameters, can run locally on a laptop.
Benchmark Performance: DeepSeek-R1 outperforms o1 on several critical benchmarks, including AIME, MATH-500, and SWE-bench Verified.
Commercial Use: Available under an MIT license, the model costs far less than o1 ($0.14 vs. $7.50 per million input tokens); a minimal API sketch follows this list.
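Because DeepSeek's API is OpenAI-compatible, trying R1 at that price takes only a few lines. Here is a minimal sketch; the base URL and the deepseek-reasoner model name follow DeepSeek's API documentation, while the prompt and the environment variable are illustrative:

```python
# Sketch of calling R1 through DeepSeek's OpenAI-compatible API.
# The base_url and "deepseek-reasoner" model name follow DeepSeek's docs;
# set DEEPSEEK_API_KEY in your environment first.  pip install openai
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 endpoint per DeepSeek's docs
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
print(response.choices[0].message.content)
```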
Why It Matters: This marks a significant milestone for open-source AI, achieving parity with ChatGPT’s capabilities on key benchmarks.
Ironically, it’s not OpenAI but the Chinese company DeepSeek that is openly sharing its models and training methodologies, advancing the field.
It's worth highlighting that R1's distilled versions can run entirely offline on your own computer, letting you use them anywhere, even without an internet connection; a local-inference sketch follows.
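Here is a minimal local-inference sketch for the smallest distilled variant, using Hugging Face transformers. The checkpoint name follows DeepSeek's published releases; the prompt and sampling settings are illustrative assumptions, not an official recipe:

```python
# Minimal local-inference sketch for a distilled R1 variant.
# The 1.5B model fits comfortably on a laptop; no internet is needed
# after the initial download.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    device_map="auto",           # CPU or GPU, whichever is available
)

# R1-style models emit their chain of thought before the final answer.
messages = [{"role": "user", "content": "What is the derivative of x^3 + 2x?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```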
DeepSeek’s Janus-Pro-7B

Source: DeepSeek
Hot on the heels of R1, DeepSeek has unveiled yet another open-source AI model, Janus-Pro-7B, sending fresh ripples through the AI community.
This multimodal model both analyzes and generates images, and early benchmarks indicate it outperforms leading image models such as OpenAI's DALL-E 3 and Stable Diffusion.
Key Details:
Janus-Pro Model Family: The new Janus-Pro models generate high-quality images from text descriptions and are available in 1B and 7B parameter versions (see the usage sketch after this list).
Benchmark Performance: Janus-Pro outperformed DALL-E 3 and Stable Diffusion in key industry benchmarks for image quality and accuracy, such as GenEval and DPG-Bench.
Open Source: Released under an MIT license, these models allow developers to freely use and modify them for commercial projects.
Significant Launch: This launch follows DeepSeek's R1 release, which achieved o1-level reasoning capabilities at a fraction of the cost, disrupting U.S. markets and the industry.
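As a taste of the model's image-analysis side, here is a minimal sketch following the usage pattern in DeepSeek's Janus repository (github.com/deepseek-ai/Janus). The class and helper names below come from that repo's examples and should be verified against it; the image path and question are placeholders:

```python
# Sketch of image analysis with Janus-Pro-7B, based on the example code
# in DeepSeek's Janus repo; install its `janus` package first.
# All names below follow that repo's README; double-check against it.
import torch
from transformers import AutoModelForCausalLM
from janus.models import VLChatProcessor
from janus.utils.io import load_pil_images

model_path = "deepseek-ai/Janus-Pro-7B"
processor = VLChatProcessor.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.to(torch.bfloat16).cuda().eval()

conversation = [
    {"role": "<|User|>",
     "content": "<image_placeholder>\nDescribe this chart.",  # placeholder prompt
     "images": ["chart.png"]},                                # placeholder path
    {"role": "<|Assistant|>", "content": ""},
]

pil_images = load_pil_images(conversation)
inputs = processor(conversations=conversation, images=pil_images,
                   force_batchify=True).to(model.device)

# Janus builds multimodal embeddings, then decodes with its language model.
embeds = model.prepare_inputs_embeds(**inputs)
outputs = model.language_model.generate(
    inputs_embeds=embeds,
    attention_mask=inputs.attention_mask,
    pad_token_id=processor.tokenizer.eos_token_id,
    max_new_tokens=256,
)
print(processor.tokenizer.decode(outputs[0], skip_special_tokens=True))
```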
Why It Matters: DeepSeek is making headlines, and the impact of R1 is being felt throughout the markets. The world is now reevaluating assumptions about development costs and capabilities.
While the current panic might be an overreaction, the Chinese lab has raised important questions about the U.S.'s perceived lead in the AI space.
Meta's Bold $65B Bet on AI

Source: Mark Zuckerberg on Facebook
Meta CEO Mark Zuckerberg has announced a monumental $60-65 billion capital expenditure plan for 2025, focusing on AI infrastructure.
This investment aims to position Meta AI as the premier assistant and establish Llama 4 as the leading model in the industry.
Key Details:
Compute Power: Meta plans to deploy approximately 1GW of compute power in 2025, constructing a datacenter that would cover a significant portion of Manhattan.
Hardware Deployment: The company aims to accumulate over 1.3 million GPUs by the end of the year, one of the largest AI hardware deployments globally (a rough power-budget sanity check follows this list).
Investment Growth: This expenditure marks a ~70% increase from 2024's projected spending, with Zuckerberg predicting that Meta AI will reach 1 billion users this year.
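Those two figures are roughly consistent with each other. As a back-of-envelope check, assuming each accelerator draws about 700W (the TDP of an Nvidia H100; Meta's actual hardware mix isn't public):

```python
# Rough sanity check: does ~1.3M GPUs line up with ~1GW of compute?
# 700W is the TDP of an Nvidia H100; Meta's real chip mix is unknown,
# and this ignores cooling/networking overhead, so treat it as an estimate.
gpus = 1_300_000
watts_per_gpu = 700
total_gw = gpus * watts_per_gpu / 1e9
print(f"{total_gw:.2f} GW")  # ~0.91 GW, in the ballpark of the stated 1GW
```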
Contextual Background:
The announcement follows the unveilings of DeepSeek's R1 and of OpenAI's Stargate Project, which is slated to inject up to $500 billion into U.S. AI infrastructure.
Why It Matters: The competition for AI infrastructure is intensifying, with Meta and OpenAI pouring substantial capital into constructing expansive new U.S. datacenters.
This investment surge continues despite DeepSeek’s R1 breakthrough, which claims to match top industry performance at lower costs, though skeptics remain.
Qwen Unveils 1M Token Model Upgrades

Source: Google
Alibaba's Qwen team has released two open-source models with 1-million-token context windows, improved speed, and a revamped chat interface.
Key Details:
New Models: The Qwen2.5-1M series includes models with 7 billion and 14 billion parameters, both supporting 1 million token context lengths while maintaining high accuracy.
Speed Enhancement: Qwen uses a custom vLLM-based inference framework that delivers processing speeds up to 7 times faster than other long-context systems (a minimal serving sketch follows this list).
Performance: In tests, the Qwen-1M models outperformed long-context competitors such as Llama-3, GLM-4, and GPT-4 across complex long-text tasks.
Qwen Chat v0.2: The release includes an upgrade to Qwen Chat, adding capabilities for web search, text-to-video generation, and enhanced image functions.
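To give a feel for how these long-context models are served, here is a minimal vLLM sketch. The checkpoint name follows Qwen's release naming; note that the full 1M-token window requires multiple high-memory GPUs and Qwen's customized vLLM build, so the context length below is deliberately scaled down as an assumption:

```python
# Minimal long-context sketch with vLLM and Qwen2.5-7B-Instruct-1M.
# Reaching the full 1M-token window needs Qwen's customized vLLM build
# and several high-memory GPUs; max_model_len here is an assumption
# sized for a single GPU.  pip install vllm
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct-1M",
    max_model_len=131_072,  # assumption: a single-GPU-friendly window
)

document = "<paste a very long document here>"  # placeholder input
prompt = f"Summarize the key findings:\n\n{document}"
params = SamplingParams(temperature=0.7, max_tokens=512)

# llm.chat applies the model's chat template before generating.
outputs = llm.chat(
    [{"role": "user", "content": prompt}],
    sampling_params=params,
)
print(outputs[0].outputs[0].text)
```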
Why It Matters: Qwen’s open-source 1M-token models signal a significant shift in the industry, with Google’s Gemini 1.5 Pro (2M tokens) and Gemini 2.0 Flash Thinking (1M tokens) also pushing the boundaries of massive input capabilities.
These advances, coupled with faster processing, unlock superhuman levels of data analysis and enable new, complex use cases.
Thank you for reading.
Until next time, cheers!
The QubitBrew Team