AI Funding Frenzy: DeepSeek Hits $45B, Kimi Raises $2B as Capital Floods the Model Race
DeepSeek: $45 Billion in Two Weeks
After its V3 model swept the internet, DeepSeek has entered the capital spotlight with force. The company is reportedly in a new funding round led by major state-backed funds, and its valuation has doubled to $45 billion in two weeks. Tencent and Alibaba are said to be considering follow-on investments.
The speed is what stands out. Most companies take quarters to double their valuation; DeepSeek did it in weeks. The company also updated its technical paper to disclose the compute infrastructure behind its models — a signal that it is comfortable being transparent about what was previously treated as a trade secret.
Kimi: China’s Best-Funded Model Company
Moonshot AI, maker of the long-context model Kimi, secured $2 billion in a round led by Meituan Longzhu, with multiple top-tier institutional investors participating. The post-money valuation crossed $20 billion, making Moonshot the best-funded company in China's large-model sector by cumulative capital raised.
The money is earmarked for deepening Kimi's long-context technology — the ability to process extremely long documents in a single pass, which has become its signature differentiator. In a market where most models compete on benchmark scores, Kimi picked a capability that maps directly to enterprise document-processing needs.
The Enterprise Battleground
While Chinese companies raise capital, OpenAI and Anthropic are fighting for enterprise entry points. The competition has shifted from model parameters to distribution — who controls the office suite, the developer IDE, the customer-support dashboard.
This is no longer a research race. It’s a land grab for the interfaces where work actually happens, and the winners will be decided not by benchmark scores but by integration depth and switching costs.
What the Money Means
The simultaneous funding events point to a market that is pricing AI companies not on current revenue but on a future where model access is as fundamental as cloud compute. Whether those bets pay off depends on a question nobody can answer yet: how fast do enterprises actually adopt AI-native workflows versus experimenting with chatbots?