China’s Moonshot AI, which is backed by the likes of Alibaba and HongShan (formerly Sequoia China), today released a new open-source model, Kimi K2.5, which understands text, images, and video.
The company said the model was trained on 15 trillion mixed visual and text tokens, making it natively multimodal. It added that the model is well suited to coding tasks and to handling agent swarms, an orchestration pattern in which multiple agents work together. In released benchmarks, the model matches the performance of proprietary peers and even beats them on certain tasks.
For instance, on coding, Kimi K2.5 outperforms Gemini 3 Pro on the SWE-Bench Verified benchmark, and scores higher than GPT 5.2 and Gemini 3 Pro on SWE-Bench Multilingual. In video understanding, it beats GPT 5.2 and Claude Opus 4.5 on VideoMMMU (Video Massive Multi-discipline Multimodal Understanding), a benchmark that measures how well a model reasons over videos.

Moonshot AI said that on the coding front, the model does more than understand text: users can also feed it images or videos and ask it to recreate an interface shown in those media files.
To let people use these coding capabilities, the company has launched an open-source coding tool called Kimi Code, which rivals Anthropic’s Claude Code and Google’s Gemini CLI. Developers can use Kimi Code through their terminals or integrate it with development software such as VSCode, Cursor, and Zed. The startup said developers can also use images and videos as input with Kimi Code.
Coding tools have gained rapid popularity and are becoming revenue drivers for AI labs. Anthropic announced in November that Claude Code had reached $1 billion in annualized recurring revenue (ARR). Earlier this month, Wired reported that the tool had added another $100 million to that figure by the end of 2025. Moonshot’s Chinese competitor, DeepSeek, is set to release a new model with strong coding chops next month, according to a report by The Information.
Moonshot was founded by Yang Zhilin, a former Google and Meta AI researcher. The company previously raised $1 billion in a Series B round at a $2.5 billion valuation. According to Bloomberg, the startup picked up $500 million in funding last month at a $4.3 billion valuation. What’s more, the report noted that it is already seeking to raise a new round at a $5 billion valuation.
