Alongside the release of the Kimi K2.5 model, we are open-sourcing the Kimi Vendor Verifier (KVV) project, designed to help users of open-source models verify the accuracy of their inference implementations.
Not as an afterthought, but because we learned the hard way that open-sourcing a model is only half the battle. The other half is ensuring it runs correctly everywhere else.
(Work in progress. Official ground truth will be published within two days of the model's open-source release. Vendor evaluations will begin as soon as each provider completes deployment, and results will be updated in real time.)
From Isolated Incidents to Systemic Issues
Since the release of K2 Thinking, we have received frequent feedback from the community regarding anomalies in benchmark scores. Our investigation confirmed that a significant portion of these cases stemmed from the misuse of decoding parameters. To mitigate this immediately, we built our first line of defense at the API level: enforcing temperature=1.0 and top_p=0.95 in Thinking mode, with mandatory validation that thinking content is correctly passed back.
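For illustration, a minimal client-side sketch that conforms to these constraints against an OpenAI-compatible endpoint might look like the following; the base URL, model id, and the reasoning_content field name are assumptions that may differ between providers.

```python
import requests

BASE_URL = "https://api.moonshot.ai/v1"   # illustrative; point at the endpoint under test
API_KEY = "YOUR_API_KEY"

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]

payload = {
    "model": "kimi-k2-thinking",          # illustrative model id
    "messages": messages,
    "temperature": 1.0,                   # enforced server-side in Thinking mode
    "top_p": 0.95,                        # enforced server-side in Thinking mode
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=600,
)
resp.raise_for_status()
msg = resp.json()["choices"][0]["message"]

# On follow-up turns (e.g. tool-calling loops), send the thinking content back
# unchanged; the server-side validation checks that it is present and intact.
messages.append({
    "role": "assistant",
    "content": msg.get("content"),
    "reasoning_content": msg.get("reasoning_content"),  # field name may vary by provider
})
```

The point of the last step is that the thinking content returned by the model is sent back unchanged on subsequent turns, which is exactly what the server-side validation checks for.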
However, more subtle anomalies soon raised alarms. In a specific evaluation on LiveBenchmark, we observed a stark contrast between a third-party API and the official API. After extensive testing across infrastructure providers, we found that this discrepancy is widespread.
This exposed a deeper problem in the open-source model ecosystem: the more open the weights and the more diverse the deployment channels, the harder quality becomes to control.
If users cannot distinguish between "model capability defects" and "engineering implementation deviations," trust in the open-source ecosystem will inevitably collapse.
Responsible Open Source: Co-construction over Control
We faced a choice: Should we restrict the model license to require official certification for commercial API sales?
After careful consideration, we rejected the path of restriction and chose instead to establish the Vendor Verifier project. Our philosophy is "Co-construction over Control; Influence over Force": we aim to move the community and users toward prioritizing accuracy through transparency and standardized testing.
Six Critical Benchmarks (selected to expose specific infra failures):
Upstream Fixes: We work directly with the vLLM, SGLang, and KTransformers communities to fix root causes, not just detect symptoms.
Pre-Release Validation: Rather than waiting for post-deployment complaints, we give infrastructure providers early access to models for testing, so they can validate their stacks before users encounter issues.
Continuous Benchmarking: We will maintain a public leaderboard of vendor results. This transparency encourages vendors to prioritize accuracy.
We validated the full evaluation workflow on two NVIDIA H20 8-GPU servers; sequential execution takes approximately 15 hours. To improve evaluation efficiency, the scripts are optimized for long-running inference, with streaming inference, automatic retries, and checkpoint resumption.
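As a rough illustration of what such a loop looks like, here is a minimal Python sketch combining streaming requests, automatic retries with backoff, and checkpoint resumption; the endpoint, model id, prompt set, and file layout are illustrative assumptions rather than the actual KVV scripts.

```python
import json
import time
from pathlib import Path

import requests

BASE_URL = "https://api.moonshot.ai/v1"    # illustrative; point at the endpoint under test
API_KEY = "YOUR_API_KEY"
MODEL = "kimi-k2-thinking"                 # illustrative model id
CHECKPOINT = Path("results.jsonl")         # one completed sample per line
MAX_RETRIES = 3

# Illustrative stand-in for a benchmark's prompt set.
SAMPLES = [
    {"id": "demo-1", "prompt": "What is 17 * 24?"},
    {"id": "demo-2", "prompt": "Name the capital of France."},
]

def completed_ids(path: Path) -> set:
    """Read the checkpoint file so a restarted run skips finished samples."""
    if not path.exists():
        return set()
    return {json.loads(line)["id"] for line in path.read_text().splitlines() if line.strip()}

def stream_once(prompt: str) -> str:
    """One streaming request; streaming keeps the connection alive on long generations."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1.0,
        "top_p": 0.95,
        "stream": True,
    }
    chunks = []
    with requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        stream=True,
        timeout=600,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            delta = json.loads(data)["choices"][0]["delta"]
            chunks.append(delta.get("content") or "")
    return "".join(chunks)

def run_one(prompt: str) -> str:
    """Automatic retry with exponential backoff around the streaming call."""
    for attempt in range(MAX_RETRIES):
        try:
            return stream_once(prompt)
        except Exception:
            if attempt == MAX_RETRIES - 1:
                raise
            time.sleep(2 ** attempt)

done = completed_ids(CHECKPOINT)
with CHECKPOINT.open("a") as out:
    for sample in SAMPLES:
        if sample["id"] in done:
            continue                       # checkpoint resumption: skip finished work
        output = run_one(sample["prompt"])
        out.write(json.dumps({"id": sample["id"], "output": output}) + "\n")
        out.flush()                        # persist progress immediately
```

Writing each result to disk as soon as it finishes means a crash or restart loses at most the in-flight sample, which is what makes a roughly 15-hour sequential run practical.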
Weights are open. The knowledge to run them correctly must be too.
We are expanding vendor coverage and seeking lighter-weight agentic tests. Contact us: [email protected]