You can run a model that beats 4o, which was released less than six months ago, _locally_! I know this requires a ton of hardware, but I'd wager OpenAI won't be the leader in 2025. Always bet on open source (or rather, on somewhat more open development strategies).
Math and coding performance is what we really care about. I'm paying for both o1 Pro and Sonnet; in my experience, besides being faster, Sonnet is also better at many tasks. In a few instances I did get good answers from o1 Pro, but it's not justifying the price, so I'm cancelling and going back to $20/mo.
I'm currently paying for Cursor, Claude, ChatGPT, and v0! The productivity I'm gaining from those tools is totally worth it (except for o1 Pro). But I'm really hoping those tools converge at some point so I can pay less. For instance, I'm looking forward to VSCode Copilot improvements so I can go back to VSCode, and once Claude has no limits I'd rather pay for one AI system.
OpenAI toppled as LLM leader by an open-source / open-weight company?
OpenAI has much more capital and compute than any of its competitors (especially DeepSeek); if that were to happen, it would demonstrate that capital and compute don't matter as much as is assumed... (and it just might be the thing that pops the current AI bubble).
Until the models can host themselves, they'll always need a company to make the experience good enough for typical users; OpenAI can always host open-source models instead of their own, and their user base will mostly stick around, especially if they can leverage their existing base into a network effect. I wouldn't be surprised if they are investing heavily in this versus pure model hosting.
I'm thinking their real challenge will be surviving Apple (once they go all in) or Google (if they can figure out how to make a good product). Or something along those lines.
> OpenAI has much more capital and compute than any of its competitors
Isn't OpenAI still losing money? I don't think they own any data centers.
Well yes, locally, if you assume someone's got about $300,000 of hardware at hand... right? Since you're not paying for Gemini, may I ask why? Did you try it and find it inferior?
I bought two (relatively) old datacenter GPUs with 48 GB VRAM total for €200, which gets me 7 tokens/s on a 70B model.
which GPUs?
Not the GP, but I bought a few P40s over the summer for $150 each. Last I checked they're more expensive now, but it's still cheap VRAM and fast enough at inference for me.
You actually can't pay for the latest models; they're only available for free, with limits.
Gemini for coding does not work for me. It gets so many things wrong.
You should try again. Gemini rates highest on coding at lmarena.
Someone pointed out on Reddit that DeepSeek V3 is 53x cheaper to run inference on than Claude Sonnet, which it trades blows with in the benchmarks. As we saw with o3, compute cost to hit a certain benchmark score will become an important number now that we're in an era where you can throw an arbitrary amount of test-time compute at a problem to hit an arbitrary benchmark number.
https://old.reddit.com/r/LocalLLaMA/comments/1hmm8v9/psa_dee...
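Quick sanity check of that 53x figure, using the per-million-token prices quoted elsewhere in this thread (a rough sketch, not an official comparison):

```python
# Quick sanity check of the "53x cheaper" claim, using the
# per-million-token prices quoted elsewhere in this thread.
deepseek = {"input": 0.14, "output": 0.28}   # $/M tokens, DeepSeek V3
sonnet = {"input": 3.00, "output": 15.00}    # $/M tokens, Claude 3.5 Sonnet

for kind in ("input", "output"):
    print(f"{kind}: {sonnet[kind] / deepseek[kind]:.1f}x cheaper")
# input: 21.4x cheaper
# output: 53.6x cheaper  <- the ~53x figure matches output pricing
```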
How is this not on the front page? It's a remarkable release.
Noticed the same thing. DeepSeek-V3 is remarkable (beats 4o/Claude), but it's not on the front page.
It seems they don't want China to win, haha.
In the introduction of the paper it says: "Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks." They indeed have a very strong infra team.
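For scale: at the ~$2 per H800 GPU-hour rental rate the paper itself assumes, that works out to roughly $5.6M for the full run. A rough sketch:

```python
# Rough training-cost estimate from the figures quoted above.
gpu_hours = 2.788e6   # H800 GPU hours for the full training run
rate = 2.00           # $/GPU-hour rental rate assumed in the paper

print(f"~${gpu_hours * rate / 1e6:.2f}M")  # ~$5.58M
```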
Truly remarkable! Their approach to distributed inference is on an entirely new level. For the prefill stage, they use a deployment unit comprising 32 H800 GPUs, while the decoding stage scales up to 320 (!) H800 GPUs per unit. It incorporates a multitude of sophisticated parallelization and communication-overlap techniques, setting a standard that's rarely seen in other setups.
[0] https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSee...
Pricing per million tokens: $0.14 input / $0.28 output.
It still fails my private physics test question half the time, where Claude 3.5 Sonnet and OpenAI o1 (both web versions) pass most of the time. So I'd say close to SOTA, but not quite. However, given that DeepSeek already has the R1 Lite preview, and that they can achieve comparable performance for much less compute (assuming the API costs of closed models roughly represent their inference costs), it's not unreasonable to believe DeepSeek may be close to releasing a very good test-time-compute scaling model similar to o3 at high effort.
What is the DeepSeek team? Who is making this?
From @kevinsxu on twitter:
Some interesting facts about DeepSeek:
- never received/sought outside funding (thus far)
- self-funded out of a hedge fund (called High-Flyer)
- entire AI team is reportedly recruited from within China, no one who's worked at a foreign company
- founder is classmates with the founder of DJI, both studied at Zhejiang University
Already available at OpenRouter: https://openrouter.ai/deepseek/deepseek-chat
Cost / million tokens: Input $0.14 Output $0.28
For comparison, Claude 3.5 Sonnet (my favorite model for coding tasks) is: Input $3 Output $15.
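If anyone wants to try it, here's a minimal sketch against OpenRouter's OpenAI-compatible endpoint (the env var name is just a placeholder; adjust to however you store keys):

```python
# Minimal sketch: calling DeepSeek V3 through OpenRouter's
# OpenAI-compatible API. OPENROUTER_API_KEY is a placeholder name.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="deepseek/deepseek-chat",  # model id from the listing above
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(resp.choices[0].message.content)
```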
The benchmark results seem unrealistically good, but I'm not sure from which angles I should challenge them.
I think they're real. The model is performing better than claude-3-5-sonnet-20241022 on the aider leaderboard:
https://aider.chat/docs/leaderboards/
> a strong Mixture-of-Experts (MoE) language model with 671B total parameters with 37B activated for each token.
What kind of hardware do you need to run this?
8x H200s recommended:
https://github.com/sgl-project/sglang/tree/main/benchmark/de...
They discuss it in the paper and recommend 32 GPUs (H800 in their case) for the prefill stage and 320 GPUs for decoding.
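For intuition on why it takes that much hardware: all 671B parameters have to be resident in VRAM even though only 37B are active per token. A back-of-envelope sketch (ignoring KV cache, activations, and framework overhead):

```python
# Why 8x H200 is a plausible minimum: weight memory alone.
total_params = 671e9    # total MoE parameters (37B active per token)
bytes_per_param = 1     # FP8 weights, the model's native precision
h200_vram_gb = 141      # HBM3e capacity per H200

print(f"weights: ~{total_params * bytes_per_param / 1e9:.0f} GB")  # ~671 GB
print(f"8x H200: {8 * h200_vram_gb} GB")                           # 1128 GB
```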
=)
I'm using their API - the model is referenced by `deepseek-chat` and works really well. I'm seeing more intelligent responses to my users' inputs, and better adherence to the "spirit" of what I was trying to accomplish with the prompt. This is so exciting!
Take note of their suggested temperatures! https://api-docs.deepseek.com/quick_start/parameter_settings
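Something like this with the OpenAI-compatible client (env var name is a placeholder; the linked page gives the recommended temperature for each use case, e.g. low for coding):

```python
# Sketch of a direct DeepSeek API call with an explicit temperature.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepseek.com",      # OpenAI-compatible endpoint
    api_key=os.environ["DEEPSEEK_API_KEY"],   # placeholder env var name
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    temperature=0.0,  # low temperature for deterministic coding tasks
    messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
)
print(resp.choices[0].message.content)
```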
The results look quite promising. I will give this a try...