Show HN: Can I run this LLM? (locally)
can-i-run-this-llm-blue.vercel.app

One of the most frequent questions one faces while running LLMs locally is: I have xx RAM and yy GPU, can I run zz LLM model? I have vibe coded a simple application to help you with just that.
Update: A lot of great feedback for me to improve the app. Thank you all.
Nice concept – but unfortunately I found it to be incorrect in all of the examples I tried with my Mac.
It'd also need to be much more precise in hardware specs and cover a lot more models and their variants to be actually useful.
Grading the compatibility is also an absolute requirement – it's rarely an absolute yes or no, but often a question of available GPU memory. There are a lot of other factors too which don't seem to be considered.
> I found it to be incorrect in all of the examples I tried
Are you sure it's not powered by an LLM inside?
I believe it'd be more precise if it used an appropriately chosen and applied LLM in combination with web research – in contrast to juggling together some LLM generated code.
Confirmed. Nice idea but it doesn't really define "run". I can run some relatively large models compared to their choices. They just happen to be slow.
And herein lies the problem with vibe coding - accuracy is wanting.
I can absolutely run models that this site says cannot be run. Shared RAM is a thing - even with limited VRAM, shared RAM can compensate to run larger models. (Slowly, admittedly, but they work.)
New word for me: vibe coding
> coined the term in February 2025
> Vibe coding is a new coding style [...] A programmer can describe a program in words and get an AI tool to generate working code, without requiring an understanding of the code. [...] [The programmer] surrenders to the "vibes" of the AI [without reading the resulting code.] When errors arise, he simply copies them into the system without further explanation.
https://en.wikipedia.org/wiki/Vibe_coding
Austen Allred sold a group of investors on the idea that this was the future of everything.
https://www.gauntletai.com/
Also, quantization and allocation strategies are a big deal for local usage. 16GB of VRAM doesn't seem like a lot, but you can run a recent 32B model in IQ3 with its full 128k context if you allocate the KV cache in system memory, with 15 t/s and decent prompt processing speed (just above 1000 t/s on my hardware).
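To make that concrete, here is a back-of-envelope split in Python; the ~3.5 bits/weight for IQ3, the layer count, KV-head count and head dimension are illustrative assumptions for a 32B-class GQA model, not measured numbers:

```python
# Rough split: quantized weights on the GPU, KV cache in system RAM.

def weights_gb(params_b, bits_per_weight):
    """Approximate weight footprint in GB for params_b billion parameters."""
    return params_b * bits_per_weight / 8

def kv_cache_gb(layers, kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Approximate KV cache size in GB (keys + values, fp16 by default)."""
    return 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem / 1e9

# Assumed 32B-class config: 64 layers, 8 KV heads (GQA), head dim 128, 128k context.
print(f"weights on GPU : {weights_gb(32, 3.5):.1f} GB")               # ~14 GB, fits a 16 GB card
print(f"KV cache in RAM: {kv_cache_gb(64, 8, 128, 128_000):.1f} GB")  # ~33.6 GB in system memory
```

So the weights alone squeeze onto a 16 GB card while the context lives in ordinary RAM, which is exactly the allocation trick described above.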
Thanks for your feedback; there is room to show how fast or slow the model will run. I will try to update the app.
Yes, I agree that you can run them. I have personally run Ollama on a 2020 Intel MacBook Pro. It's not a problem of vibe coding, but of the choice of logic I went with.
> Can I Run DeepSeek R1
> Yes, you can run this model! Your system has sufficient resources (16GB RAM, 12GB VRAM) to run the smaller distilled version (likely 7B parameters or less) of this model.
Last I checked DeepSeek R1 was a 671B model, not a 7B model. Was this site made with AI?
> Was this site made with AI?
OP said they "vibe coded" it, so yes.
https://en.m.wikipedia.org/wiki/Vibe_coding
Goodness. I love getting older and seeing the ridiculousness of the next generation.
It says “smaller distilled model” in your own quote which, generously, also implies quantized.
Here[0] are some 1.5B and 8B distilled+quantized derivatives of DeepSeek. However, I don’t find a 7B model; that seems totally made up from whole cloth. Also, I personally wouldn’t call this 8B model “DeepSeek”.
0: https://www.reddit.com/r/LocalLLaMA/comments/1iskrsp/quantiz...
> > smaller distilled version
Not technically the full R1 model; it's talking about the distillations, where DeepSeek trained Qwen and Llama models on R1 output.
Then how about DeepSeek R1 GGUF:
> Yes, you can run this model! Your system has sufficient resources (16GB RAM, 12GB VRAM) to run this model.
No mention of distillations. This was definitely either made by AI, or someone picking numbers for the models totally at random.
Ok yeah that’s just weird
lol words out of my mouth
Is it maybe because DeepSeek is a MoE and doesn't require all parameters for a given token?
That's not ideal from a token throughput perspective, but I can see gains in the minimum working set of weight memory if you can load the relevant pieces into VRAM for each token.
It still wouldn't fit in 16 GB of memory. Further, there's too much swapping going on with MoE models to move expert layers to and from the GPU without bottlenecks.
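A back-of-envelope check supports that, assuming the commonly cited figures of roughly 671B total and 37B active parameters per token for R1, at 4-bit weights (both numbers are assumptions here, not anything the site states):

```python
# Rough sizing of why MoE sparsity doesn't rescue a 16 GB RAM / 12 GB VRAM box.
TOTAL_B, ACTIVE_B = 671, 37  # assumed: total vs. per-token active parameters, in billions

def gb(params_b, bits):
    """Approximate footprint in GB at a given bit width."""
    return params_b * bits / 8

print(f"all weights  @ 4-bit: {gb(TOTAL_B, 4):.0f} GB")   # ~336 GB has to live somewhere
print(f"active/token @ 4-bit: {gb(ACTIVE_B, 4):.1f} GB")  # ~18.5 GB, and it changes every token
```

Even the per-token slice is bigger than 12 GB of VRAM, and the set of active experts shifts from token to token, hence the constant shuffling.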
This doesn't mention quantisations. Also, it says I can run R1 with 128GB of RAM, but even the 1.58-bit quantisation takes 160GB.
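That lines up even with a naive weights-only lower bound, ignoring that real 1.58-bit quants mix precisions and that you still need headroom for the KV cache and runtime:

```python
# Naive lower bound for a 671B-parameter model at 1.58 bits per weight.
params = 671e9
print(f"{params * 1.58 / 8 / 1e9:.0f} GB")  # ~133 GB for the weight tensors alone
```

Already over 128 GB before a single token of context.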
This just isn’t right. It says I can run a 400B+ parameter model on my M4 with 128GB. This is false, even at high quantization.
> One of the most frequent questions one faces while running LLMs locally is: I have xx RAM and yy GPU, can I run zz LLM model?
In my experience, LM Studio does a pretty great job of making this a non-issue. Also, whatever heuristics this site is based on are incorrect — I'm running models on a 64GB Mac Studio M1 Max that it claims I can't.
Mhmm... AMD APUs don't have a dedicated GPU but can run models up to 14B quite fast.
How exactly does the tool check? Not sure it's that useful, since simply estimating via the parameter count is a pretty good proxy, and then using Ollama to download a model for testing works out pretty nicely.
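The parameter-count proxy really is about this much code; the 4-bit default and the 1.2x overhead factor for KV cache/runtime are rough assumptions, not anything the site documents:

```python
def fits(params_b, mem_gb, bits_per_weight=4, overhead=1.2):
    """Crude check: do quantized weights plus some headroom fit in mem_gb?"""
    need_gb = params_b * bits_per_weight / 8 * overhead
    return need_gb <= mem_gb

print(fits(7, 12))   # 7B at 4-bit against 12 GB of VRAM  -> True
print(fits(32, 12))  # 32B at 4-bit against 12 GB of VRAM -> False
```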
I think I would like it if it also provided benchmarks. The question I have is less "can I run this model?" and more "what is the most performant (on some metric) model I can run on my current system?"
even add quantized models
- When you press the refresh button, it loads data from huggingface.co/api, doing the same request seemingly 122 times within one second or so
- When I select "no dedicated GPU" because mine isn't listed, it'll just answer the same "you need more (V)RAM" for everything I click. It might as well color those models red in the list already, or at minimum show the result without having to click "Check" after selecting everything. The UX flow isn't great
- I have 24GB RAM (8GB fixed soldered, extended with 1x16GB SO-DIMM), but that's not an option to select. Instead of using a dropdown for a number, maybe make it a numeric input field, optionally with a slider like <input type=range min=1 max=128 step=2>, or mention whether to round up or down when one has an in-between value (I presume down? I'm not into this yet, that's why I'm here / why this site sounded useful)
- I'm wondering if this website can just be a table with like three columns (model name, minimum RAM, minimum VRAM). To answer my own question, I tried checking the source code but it's obfuscated with no source map available, so not sure if this suggestion would work
- Edit2: while the tab is open, one CPU core is at 100%. That's impressive: browsers are supposed not to let a page fire code more than once per second when the tab is not in the foreground, and if it were an infinite loop the page would hang. WTF is this doing? When I break in the debugger at a random moment, it's in scheduler.production.min.js according to the comment above the place where it drops me </edit2>.
Edit: thinking about this again...
what if you flip the whole concept?
1. Put in your specs
2. It shows a list of models you can run
The list could be sorted descending by size (presuming that loosely corresponds to best quality, per my layperson understanding). At the bottom, it could show a list of models that the website is aware of but that your hardware can't run (roughly the filter sketched below).
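Something like this, as a sketch; the model table and the "weights must fit in RAM + VRAM" rule are stand-in assumptions, not the site's actual heuristic:

```python
# Flipped flow: specs in, runnable models out, biggest first.
MODELS_GB = {            # name -> rough 4-bit weight footprint in GB (illustrative guesses)
    "llama-3.1-8b": 5,
    "qwen2.5-32b": 18,
    "llama-3.1-70b": 40,
}

def runnable(ram_gb, vram_gb):
    budget = ram_gb + vram_gb  # crude: treat all memory as one pool
    fits = [(size, name) for name, size in MODELS_GB.items() if size <= budget]
    return [name for size, name in sorted(fits, reverse=True)]

print(runnable(ram_gb=24, vram_gb=0))  # ['qwen2.5-32b', 'llama-3.1-8b']
```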
Thanks for the feedback, will check and update if there are any bugs causing multiple calls.
In case it's relevant, I'm using Firefox
UX whine: why do I have to click “Check compatibility”? After I pick the type and RAM, you instantly know all the models. Just list the compatible ones!
Are people really complaining about having to click a button now? Do you really expect dynamic Node.js-type cruft by default?
This is where I hope HN has a downvote option. This is not erroneous to the point I want to flag, but the quality is low enough that I want to counteract the upvotes. This is akin to spam in my opinion.
It doesn't have any of my Nvidia GPUs or my AMD GPUs in the list, and then it always tells me I need more VRAM since I can't select a GPU.
Thanks for the feedback. I need to refresh the list or load it dynamically.
128 GB of CPU memory seems to be a bit of a low upper limit. Maybe it could be increased?
Also, my Mac with 36 GB of memory can't be selected.
I have Mac options going up to 512 GB of memory, since the latest Mac Studio launched last week supports 512 GB of unified memory.
Neat. I'd prefer it to just show what models I can run, though, rather than saying whether I can or cannot run a specific one.
Thanks for the feedback, that's a good idea.
You forgot to define what constitutes “running”. And people have different expectations.
Agree, the model assumes a multitasking setup where you need some leftover RAM for other tasks. You can squeeze in much larger models when running dedicated.
It would be a lot nicer if it didn't just give a binary "can/can't run" flag but also told you what to expect.
Ideal scenario (YMMV): add more hardware parameters (like chipset, CPU, actual RAM type/timings, with presets for the most common setups) and extra model settings (quantization and context size come to mind), then answer like this: "you have sufficient RAM to load the model, and you should expect performance around 10 tok/sec with 3s to the first token". Or maybe list all the models you know about and provide expected performance for each. Inverse search ("what rig do I need to run this model with at least this performance") would also be very cool. It might also be nice to parse the output of common system information tools (like Windows wmic/Get-ComputerInfo, macOS system_profiler or GNU/Linux dmidecode; not sure all the info is there, but as a rough idea: give the user some commands to run, then parse their output in search of specs).
Of course, this would be very non-trivial to implement and you'll probably have to dig a lot for anecdotal data on how various hardware performs (hmm... maybe a good task for an agentic LLM?), but that would actually make this a serious tool that people can use and link to, rather than a toy. A crude version of the throughput estimate is sketched below.
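For the tok/sec guess specifically, there is at least a first-order heuristic to seed such a tool: decoding is mostly memory-bandwidth bound, so throughput is roughly bandwidth divided by the bytes touched per generated token (about the quantized weight size for a dense model). The bandwidth figures below are ballpark assumptions, not a hardware database:

```python
def est_tokens_per_s(model_gb, mem_bandwidth_gb_s):
    """Very rough decode speed: bandwidth / bytes read per generated token."""
    return mem_bandwidth_gb_s / model_gb

print(f"{est_tokens_per_s(18, 400):.0f} t/s")  # ~18 GB model on ~400 GB/s memory (M1 Max-class)
print(f"{est_tokens_per_s(18, 60):.0f} t/s")   # same model on ~60 GB/s dual-channel DDR5
```

Prompt-processing speed and time-to-first-token are compute-bound and much harder to guess this way, which is where the anecdotal data would come in.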
I have run DeepSeek R1 on my PC with 128 gigs of RAM effortlessly.
Cool idea. iPhone support would be great too! And function-calling tools.
AI generated app to generate AI generated trash?
Great, but it should have image and video models too.
I have found the stoplight chart on https://www.canirunthisllm.net/ to be the most useful one of these types of calculators.
OP, did you even do a simple smoke test of all the options beforehand? None of this works well.
[flagged]
No, I don't see how your post is related. Maybe I am just arguing with a bot?
You are. There are a couple accounts spamming this link today.