RE: Snaps Container // 2/5/2026, 2:36:00 PM

You are using a 20b-parameter gpt-oss model lmfao

Fair point. I already had the gpt-oss:20b model loaded, so that was the first one I tested with to make sure Ollama was successfully talking to Claude.
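
For anyone wanting to run the same kind of sanity check, something like this works (a minimal sketch, assuming Ollama's default localhost:11434 endpoint and that the model tag is already pulled):

```python
import requests

# One non-streamed completion against the local Ollama server.
# Endpoint and model tag are assumptions; adjust for your setup.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gpt-oss:20b",
        "prompt": "Reply with the single word: pong",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # any sane reply means the model is serving
```

If that prints something, the server side is fine and any remaining issues are in the Claude integration.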

But with my hardware, I can't really run models bigger than the 20b-30b range.

Thoughts on qwen3-coder:30b?

Well, they're definitely not the models people have in mind when they talk about AI taking over.

Just get a $20 ChatGPT sub and use Codex CLI. It would be miles better than anything you can run on your own hardware.

I don't disagree. I have a Perplexity Pro account I use for LLM purposes. But as I build out my home lab and everything else, I'm trying to work through how to set up local AI on consumer gear, not just enterprise hardware for work.

My 5070 Ti is small but mighty 🤣
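
For a rough sense of why the 20b-30b range is the ceiling: a 5070 Ti has 16 GB of VRAM, and a back-of-envelope estimate (just a sketch, assuming roughly 4-bit quantized weights plus a fixed allowance for KV cache and runtime overhead) lands right around that limit:

```python
# Rough VRAM estimate for a locally hosted quantized model.
# All numbers are ballpark assumptions, not measured values.
def vram_gb(params_billion: float, bits_per_weight: float = 4.5) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # weights alone
    overhead_gb = 2.0  # assumed KV cache + runtime overhead; grows with context
    return weights_gb + overhead_gb

for b in (20, 30, 70):
    print(f"{b}B params -> ~{vram_gb(b):.1f} GB")
# 20B -> ~13.2 GB (fits in 16 GB)
# 30B -> ~18.9 GB (tight; needs partial CPU offload)
# 70B -> ~41.4 GB (not happening on a single consumer card)
```

Past ~30B you're offloading layers to system RAM, and token throughput drops off a cliff.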