Hi! I want to run a local LLM on my PC and I’m looking for recommendations.

My PC:

- CPU: Ryzen 5 3400G
- GPU: RTX 3060 (12GB VRAM)
- RAM: 24GB (2x8GB DDR4 3600MHz + 1x8GB DDR4 2666MHz)

What I want:

- Free models, as “uncensored” as possible
- Good Portuguese performance
- Recent and strong overall quality
- Ability to fine-tune (LoRA)
- If possible, web browsing via a tool/integration

Limit: up to 13B (4-bit).

Which models do you recommend, and which quantization/format (GGUF, GPTQ, AWQ, etc.) works best on this setup?
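As a quick sanity check on the 13B / 4-bit limit (my own back-of-envelope math, not something from the post): a 13B model at roughly 4.5 effective bits per weight, which is about what GGUF Q4_K_M uses, should fit comfortably in 12GB of VRAM with room left for the KV cache.

```python
# Back-of-envelope VRAM estimate for a 13B model at 4-bit quantization.
# Assumed numbers (mine, not from the post): ~4.5 effective bits per
# weight (roughly GGUF Q4_K_M) and ~2 GB extra for KV cache and buffers.

params = 13e9                  # 13B parameters
bits_per_weight = 4.5          # effective bits incl. quantization overhead
weights_gb = params * bits_per_weight / 8 / 1e9
overhead_gb = 2.0              # KV cache + runtime buffers (rough guess)
total_gb = weights_gb + overhead_gb

print(f"weights: {weights_gb:.1f} GB, total: {total_gb:.1f} GB")
```

Under these assumptions the weights alone come to about 7.3 GB, so the full load stays under the RTX 3060's 12GB, with less headroom as context length (and thus KV cache) grows.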


💬 Discussion r/LocalLLaMA (11 points, 9 comments)