Bazaroid

How do I run Llama3.2-3B-Instruct-int4-qlora-eo8 on my local PC using a CPU?

Oct 25, 2024 · 1 min read

  • LocalLLaMA
  • reddit-chad
  • Reddit

I installed the model from the official Meta website, but I want to run it from code. The download does not include a safetensors file or any of the other files normally required to load it. How do I do that?

reference : https://ai.meta.com/blog/meta-llama-quantized-lightweight-models/
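A note on the missing safetensors file: per the linked Meta blog post, these quantized Llama 3.2 checkpoints are built for on-device runtimes such as ExecuTorch, so they are not packaged in the Hugging Face `transformers`/safetensors layout; common CPU routes are ExecuTorch itself or llama.cpp with a GGUF build of the model. As a small illustration of what the "int4" part means for a CPU setup, the sketch below shows generic symmetric per-group INT4 quantization with two 4-bit weights packed per byte. This is an assumed, simplified scheme for illustration only, not Meta's exact QLoRA/EO8 recipe, and the group size of 32 is an arbitrary choice here.

```python
import numpy as np

def quantize_int4(w, group_size=32):
    """Symmetric per-group INT4 quantization (generic sketch,
    not Meta's exact QLoRA/EO8 recipe)."""
    w = w.reshape(-1, group_size)
    # One scale per group; 7 is the max magnitude representable symmetrically.
    scale = np.maximum(np.abs(w).max(axis=1, keepdims=True) / 7.0, 1e-12)
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def pack_int4(q):
    """Pack two signed 4-bit values into each byte (low nibble first)."""
    u = (q.reshape(-1) & 0x0F).astype(np.uint8)  # two's-complement nibbles
    return u[0::2] | (u[1::2] << 4)

def unpack_int4(packed):
    """Inverse of pack_int4: recover the signed 4-bit values."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = ((packed >> 4) & 0x0F).astype(np.int8)
    lo[lo > 7] -= 16  # sign-extend the 4-bit nibbles
    hi[hi > 7] -= 16
    out = np.empty(packed.size * 2, dtype=np.int8)
    out[0::2], out[1::2] = lo, hi
    return out

rng = np.random.default_rng(0)
w = rng.normal(size=(64,)).astype(np.float32)
q, scale = quantize_int4(w)
packed = pack_int4(q)  # 64 weights -> 32 bytes
assert np.array_equal(unpack_int4(packed).reshape(q.shape), q)  # lossless round trip
# Rough weight footprint of a 3B-parameter model at 4 bits per weight:
print(f"~{3_000_000_000 * 0.5 / 2**30:.1f} GiB")  # ≈ 1.4 GiB of RAM for weights
```

The halved byte count (versus int8) is why a 3B model's weights fit in under 2 GiB of RAM, which is what makes CPU-only inference practical on an ordinary desktop.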


💬 Discussion on r/LocalLLaMA (4 points, 1 comment)


