Bazaroid


RTX A6000 vs RTX 6000 ADA for LLM inference, is paying 2x worth it?

Apr 4, 2024 · 1 min read

  • LocalLLaMA
  • reddit-stealth
  • Reddit

💬 Discussion on r/LocalLLaMA (33 points, 28 comments) 🔗 Source

