How to feed any PDF, URL, or video into LLaVA (and other vision-language models)

May 4, 2024 · 1 min read

  • LocalLLaMA
  • reddit-chad
  • Reddit

💬 Discussion on r/LocalLLaMA (36 points, 1 comment) 🔗 Source
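The common trick behind feeding PDFs, URLs, and videos into an image-only model like LLaVA is to rasterize everything into PNG frames first, then pass those images to the model. The sketch below illustrates that dispatch step; the specific tools it shells out to (pdftoppm, headless Chromium, ffmpeg) and the function name are this sketch's own assumptions, not a pipeline prescribed by the linked post.

```python
from pathlib import Path

def to_image_cmd(source: str) -> list[str]:
    """Build a command that renders `source` into PNG image(s) suitable
    for a vision-language model. Tool choices are illustrative assumptions."""
    if source.startswith(("http://", "https://")):
        # Screenshot a web page with headless Chromium.
        return ["chromium", "--headless", "--screenshot=page.png", source]
    suffix = Path(source).suffix.lower()
    if suffix == ".pdf":
        # Rasterize each PDF page to page-1.png, page-2.png, ... (Poppler).
        return ["pdftoppm", "-png", source, "page"]
    if suffix in {".mp4", ".mkv", ".webm", ".mov"}:
        # Sample one frame per second from the video.
        return ["ffmpeg", "-i", source, "-vf", "fps=1", "frame_%04d.png"]
    raise ValueError(f"unsupported source: {source}")
```

In practice you would run the returned command with `subprocess.run(cmd, check=True)` and then hand the resulting PNGs to the model (for example, as the image attachments of a local LLaVA runner).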
