Volpeon @volpeon@icy.wyvern.rip
1y
Experimenting with *local* LLMs: @Legion495 I use an Nvidia RTX A4500 with 20 GB of VRAM. I have to rely on quantized versions of models, which need less (V)RAM, but they work reasonably well.
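A quick back-of-the-envelope sketch of why quantization matters here (the 13B model size is a hypothetical example, not the model the post refers to): weight memory is roughly parameter count times bytes per parameter, so dropping from 16-bit to ~4-bit weights cuts VRAM use by about 4x.

```python
# Rough VRAM estimate for model *weights* at different precisions.
# Illustrative assumptions only; real usage also includes KV cache, activations, etc.
def weight_vram_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate GB needed to hold the weights alone."""
    return n_params_billion * bytes_per_param  # billions of params * bytes each = GB

# Hypothetical 13B-parameter model:
fp16 = weight_vram_gb(13, 2.0)   # 16-bit floats: 26.0 GB -> does not fit in 20 GB
q4   = weight_vram_gb(13, 0.5)   # ~4-bit quantization: 6.5 GB -> fits comfortably

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.1f} GB")
```

So on a 20 GB card, an unquantized 13B model's weights alone would already overflow VRAM, while a 4-bit quant leaves plenty of headroom.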