Mirror of https://github.com/oobabooga/text-generation-webui.git
This works on a 4 GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
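The `--gptq-pre-layer` value is the number of transformer layers allocated to the GPU; the remaining layers run on the CPU, which is what lets 4-bit LLaMA-7B fit on a 4 GB card. Below is a minimal sketch of that split, not the webui's actual `GPTQ_loader.py` offload code: `TinyBlock`, `split_layers`, and `run_split` are illustrative stand-ins.

```python
# A minimal sketch of pre-layer offloading, assuming the flag means
# "number of layers to keep on the GPU". TinyBlock, split_layers and
# run_split are illustrative stand-ins, not the webui's actual code.
import torch
import torch.nn as nn


class TinyBlock(nn.Module):
    """Placeholder for one transformer block."""
    def __init__(self, hidden=64):
        super().__init__()
        self.ff = nn.Linear(hidden, hidden)

    def forward(self, x):
        return torch.relu(self.ff(x))


def device_for(i, pre_layer):
    """First `pre_layer` blocks go to the GPU (if present), the rest to the CPU."""
    return "cuda:0" if i < pre_layer and torch.cuda.is_available() else "cpu"


def split_layers(layers, pre_layer):
    """Distribute the blocks between GPU and CPU memory."""
    for i, block in enumerate(layers):
        block.to(device_for(i, pre_layer))
    return layers


def run_split(layers, x, pre_layer):
    """Move activations to each block's device before running it."""
    for i, block in enumerate(layers):
        x = x.to(device_for(i, pre_layer))
        x = block(x)
    return x


if __name__ == "__main__":
    pre_layer = 20  # same value as in the command above
    blocks = nn.ModuleList([TinyBlock() for _ in range(32)])  # LLaMA-7B has 32 blocks
    split_layers(blocks, pre_layer)
    print(run_split(blocks, torch.randn(1, 8, 64), pre_layer).shape)
```

With only 20 of 32 blocks resident in VRAM, the quantized weights fit on a 4 GB card, at the cost of slower generation since the tail of the network runs on the CPU.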
| File |
|---|
| callbacks.py |
| chat.py |
| deepspeed_parameters.py |
| extensions.py |
| GPTQ_loader.py |
| html_generator.py |
| LoRA.py |
| models.py |
| RWKV.py |
| shared.py |
| text_generation.py |
| ui.py |