Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2026-01-06 16:50:11 +01:00
* Add support for other model types, dependent on future PEFT changes, with a fallback to the current function for now
* Use `encoding="utf-8"` when reading training format files
* Make shuffling optional and describe dropout in a bit more detail
* Add `eval_steps` to control evaluation frequency
* Make callbacks independent of globals
* Make save steps controllable
* Add a placeholder for initial loading-existing-model support, plus variable-name cleanup
* Save/load training parameters
* Final bit of cleanup
* Remove the `gptq_bits` reference, as the main branch removed that setting
* Add a `higher_rank_limit` option. A rank of 2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size = 1 on a 7B model. Note that it lives in the `do_train` input only so it is saved as a parameter
* Fix the math on `save_steps`
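The `save_steps` math mentioned above typically involves converting a user-facing "save every N examples" value into the optimizer-step units a trainer expects. A minimal sketch of that conversion, using hypothetical parameter names (`save_every_n_examples`, `micro_batch_size`, `grad_accum_steps`) that are not taken from the actual code:

```python
import math

def compute_save_steps(save_every_n_examples: int,
                       micro_batch_size: int,
                       grad_accum_steps: int) -> int:
    """Convert an example count into optimizer-step units.

    One optimizer step consumes micro_batch_size * grad_accum_steps
    examples, so dividing by that (and rounding up) gives the step
    interval; the max() guard keeps the interval at least 1.
    """
    examples_per_step = micro_batch_size * grad_accum_steps
    return max(1, math.ceil(save_every_n_examples / examples_per_step))

# e.g. save roughly every 500 examples with micro-batch 4 and
# 32 gradient-accumulation steps: 500 / 128 rounds up to 4 steps
print(compute_save_steps(500, 4, 32))  # → 4
```

Rounding up rather than down ensures a small example count still triggers at least one checkpoint per interval instead of dividing to zero.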
| File |
|---|
| api.py |
| callbacks.py |
| chat.py |
| deepspeed_parameters.py |
| extensions.py |
| GPTQ_loader.py |
| html_generator.py |
| llama_attn_hijack.py |
| llamacpp_model.py |
| llamacpp_model_alternative.py |
| LoRA.py |
| models.py |
| RWKV.py |
| shared.py |
| text_generation.py |
| training.py |
| ui.py |