Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2026-03-07 06:03:51 +01:00
- Use peft's "all-linear" for target modules instead of the old model_to_lora_modules mapping (which only knew ~39 model types)
- Add a "Target all linear layers" checkbox, on by default
- Fix labels in tokenize(): they were [1]s instead of the actual token IDs
- Replace DataCollatorForLanguageModeling with a custom collate_fn
- Raw text: concatenate and split into fixed-size blocks instead of overlapping chunks
- Adapter backup/loading: check for safetensors before bin
- Fix report_to=None crash on transformers 5.x
- Fix the no_cuda deprecation for transformers 5.x (use use_cpu)
- Move torch.compile before Trainer init
- Add remove_unused_columns=False (torch.compile breaks column detection)
- Guard against no target modules being selected
- Set tracked.did_save so we don't always save twice
- pad_token_id: fall back to eos_token_id instead of hardcoding 0
- Drop MODEL_CLASSES, split_chunks, and cut_chunk_for_newline
- Update docs
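The three data-pipeline changes above (real labels in tokenize(), a custom collate_fn, and concatenate-and-split for raw text) can be sketched in pure Python. This is an illustrative sketch only, not the repository's actual code: the real implementation works on torch tensors via the model's tokenizer, and the function names here are hypothetical.

```python
def tokenize(token_ids):
    # Causal-LM labels must be the token IDs themselves (the model shifts
    # them internally), not placeholder [1]s -- the loss needs real targets.
    return {"input_ids": list(token_ids), "labels": list(token_ids)}


def split_into_blocks(token_ids, block_size):
    # Raw-text handling: concatenate everything into one token stream,
    # then split into non-overlapping fixed-size blocks
    # (instead of overlapping chunks).
    return [token_ids[i:i + block_size]
            for i in range(0, len(token_ids), block_size)]


def collate_fn(batch, pad_token_id):
    # Custom collator: pad input_ids with pad_token_id, but pad labels
    # with -100 so padding positions are ignored by the cross-entropy loss.
    max_len = max(len(ex["input_ids"]) for ex in batch)
    input_ids, labels = [], []
    for ex in batch:
        pad = max_len - len(ex["input_ids"])
        input_ids.append(ex["input_ids"] + [pad_token_id] * pad)
        labels.append(ex["labels"] + [-100] * pad)
    return {"input_ids": input_ids, "labels": labels}
```

Padding labels with -100 matters because that is the index PyTorch's cross-entropy loss ignores by default; padding labels with the pad token itself would train the model to emit padding.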
- 01 - Chat Tab.md
- 02 - Default and Notebook Tabs.md
- 03 - Parameters Tab.md
- 04 - Model Tab.md
- 05 - Training Tab.md
- 06 - Session Tab.md
- 07 - Extensions.md
- 08 - Additional Tips.md
- 09 - Docker.md
- 11 - AMD Setup.md
- 12 - OpenAI API.md
- 13 - Keyboard Shortcuts.md
- Image Generation Tutorial.md
- Multimodal Tutorial.md
- README.md
- What Works.md
These files are a mirror of the documentation at:
https://github.com/oobabooga/text-generation-webui/wiki
It is recommended to browse the documentation there. Contributions can be sent here and will later be synced with the wiki.