mirror of
https://github.com/oobabooga/text-generation-webui.git
synced 2026-04-06 07:03:37 +00:00
Add guard against training with llama.cpp loader
This commit is contained in:
parent 5a91b8462f
commit f6ffecfff2
2 changed files with 6 additions and 1 deletion
@@ -4,7 +4,7 @@ A LoRA is tied to a specific model architecture — a LoRA trained on Llama 3 8B
 
 ### Quick Start
 
-1. Load your base model (no LoRAs loaded).
+1. Load your base model with the **Transformers** loader (no LoRAs loaded).
 2. Open the **Training** tab > **Train LoRA**.
 3. Pick a dataset and configure parameters (see [below](#parameters)).
 4. Click **Start LoRA Training** and monitor the [loss](#loss).
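The doc change above reflects the guard this commit adds: LoRA training needs a loader that exposes trainable PyTorch modules, which the llama.cpp loader (serving quantized GGUF weights through C bindings) does not. A minimal sketch of such a guard is below; the function name and error message are hypothetical, not the repository's actual implementation:

```python
def validate_loader_for_training(loader_name: str) -> None:
    """Raise if the active loader cannot back LoRA training.

    Hypothetical guard: llama.cpp serves quantized GGUF weights through
    C bindings, so there are no PyTorch modules to attach LoRA adapters
    to. Training requires a loader such as Transformers.
    """
    if loader_name.lower().replace("-", "").replace("_", "") in ("llama.cpp", "llamacpp"):
        raise ValueError(
            "LoRA training is not supported with the llama.cpp loader. "
            "Reload the model with the Transformers loader first."
        )
```

A caller would invoke this once at the top of the training entry point, so the user gets an immediate, actionable error instead of a failure deep inside the training loop.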