Add guard against training with llama.cpp loader

This commit is contained in:
oobabooga 2026-03-08 10:46:51 -03:00
parent 5a91b8462f
commit f6ffecfff2
2 changed files with 6 additions and 1 deletion

@@ -4,7 +4,7 @@ A LoRA is tied to a specific model architecture — a LoRA trained on Llama 3 8B
 ### Quick Start
-1. Load your base model (no LoRAs loaded).
+1. Load your base model with the **Transformers** loader (no LoRAs loaded).
 2. Open the **Training** tab > **Train LoRA**.
 3. Pick a dataset and configure parameters (see [below](#parameters)).
 4. Click **Start LoRA Training** and monitor the [loss](#loss).
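The guard this commit adds can be sketched as follows. This is a minimal illustration only; the function and constant names are hypothetical and do not reflect the repository's actual API.

```python
# Sketch of a pre-training loader guard (hypothetical names, not the
# actual text-generation-webui implementation).

SUPPORTED_LOADER = "Transformers"

def check_loader(loader_name: str) -> None:
    """Raise before training starts if the model was loaded with an
    unsupported backend such as llama.cpp."""
    if loader_name != SUPPORTED_LOADER:
        raise ValueError(
            f"LoRA training requires the {SUPPORTED_LOADER} loader; "
            f"the current model was loaded with {loader_name}."
        )

# A guard like this would run at the top of the training entry point,
# so the user gets a clear error instead of a confusing failure later.
check_loader("Transformers")  # passes silently
```

The point of failing early is that a model loaded through llama.cpp exposes no trainable PyTorch parameters, so without the guard the training code would fail later with a much less informative error.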