oobabooga
b5cac2e3b2
Fix swipes and edits for tool calling in the UI
2026-03-12 01:53:37 -03:00
oobabooga
0d62038710
Add tools refresh button and _tool_turn comment
2026-03-12 01:36:07 -03:00
oobabooga
cf9ad8eafe
Initial tool-calling support in the UI
2026-03-12 01:16:19 -03:00
oobabooga
980a9d1657
UI: Minor defensive changes to autosave
2026-03-11 15:50:16 -07:00
oobabooga
bb00d96dc3
Use a new gr.DragDrop element for Sampler priority + update gradio
2026-03-11 19:35:12 -03:00
oobabooga
3304b57bdf
Add native logit_bias and logprobs support for ExLlamav3
2026-03-10 11:03:25 -03:00
oobabooga
8aeaa76365
Forward logit_bias, logprobs, and n to llama.cpp backend
...
- Forward logit_bias and logprobs natively to llama.cpp
- Support n>1 completions with seed increment for diversity
- Fix logprobs returning empty dict when not requested
2026-03-10 10:41:45 -03:00
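The n>1 behavior described above can be sketched as follows; this is an editorial illustration with invented names, not the project's actual code:

```python
import random

def generate_n_completions(prompt, n, seed, generate):
    """Hypothetical sketch: produce n completions, incrementing the seed
    per completion so identical sampling settings still yield diverse
    outputs instead of n identical strings."""
    return [generate(prompt, seed=seed + i) for i in range(n)]

def fake_generate(prompt, seed):
    # Toy stand-in for the backend: deterministic for a given seed.
    rng = random.Random(seed)
    return f"{prompt}-{rng.randint(0, 9999)}"

results = generate_n_completions("hello", n=3, seed=7, generate=fake_generate)
```

Because each completion's seed is offset by its index, the whole batch remains reproducible for a fixed base seed.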
oobabooga
6ec4ca8b10
Add missing custom_token_bans to llama.cpp and reasoning_effort to ExLlamav3
2026-03-10 09:58:00 -03:00
oobabooga
307c085d1b
Minor warning change
2026-03-09 21:44:53 -07:00
oobabooga
c604ca66de
Update the --multi-user warning
2026-03-09 21:36:04 -07:00
oobabooga
7f485274eb
Fix ExLlamaV3 EOS handling, load order, and perplexity evaluation
...
- Use config.eos_token_id_list for all EOS tokens as stop conditions
(fixes models like Llama-3 that define multiple EOS token IDs)
- Load vision/draft models before main model so autosplit accounts
for their VRAM usage
- Fix loss computation in ExLlamav3_HF: use cache across chunks so
sequences longer than 2048 tokens get correct perplexity values
2026-03-09 23:56:38 -03:00
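The aggregation side of the perplexity fix can be sketched like this; `chunk_nlls` is a hypothetical structure, and the real fix also involves carrying the KV cache across chunks so the per-token values themselves are correct:

```python
import math

def perplexity_from_chunks(chunk_nlls):
    """Sketch: sum per-token negative log-likelihoods and token counts
    over ALL chunks before exponentiating, rather than treating each
    2048-token chunk as an independent sequence."""
    total_nll = sum(sum(chunk) for chunk in chunk_nlls)
    total_tokens = sum(len(chunk) for chunk in chunk_nlls)
    return math.exp(total_nll / total_tokens)

# Two chunks with uniform per-token NLL of ln(2) give perplexity 2.
ppl = perplexity_from_chunks([[math.log(2.0)] * 4, [math.log(2.0)] * 4])
```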
oobabooga
39e6c997cc
Refactor to not import gradio in --nowebui mode
2026-03-09 19:29:24 -07:00
oobabooga
40f1837b42
README: Minor updates
2026-03-08 08:38:29 -07:00
oobabooga
f6ffecfff2
Add guard against training with llama.cpp loader
2026-03-08 10:47:59 -03:00
oobabooga
5a91b8462f
Remove ctx_size_draft from ExLlamav3 loader
2026-03-08 09:53:48 -03:00
oobabooga
7a8ca9f2b0
Fix passing adaptive-p to llama-server
2026-03-08 04:09:40 -07:00
oobabooga
baf4e13ff1
ExLlamav3: fix draft cache size to match main cache
2026-03-07 22:34:48 -03:00
oobabooga
6ff111d18e
ExLlamav3: handle exceptions in ConcurrentGenerator iterate loop
2026-03-07 22:05:31 -03:00
oobabooga
304510eb3d
ExLlamav3: route all generation through ConcurrentGenerator
2026-03-07 05:54:14 -08:00
oobabooga
abc699db9b
Minor UI change
2026-03-06 19:03:38 -08:00
oobabooga
7ea5513263
Handle Qwen 3.5 thinking blocks
2026-03-06 19:01:28 -08:00
oobabooga
5fa709a3f4
llama.cpp server: use port+5 offset and suppress "No parser definition detected" logs
2026-03-06 18:52:34 -08:00
oobabooga
1eead661c3
Portable mode: always use ../user_data if it exists
2026-03-06 18:04:48 -08:00
oobabooga
d48b53422f
Training: Optimize _peek_json_keys to avoid loading entire file into memory
2026-03-06 15:39:08 -08:00
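The idea behind the optimization can be sketched as follows; the real `_peek_json_keys` may differ in detail, but the principle is to stop reading as soon as the first record parses:

```python
import json
import tempfile

def peek_json_keys(path, chunk_size=4096, max_bytes=1 << 20):
    """Hedged sketch: grow a buffer chunk by chunk and return the keys of
    the first JSON object as soon as it decodes, instead of loading the
    entire dataset file just to inspect its structure."""
    decoder = json.JSONDecoder()
    buf = ""
    with open(path, "r", encoding="utf-8") as f:
        while len(buf) < max_bytes:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf += chunk
            start = buf.find("{")
            if start == -1:
                continue
            try:
                first_obj, _ = decoder.raw_decode(buf[start:])
                return list(first_obj.keys())
            except json.JSONDecodeError:
                continue  # first object still truncated; read more
    return []

# Demo on a dataset far larger than the first read chunk.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([{"instruction": "hi", "output": "yo"}] * 5000, f)
    demo_path = f.name
keys = peek_json_keys(demo_path)
```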
oobabooga
5f6754c267
Fix stop button being ignored when token throttling is off
2026-03-06 17:12:34 -03:00
oobabooga
b8b4471ab5
Security: restrict file writes to user_data_dir, block extra_flags from API
2026-03-06 16:58:11 -03:00
oobabooga
d03923924a
Several small fixes
...
- Stop llama-server subprocess on model unload instead of relying on GC
- Fix tool_calls[].index being string instead of int in API responses
- Omit tool_calls key from API response when empty per OpenAI spec
- Prevent division by zero when micro_batch_size > batch_size in training
- Copy sampler_priority list before mutating in ExLlamaV3
- Normalize presence/frequency_penalty names for ExLlamaV3 sampler sorting
- Restore original chat_template after training instead of leaving it mutated
2026-03-06 16:52:13 -03:00
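The division-by-zero bullet above can be illustrated with a minimal guard (the function name is invented; gradient-accumulation steps are conventionally `batch_size // micro_batch_size`):

```python
def grad_accum_steps(batch_size, micro_batch_size):
    """Sketch of the guard: clamp to at least 1 so setting
    micro_batch_size > batch_size can no longer divide by zero."""
    return max(1, batch_size // micro_batch_size)
```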
oobabooga
044566d42d
API: Add tool call parsing for DeepSeek, GLM, MiniMax, and Kimi models
2026-03-06 15:06:56 -03:00
oobabooga
f5acf55207
Add --chat-template-file flag to override the default instruction template for API requests
...
Matches llama.cpp's flag name. Supports .jinja, .jinja2, and .yaml files.
Priority: per-request params > --chat-template-file > model's built-in template.
2026-03-06 14:04:16 -03:00
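The stated priority order can be sketched as a simple first-match resolver; argument names here are illustrative assumptions:

```python
def resolve_chat_template(request_template=None, cli_template=None, model_template=None):
    """Sketch of the priority above:
    per-request params > --chat-template-file > model's built-in template."""
    for candidate in (request_template, cli_template, model_template):
        if candidate:
            return candidate
    return None
```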
oobabooga
93ebfa2b7e
Fix llama-server output filter for new log format
2026-03-06 02:38:13 -03:00
oobabooga
eba262d47a
Security: prevent path traversal in character/user/file save and delete
2026-03-06 02:00:10 -03:00
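A typical guard against this class of path-traversal bug looks like the following sketch; it is not the project's exact code, just the standard resolve-and-check pattern:

```python
from pathlib import Path

def safe_user_path(user_data_dir, relative_name):
    """Hedged sketch: resolve the requested path and refuse anything
    that escapes the user_data directory, including ../ sequences."""
    base = Path(user_data_dir).resolve()
    target = (base / relative_name).resolve()
    if target != base and base not in target.parents:
        raise ValueError(f"path escapes user data dir: {relative_name}")
    return target
```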
oobabooga
66fb79fe15
llama.cpp: Add --fit-target param
2026-03-06 01:55:48 -03:00
oobabooga
e81a47f708
Improve the API generation defaults --help message
2026-03-05 20:41:45 -08:00
oobabooga
27bcc45c18
API: Add command-line flags to override default generation parameters
2026-03-06 01:36:45 -03:00
oobabooga
8a9afcbec6
Allow extensions to skip output post-processing
2026-03-06 01:19:46 -03:00
oobabooga
ddcad3cc51
Follow-up to e2548f69: add missing paths module, fix gallery extension
2026-03-06 00:58:03 -03:00
oobabooga
8d43123f73
API: Fix function calling for Qwen, Mistral, GPT-OSS, and other models
...
The tool call response parser only handled JSON-based formats, causing
tool_calls to always be empty for models that use non-JSON formats.
Add parsers for three additional tool call formats:
- Qwen3.5: <tool_call><function=name><parameter=key>value</parameter>
- Mistral/Devstral: functionName{"arg": "value"}
- GPT-OSS: <|channel|>commentary to=functions.name<|message|>{...}
Also fix multi-turn tool conversations crashing with Jinja2
UndefinedError on tool_call_id by preserving tool_calls and
tool_call_id metadata through the chat history conversion.
2026-03-06 00:55:33 -03:00
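A parser for the Qwen3.5-style format listed above might look like this sketch; the closing tags are assumed, and the real parser may differ:

```python
import re

FUNCTION_RE = re.compile(r"<function=(\w+)>(.*?)</function>", re.DOTALL)
PARAMETER_RE = re.compile(r"<parameter=(\w+)>(.*?)</parameter>", re.DOTALL)

def parse_qwen_tool_calls(text):
    """Sketch of a non-JSON tool-call parser: extract each function
    block, then collect its key/value parameters."""
    calls = []
    for name, body in FUNCTION_RE.findall(text):
        arguments = {k: v.strip() for k, v in PARAMETER_RE.findall(body)}
        calls.append({"name": name, "arguments": arguments})
    return calls

sample = (
    "<tool_call><function=get_weather>"
    "<parameter=city>Paris</parameter>"
    "</function></tool_call>"
)
calls = parse_qwen_tool_calls(sample)
```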
oobabooga
e2548f69a9
Make user_data configurable: add --user-data-dir flag, auto-detect ../user_data
...
If --user-data-dir is not set, auto-detect: use ../user_data when
./user_data doesn't exist, making it easy to share user data across
portable builds by placing it one folder up.
2026-03-05 19:31:10 -08:00
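The auto-detection described above can be sketched as follows; the function name is invented, but the fallback order matches the commit body:

```python
import tempfile
from pathlib import Path

def detect_user_data_dir(script_dir, cli_dir=None):
    """Sketch: an explicit --user-data-dir always wins; otherwise use
    ./user_data, falling back to ../user_data when the local directory
    doesn't exist (shared data for portable builds)."""
    if cli_dir:
        return Path(cli_dir)
    local = Path(script_dir) / "user_data"
    parent = Path(script_dir).parent / "user_data"
    if not local.exists() and parent.exists():
        return parent
    return local

# Demo: a portable build with shared user_data one folder up.
root = Path(tempfile.mkdtemp())
app = root / "portable_build"
app.mkdir()
(root / "user_data").mkdir()
shared = detect_user_data_dir(app)
```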
oobabooga
4c406e024f
API: Speed up chat completions by ~85ms per request
2026-03-05 18:36:07 -08:00
oobabooga
249bd6eea2
UI: Update the parallel info message
2026-03-05 18:11:55 -08:00
oobabooga
f52d9336e5
TensorRT-LLM: Migrate from ModelRunner to LLM API, add concurrent API request support
2026-03-05 18:09:45 -08:00
oobabooga
9824c82cb6
API: Add parallel request support for llama.cpp and ExLlamaV3
2026-03-05 16:49:58 -08:00
oobabooga
2f08dce7b0
Remove ExLlamaV2 backend
...
- archived upstream: 7dc12af3a8
- replaced by ExLlamaV3, which has much better quantization accuracy
2026-03-05 14:02:13 -08:00
oobabooga
86d8291e58
Training: UI cleanup and better defaults
2026-03-05 11:20:55 -08:00
oobabooga
33ff3773a0
Clean up LoRA loading parameter handling
2026-03-05 16:00:13 -03:00
oobabooga
7a1fa8c9ea
Training: fix checkpoint resume and surface training errors to UI
2026-03-05 15:50:39 -03:00
oobabooga
275810c843
Training: wire up HF Trainer checkpoint resumption for full state recovery
2026-03-05 15:32:49 -03:00
oobabooga
63f28cb4a2
Training: align defaults with peft/axolotl (rank 8, alpha 16, dropout 0, cutoff 512, eos on)
2026-03-05 15:12:32 -03:00
oobabooga
33a38d7ece
Training: drop conversations exceeding cutoff length instead of truncating
2026-03-05 14:56:27 -03:00
oobabooga
c2e494963f
Training: fix silent error on model reload failure, minor cleanups
2026-03-05 14:41:44 -03:00