Commit graph

2124 commits

Author SHA1 Message Date
oobabooga fef2bd8630 UI: Fix the instruction template delete dialog not appearing 2026-03-17 22:52:32 -07:00
oobabooga c8bb2129ba Security: server-side file save roots, image URL SSRF protection, extension allowlist 2026-03-17 22:29:35 -07:00
oobabooga 08ff3f0f90 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2026-03-17 19:52:24 -07:00
oobabooga 7e54e7b7ae llama.cpp: Support literal flags in --extra-flags (e.g. --rpc, --jinja)
The old format is still accepted for backwards compatibility.
2026-03-17 19:47:55 -07:00
oobabooga 2a6b1fdcba Fix --extra-flags breaking short long-form-only flags like --rpc
Closes #7357
2026-03-17 18:29:15 -07:00
Alvin Tang 73a094a657 Fix file handle leaks and redundant re-read in get_model_metadata (#7422) 2026-03-17 22:06:05 -03:00
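File-handle leaks like the one fixed above typically come from `open()` calls whose result is not closed on every exit path (early returns, exceptions). A generic sketch of the pitfall and the idiomatic `with`-block fix — this is illustrative only, not the project's actual `get_model_metadata` code:

```python
def read_magic_leaky(path):
    # Bug pattern: the handle is only closed on the happy path.
    f = open(path, "rb")
    header = f.read(4)
    if not header:
        return None  # leaks f on this return path
    f.close()
    return header

def read_magic(path):
    # The context manager closes the handle on every exit path,
    # including early returns and exceptions.
    with open(path, "rb") as f:
        return f.read(4) or None
```

The same pattern applies to any resource: acquire it in a `with` statement (or `try/finally`) so cleanup cannot be skipped.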
RoomWithOutRoof f0014ab01c fix: mutable default argument in LogitsBiasProcessor (#7426) 2026-03-17 22:03:48 -03:00
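The mutable-default-argument bug fixed above is a classic Python pitfall: a default list or dict is evaluated once, at function definition time, and then shared across all calls. A minimal generic illustration (the `LogitsBiasProcessor` name comes from the commit; this sketch is not the project's code):

```python
def append_bad(item, items=[]):
    # The default list is created once and reused, so state
    # leaks between calls that rely on the default.
    items.append(item)
    return items

def append_good(item, items=None):
    # Idiomatic fix: default to None and build a fresh list per call.
    if items is None:
        items = []
    items.append(item)
    return items
```

Calling `append_bad(1)` then `append_bad(2)` returns `[1, 2]` the second time, while `append_good` returns a fresh single-element list on each call.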
oobabooga 27a6cdeec1 Fix multi-turn thinking block corruption for Kimi models 2026-03-17 11:31:55 -07:00
oobabooga 2d141b54c5 Fix several typos 2026-03-17 11:11:12 -07:00
oobabooga 249861b65d web search: Update the user agents 2026-03-17 05:41:05 -07:00
oobabooga dff8903b03 UI: Modernize the Gradio theme 2026-03-16 19:33:54 -07:00
oobabooga 238cbd5656 training: Remove arbitrary higher_rank_limit parameter 2026-03-16 16:05:43 -07:00
oobabooga 22ff5044b0 training: Organize the UI 2026-03-16 16:01:40 -07:00
oobabooga 1c89376370 training: Add gradient_checkpointing for lower VRAM by default 2026-03-16 15:23:24 -07:00
oobabooga 737ded6959 Web search: Fix SSRF validation to block all non-global IPs 2026-03-16 05:37:46 -07:00
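Blocking "all non-global IPs" for SSRF protection generally means resolving the target host and rejecting any address that is not globally routable: loopback, private ranges, link-local, reserved, and multicast. A hedged sketch using the stdlib `ipaddress` module (an illustration of the technique, not the project's actual validator):

```python
import ipaddress
import socket

def is_safe_host(host: str) -> bool:
    """Reject hosts that resolve to any non-global address."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        # is_global is False for loopback, private, link-local,
        # reserved, and multicast ranges.
        if not addr.is_global:
            return False
    return True
```

Checking every resolved address (rather than just the first) matters because a hostname can resolve to a mix of public and internal addresses.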
oobabooga c0de1d176c UI: Add an incognito chat option 2026-03-15 17:57:31 -07:00
oobabooga 92d376e420 web_search: Return all results and improve URL extraction 2026-03-15 13:14:53 -07:00
oobabooga bfea49b197 Move top_p and top_k higher up in the UI and CLI help 2026-03-15 09:34:17 -07:00
oobabooga 80d0c03bab llama.cpp: Change the default --fit-target from 1024 to 512 2026-03-15 09:29:25 -07:00
oobabooga 9119ce0680 llama.cpp: Use --fit-ctx 8192 when --fit on is used
This sets the minimum acceptable context length, which by default is 4096.
2026-03-15 09:24:14 -07:00
oobabooga 5763cab3c4 Fix a crash loading the MiniMax-M2.5 jinja template 2026-03-15 07:13:26 -07:00
oobabooga f0c16813ef Remove the rope scaling parameters
Models now ship with 131k+ context lengths. The parameters can still be
passed to llama.cpp through --extra-flags.
2026-03-14 19:43:25 -07:00
oobabooga 2d3a3794c9 Add a Top-P preset, make it the new default, clean up the built-in presets 2026-03-14 19:22:12 -07:00
oobabooga b9bdbd638e Fix after 4ae2bd86e2 2026-03-14 18:18:33 -07:00
oobabooga e11425d5f8 Fix relative redirect handling in web page fetcher 2026-03-14 15:46:21 -07:00
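A relative `Location` header in an HTTP redirect must be resolved against the URL of the response that issued it. The stdlib `urljoin` handles absolute URLs, root-relative paths, and purely relative paths uniformly; a sketch of the technique (not the fetcher's actual code):

```python
from urllib.parse import urljoin

def resolve_redirect(current_url: str, location: str) -> str:
    # urljoin resolves "/c" against the host root, "c" against the
    # current path, and passes absolute URLs through unchanged.
    return urljoin(current_url, location)
```

For example, following `Location: /c` from `https://example.com/a/b` should land on `https://example.com/c`, not on a literal `/c` request.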
oobabooga 4ae2bd86e2 Change the default ctx-size to 0 (auto) for llama.cpp 2026-03-14 15:30:01 -07:00
oobabooga 573617157a Optimize tool call detection
Skips templates that don't contain a given required keyword
2026-03-14 12:09:41 -07:00
oobabooga d0a4993cf4 UI: Increase ctx-size slider maximum to 1M and step to 1024 2026-03-14 09:53:12 -07:00
oobabooga c908ac00d7 Replace html2text with trafilatura for better web content extraction
After this change, a lot of boilerplate is removed from web pages, saving tokens in agentic loops.
2026-03-14 09:29:17 -07:00
oobabooga 8bff331893 UI: Fix tool call markup flashing before accordion appears during streaming 2026-03-14 09:26:20 -07:00
oobabooga cb08ba63dc Fix GPT-OSS channel markup leaking into UI when model skips analysis block 2026-03-14 09:08:05 -07:00
oobabooga 09a6549816 API: Stream reasoning_content separately from content in OpenAI-compatible responses 2026-03-14 06:52:40 -07:00
oobabooga accb2ef661 UI/API: Prevent tool call markup from leaking into streamed UI output (closes #7427) 2026-03-14 06:26:47 -07:00
oobabooga e8d1c66303 Clean up tool calling code 2026-03-13 18:27:01 -07:00
oobabooga 24e7e77b55 Clean up 2026-03-13 12:37:10 -07:00
oobabooga 5362bbb413 Make web_search not download the page contents, use fetch_webpage instead 2026-03-13 12:09:08 -07:00
oobabooga aab2596d29 UI: Fix multiple thinking blocks rendering as raw text in HTML generator 2026-03-13 15:47:11 -03:00
oobabooga e0a38da9f3 Improve tool call parsing for Devstral/GPT-OSS and preserve thinking across tool turns 2026-03-13 11:04:06 -03:00
oobabooga c39c187f47 UI: Improve the style of table scrollbars 2026-03-13 03:21:47 -07:00
oobabooga c094bc943c UI: Skip output extensions on intermediate tool-calling turns 2026-03-12 21:45:38 -07:00
oobabooga 85ec85e569 UI: Fix Continue while in a tool-calling loop, remove the upper limit on number of tool calls 2026-03-12 20:22:35 -07:00
oobabooga 04213dff14 Address copilot feedback 2026-03-12 19:55:20 -07:00
oobabooga 58f26a4cc7 UI: Skip redundant work in chat loop when no tools are selected 2026-03-12 19:18:55 -07:00
oobabooga 286ae475f6 UI: Clean up tool calling code 2026-03-12 22:39:38 -03:00
oobabooga a09f21b9de UI: Fix tool calling for GPT-OSS and Continue 2026-03-12 22:17:20 -03:00
oobabooga 5c02b7f603 Allow the fetch_webpage tool to return links 2026-03-12 17:08:30 -07:00
oobabooga 09d5e049d6 UI: Improve the Tools checkbox list style 2026-03-12 16:53:49 -07:00
oobabooga 4f82b71ef3 UI: Bump the ctx-size max from 131072 to 262144 (256K) 2026-03-12 14:56:35 -07:00
oobabooga bbd43d9463 UI: Correctly propagate truncation_length when ctx_size is auto 2026-03-12 14:54:05 -07:00
oobabooga 3e6bd1a310 UI: Prepend thinking tag when template appends it to prompt
Makes Qwen models show a thinking block immediately during streaming.
2026-03-12 14:30:51 -07:00