Commit graph

2151 commits

Author SHA1 Message Date
oobabooga
bda95172bd Fix stopping string detection for chromadb/context-1 2026-03-28 06:09:53 -07:00
oobabooga
4cbea02ed4 Add ik_llama.cpp support via --ik flag 2026-03-26 06:54:47 -07:00
oobabooga
e154140021 Rename "truncation length" to "context length" in logs 2026-03-25 07:21:02 -07:00
oobabooga
368f37335f Fix --idle-timeout issues with encode/decode and parallel generation 2026-03-25 06:37:45 -07:00
oobabooga
d6f1485dd1 UI: Update the enable_thinking info message 2026-03-24 21:45:11 -07:00
oobabooga
807be11832 Remove obsolete models/config.yaml and related code 2026-03-24 18:48:50 -07:00
oobabooga
750502695c Fix GPT-OSS tool-calling after 9ec20d97 2026-03-24 11:39:24 -07:00
oobabooga
a7ef430b38 Revert "llama.cpp: Don't suppress llama-server logs"
This reverts commit 9488df3e48.
2026-03-23 20:22:51 -07:00
oobabooga
286bbb685d Revert "Follow-up to previous commit"
This reverts commit 1dda5e4711.
2026-03-23 20:22:46 -07:00
oobabooga
02f18a1d65 API: Add thinking block signature field, fix error codes, clean up logging 2026-03-23 07:06:38 -07:00
oobabooga
307d0c92be UI polish 2026-03-23 06:35:14 -07:00
oobabooga
9ec20d9730 Strip thinking blocks before tool-call parsing 2026-03-22 19:19:14 -07:00
Phrosty1
bde496ea5d Fix prompt corruption when continuing with context truncation (#7439) 2026-03-22 21:48:56 -03:00
oobabooga
1dda5e4711 Follow-up to previous commit 2026-03-21 20:58:45 -07:00
oobabooga
9488df3e48 llama.cpp: Don't suppress llama-server logs 2026-03-21 20:47:26 -07:00
oobabooga
2c4f364339 Update API docs to mention Anthropic support 2026-03-21 18:38:11 -07:00
oobabooga
f2c909725e API: Use top_p=0.95 by default 2026-03-21 11:11:09 -07:00
oobabooga
0216893475 API: Add Anthropic-compatible /v1/messages endpoint 2026-03-20 20:38:55 -07:00
oobabooga
f0e3997f37 Add missing __init__.py to modules/grammar 2026-03-20 16:04:57 -03:00
oobabooga
7c79143a14 API: Fix _start_cloudflared raising after first attempt instead of exhausting retries 2026-03-20 15:03:49 -03:00
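The `_start_cloudflared` fix above follows a common retry pattern: loop over an attempt budget and only raise after every attempt has failed, rather than on the first error. A minimal sketch of that pattern; `start_tunnel`, `connect`, and the parameters are hypothetical names, not the repo's actual API:

```python
import time


def start_tunnel(connect, max_attempts=3, delay=0.0):
    """Call connect() up to max_attempts times before giving up."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()
        except Exception as error:
            last_error = error  # remember the failure and keep retrying
            if attempt < max_attempts:
                time.sleep(delay)
    # Only raise once the whole attempt budget is exhausted
    raise RuntimeError(f"all {max_attempts} attempts failed") from last_error
```

The buggy form of this pattern typically raises inside the `except` block, which turns a transient first failure into a hard error.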
oobabooga
1a910574c3 API: Fix debug_msg truthy check for OPENEDAI_DEBUG=0 2026-03-20 14:57:01 -03:00
oobabooga
bf6fbc019d API: Move OpenAI-compatible API from extensions/openai to modules/api 2026-03-20 14:46:00 -03:00
oobabooga
2e4232e02b Minor cleanup 2026-03-20 07:20:26 -07:00
oobabooga
e0e20ab9e7 Minor cleanup across multiple modules 2026-03-19 08:02:23 -07:00
oobabooga
dde1764763 Cleanup modules/chat.py 2026-03-18 21:12:14 -07:00
oobabooga
779e7611ff Use logger.exception() instead of traceback.print_exc() for error messages 2026-03-18 20:42:20 -07:00
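A minimal illustration of the idiom behind this commit: inside an `except` block, `logger.exception()` logs the message at ERROR level and appends the current traceback automatically, replacing manual `traceback.print_exc()` calls. The function and logger name are illustrative, not taken from the repo:

```python
import logging

logger = logging.getLogger("example")


def parse_config(text):
    try:
        return int(text)
    except ValueError:
        # Logs the message plus the full traceback of the active exception
        logger.exception("Failed to parse config value %r", text)
        return None
```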
oobabooga
ca36bd6eb6 API: Remove leading spaces from post-reasoning content 2026-03-18 07:36:11 -07:00
oobabooga
fef2bd8630 UI: Fix the instruction template delete dialog not appearing 2026-03-17 22:52:32 -07:00
oobabooga
c8bb2129ba Security: server-side file save roots, image URL SSRF protection, extension allowlist 2026-03-17 22:29:35 -07:00
oobabooga
08ff3f0f90 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2026-03-17 19:52:24 -07:00
oobabooga
7e54e7b7ae llama.cpp: Support literal flags in --extra-flags (e.g. --rpc, --jinja)
The old format is still accepted for backwards compatibility.
2026-03-17 19:47:55 -07:00
oobabooga
2a6b1fdcba Fix --extra-flags breaking short long-form-only flags like --rpc
Closes #7357
2026-03-17 18:29:15 -07:00
Alvin Tang
73a094a657 Fix file handle leaks and redundant re-read in get_model_metadata (#7422) 2026-03-17 22:06:05 -03:00
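The fix above targets two common pitfalls: handles opened with `open()` but never closed, and the same file parsed repeatedly instead of once. A sketch of the fixed shape, reading once inside a `with` block so the handle is closed even on error; `read_metadata` is an illustrative stand-in, not the repo's actual `get_model_metadata`:

```python
import json


def read_metadata(path):
    # `with` guarantees the handle is closed, even if json.load raises
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)  # single read; callers can cache the result
```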
RoomWithOutRoof
f0014ab01c fix: mutable default argument in LogitsBiasProcessor (#7426) 2026-03-17 22:03:48 -03:00
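This commit addresses the classic mutable-default-argument pitfall: a `def f(x=[])` default is created once at definition time and shared across all calls. The usual fix is a `None` sentinel with a fresh object allocated per call. The class below is a simplified stand-in, not the actual `LogitsBiasProcessor`:

```python
class Processor:
    def __init__(self, bias=None):
        # None sentinel: allocate a fresh list per instance instead of
        # sharing one list object across every Processor ever created
        self.bias = [] if bias is None else bias
```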
oobabooga
27a6cdeec1 Fix multi-turn thinking block corruption for Kimi models 2026-03-17 11:31:55 -07:00
oobabooga
2d141b54c5 Fix several typos 2026-03-17 11:11:12 -07:00
oobabooga
249861b65d web search: Update the user agents 2026-03-17 05:41:05 -07:00
oobabooga
dff8903b03 UI: Modernize the Gradio theme 2026-03-16 19:33:54 -07:00
oobabooga
238cbd5656 training: Remove arbitrary higher_rank_limit parameter 2026-03-16 16:05:43 -07:00
oobabooga
22ff5044b0 training: Organize the UI 2026-03-16 16:01:40 -07:00
oobabooga
1c89376370 training: Add gradient_checkpointing for lower VRAM by default 2026-03-16 15:23:24 -07:00
oobabooga
737ded6959 Web search: Fix SSRF validation to block all non-global IPs 2026-03-16 05:37:46 -07:00
oobabooga
c0de1d176c UI: Add an incognito chat option 2026-03-15 17:57:31 -07:00
oobabooga
92d376e420 web_search: Return all results and improve URL extraction 2026-03-15 13:14:53 -07:00
oobabooga
bfea49b197 Move top_p and top_k higher up in the UI and CLI help 2026-03-15 09:34:17 -07:00
oobabooga
80d0c03bab llama.cpp: Change the default --fit-target from 1024 to 512 2026-03-15 09:29:25 -07:00
oobabooga
9119ce0680 llama.cpp: Use --fit-ctx 8192 when --fit on is used
This sets the minimum acceptable context length, which by default is 4096.
2026-03-15 09:24:14 -07:00
oobabooga
5763cab3c4 Fix a crash loading the MiniMax-M2.5 jinja template 2026-03-15 07:13:26 -07:00
oobabooga
f0c16813ef Remove the rope scaling parameters
Models now have 131k+ context lengths. The parameters can still be
passed to llama.cpp through --extra-flags.
2026-03-14 19:43:25 -07:00
oobabooga
2d3a3794c9 Add a Top-P preset, make it the new default, clean up the built-in presets 2026-03-14 19:22:12 -07:00