Commit graph

1912 commits

Author SHA1 Message Date
oobabooga 030ba7bfeb UI: Mention that Seed-OSS uses enable_thinking 2025-08-27 07:44:35 -07:00
oobabooga 0b4518e61c "Text generation web UI" -> "Text Generation Web UI" 2025-08-27 05:53:09 -07:00
oobabooga 02ca96fa44 Multiple fixes 2025-08-25 22:17:22 -07:00
oobabooga 6a7166fffa Add support for the Seed-OSS template 2025-08-25 19:46:48 -07:00
oobabooga 8fcb4b3102 Make bot_prefix extensions functional again 2025-08-25 19:10:46 -07:00
oobabooga 8f660aefe3 Fix chat-instruct replies leaking the bot name sometimes 2025-08-25 18:50:16 -07:00
oobabooga a531328f7e Fix the GPT-OSS stopping string 2025-08-25 18:41:58 -07:00
oobabooga 6c165d2e55 Fix the chat template 2025-08-25 18:28:43 -07:00
oobabooga b657be7381 Obtain stopping strings in chat mode 2025-08-25 18:22:08 -07:00
oobabooga ded6c41cf8 Fix impersonate for chat-instruct 2025-08-25 18:16:17 -07:00
oobabooga c1aa4590ea Code simplifications, fix impersonate 2025-08-25 18:05:40 -07:00
oobabooga b330ec3517 Simplifications 2025-08-25 17:54:15 -07:00
oobabooga 3ad5970374 Make the llama.cpp --verbose output less verbose 2025-08-25 17:43:21 -07:00
oobabooga adeca8a658 Remove changes to the jinja2 templates 2025-08-25 17:36:01 -07:00
oobabooga aad0104c1b Remove a function 2025-08-25 17:33:13 -07:00
oobabooga f919cdf881 chat.py code simplifications 2025-08-25 17:20:51 -07:00
oobabooga d08800c359 chat.py improvements 2025-08-25 17:03:37 -07:00
oobabooga 3bc48014a5 chat.py code simplifications 2025-08-25 16:48:21 -07:00
oobabooga 2478294c06 UI: Preload the instruct and chat fonts 2025-08-24 12:37:41 -07:00
oobabooga 8be798e15f llama.cpp: Fix stderr deadlock while loading some multimodal models 2025-08-24 12:20:05 -07:00
oobabooga 7fe8da8944 Minor simplification after f247c2ae62 2025-08-22 14:42:56 -07:00
oobabooga f247c2ae62 Make --model work with absolute paths, eg --model /tmp/gemma-3-270m-it-IQ4_NL.gguf 2025-08-22 11:47:33 -07:00
oobabooga 9e7b326e34 Lint 2025-08-19 06:50:40 -07:00
oobabooga 1972479610 Add the TP option to exllamav3_HF 2025-08-19 06:48:22 -07:00
oobabooga e0f5905a97 Code formatting 2025-08-19 06:34:05 -07:00
oobabooga 5b06284a8a UI: Keep ExLlamav3_HF selected if already selected for EXL3 models 2025-08-19 06:23:21 -07:00
oobabooga cbba58bef9 UI: Fix code blocks having an extra empty line 2025-08-18 15:50:09 -07:00
oobabooga 7d23a55901 Fix model unloading when switching loaders (closes #7203) 2025-08-18 09:05:47 -07:00
oobabooga 64eba9576c mtmd: Fix a bug when "include past attachments" is unchecked 2025-08-17 14:08:40 -07:00
oobabooga dbabe67e77 ExLlamaV3: Enable the --enable-tp option, add a --tp-backend option 2025-08-17 13:19:11 -07:00
oobabooga d771ca4a13 Fix web search (attempt) 2025-08-14 12:05:14 -07:00
altoiddealer 57f6e9af5a Set multimodal status during Model Loading (#7199) 2025-08-13 16:47:27 -03:00
oobabooga 41b95e9ec3 Lint 2025-08-12 13:37:37 -07:00
oobabooga 7301452b41 UI: Minor info message change 2025-08-12 13:23:24 -07:00
oobabooga 8d7b88106a Revert "mtmd: Fail early if images are provided but the model doesn't support them (llama.cpp)" (reverts d8fcc71616) 2025-08-12 13:20:16 -07:00
oobabooga 2238302b49 ExLlamaV3: Add speculative decoding 2025-08-12 08:50:45 -07:00
oobabooga d8fcc71616 mtmd: Fail early if images are provided but the model doesn't support them (llama.cpp) 2025-08-11 18:02:33 -07:00
oobabooga e6447cd24a mtmd: Update the llama-server request 2025-08-11 17:42:35 -07:00
oobabooga 0e3def449a llama.cpp: --swa-full to llama-server when streaming-llm is checked 2025-08-11 15:17:25 -07:00
oobabooga 0e88a621fd UI: Better organize the right sidebar 2025-08-11 15:16:03 -07:00
oobabooga a78ca6ffcd Remove a comment 2025-08-11 12:33:38 -07:00
oobabooga 999471256c Lint 2025-08-11 12:32:17 -07:00
oobabooga b62c8845f3 mtmd: Fix /chat/completions for llama.cpp 2025-08-11 12:01:59 -07:00
oobabooga 38c0b4a1ad Default ctx-size to 8192 when not found in the metadata 2025-08-11 07:39:53 -07:00
oobabooga 52d1cbbbe9 Fix an import 2025-08-11 07:38:39 -07:00
oobabooga 4809ddfeb8 Exllamav3: small sampler fixes 2025-08-11 07:35:22 -07:00
oobabooga 4d8dbbab64 API: Fix sampler_priority usage for ExLlamaV3 2025-08-11 07:26:11 -07:00
oobabooga 0ea62d88f6 mtmd: Fix "continue" when an image is present 2025-08-09 21:47:02 -07:00
oobabooga 2f90ac9880 Move the new image_utils.py file to modules/ 2025-08-09 21:41:38 -07:00
oobabooga c6b4d1e87f Fix the exllamav2 loader ignoring add_bos 2025-08-09 21:34:35 -07:00