Commit graph

4842 commits

Author SHA1 Message Date
oobabooga 8fcadff8d3 mtmd: Use the base64 attachment for the UI preview instead of the file 2025-08-08 20:13:54 -07:00
oobabooga 6e9de75727 Support loading chat templates from chat_template.json files 2025-08-08 19:35:09 -07:00
Katehuuh 88127f46c1 Add multimodal support (ExLlamaV3) (#7174) 2025-08-08 23:31:16 -03:00
oobabooga b391ac8eb1 Fix getting the ctx-size for EXL3/EXL2/Transformers models 2025-08-08 18:11:45 -07:00
oobabooga f1147c9926 Update llama.cpp 2025-08-06 19:32:36 -07:00
oobabooga 3e24f455c8 Fix continue for GPT-OSS (hopefully the final fix) 2025-08-06 10:18:42 -07:00
oobabooga 0c1403f2c7 Handle GPT-OSS as a special case when continuing 2025-08-06 08:05:37 -07:00
oobabooga 6ce4b353c4 Fix the GPT-OSS template 2025-08-06 07:12:39 -07:00
oobabooga 7c82d65a9d Handle GPT-OSS as a special template case 2025-08-05 18:05:09 -07:00
oobabooga fbea21a1f1 Only use enable_thinking if the template supports it 2025-08-05 17:33:27 -07:00
oobabooga bfbbfc2361 Ignore add_generation_prompt in GPT-OSS 2025-08-05 17:33:01 -07:00
oobabooga 20adc3c967 Start over new template handling (to avoid overcomplicating) 2025-08-05 16:58:45 -07:00
oobabooga 80f6abb07e Begin fixing 'Continue' with GPT-OSS 2025-08-05 16:01:19 -07:00
oobabooga e5b8d4d072 Fix a typo 2025-08-05 15:52:56 -07:00
oobabooga 701048cf33 Try to avoid breaking jinja2 parsing for older models 2025-08-05 15:51:24 -07:00
oobabooga 7d98ca6195 Make web search functional with thinking models 2025-08-05 15:44:33 -07:00
oobabooga 0e42575c57 Fix thinking block parsing for GPT-OSS under llama.cpp 2025-08-05 15:36:20 -07:00
oobabooga 498778b8ac Add a new 'Reasoning effort' UI element 2025-08-05 15:19:11 -07:00
oobabooga 6bb8212731 Fix thinking block rendering for GPT-OSS 2025-08-05 15:06:22 -07:00
oobabooga 42e3a7a5ae Update llama.cpp 2025-08-05 14:56:12 -07:00
oobabooga 5c5a4dfc14 Fix impersonate 2025-08-05 13:04:10 -07:00
oobabooga ecd16d6bf9 Automatically set skip_special_tokens to False for channel-based templates 2025-08-05 12:57:49 -07:00
oobabooga 178c3e75cc Handle templates with channels separately 2025-08-05 12:52:17 -07:00
oobabooga 9f28f53cfc Better parsing of the gpt-oss template 2025-08-05 11:56:00 -07:00
oobabooga 3b28dc1821 Don't pass torch_dtype to transformers loader, let it be autodetected 2025-08-05 11:35:53 -07:00
oobabooga 3039aeffeb Fix parsing the gpt-oss-20b template 2025-08-05 11:35:17 -07:00
oobabooga 5989043537 Transformers: Support standalone .jinja chat templates (for GPT-OSS) 2025-08-05 11:22:18 -07:00
oobabooga 02a3420a50 Bump transformers to 4.55 (adds gpt-oss support) 2025-08-05 10:09:30 -07:00
oobabooga 74230f559a Bump transformers to 4.54 2025-08-01 11:03:15 -07:00
oobabooga f08bb9a201 Handle edge case in chat history loading (closes #7155) 2025-07-24 10:34:59 -07:00
oobabooga d746484521 Handle both int and str types in grammar char processing 2025-07-23 11:52:51 -07:00
oobabooga 0c667de7a7 UI: Add a None option for the speculative decoding model (closes #7145) 2025-07-19 12:14:41 -07:00
oobabooga ccf5e3e3a7 Update exllamav3 2025-07-19 12:07:38 -07:00
oobabooga a00983b2ba Update llama.cpp 2025-07-19 12:07:20 -07:00
oobabooga 9371867238 Update exllamav2 2025-07-15 07:38:03 -07:00
oobabooga 03fb85e49a Update llama.cpp 2025-07-15 07:37:13 -07:00
oobabooga 845432b9b4 Remove the obsolete modules/relative_imports.py file 2025-07-14 21:03:18 -07:00
oobabooga 1d1b20bd77 Remove the --torch-compile option (it doesn't do anything currently) 2025-07-11 10:51:23 -07:00
oobabooga 5a8a9c22e8 Update llama.cpp 2025-07-11 09:20:27 -07:00
oobabooga 273888f218 Revert "Use eager attention by default instead of sdpa" (reverts commit bd4881c4dc) 2025-07-10 18:56:46 -07:00
oobabooga caf69d871a Revert "Standardize margins and paddings across all chat styles" (reverts commit 86cb5e0587) 2025-07-10 18:43:01 -07:00
oobabooga 188c7c8f2b Revert "CSS simplifications" (reverts commit c6c1b725e9) 2025-07-10 18:42:52 -07:00
oobabooga 635e6efd18 Ignore add_bos_token in instruct prompts, let the jinja2 template decide 2025-07-10 07:14:01 -07:00
oobabooga 0f3a88057c Don't downgrade triton-windows on CUDA 12.8 2025-07-10 05:39:04 -07:00
oobabooga e523f25b9f Downgrade triton-windows to 3.2.0.post19 (https://github.com/oobabooga/text-generation-webui/issues/7107#issuecomment-3057250374) 2025-07-10 05:35:57 -07:00
oobabooga a7a3a0c700 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-07-09 21:07:42 -07:00
oobabooga 21e0e9f32b Add the triton-windows requirement on Windows to make transformers functional 2025-07-09 21:05:17 -07:00
dependabot[bot] d1f4622a96 Update peft requirement from ==0.15.* to ==0.16.* in /requirements/full (#7127) 2025-07-10 00:15:50 -03:00
oobabooga e015355e4a Update README 2025-07-09 20:03:53 -07:00
oobabooga bd4881c4dc Use eager attention by default instead of sdpa 2025-07-09 19:57:37 -07:00