Commit graph

4896 commits

Author SHA1 Message Date
oobabooga d771ca4a13 Fix web search (attempt) 2025-08-14 12:05:14 -07:00
oobabooga 73a8a737b2 docs: Improve the multimodal examples slightly 2025-08-13 18:23:18 -07:00
altoiddealer 57f6e9af5a Set multimodal status during Model Loading (#7199) 2025-08-13 16:47:27 -03:00
oobabooga 725a8bcf60 Small docs change 2025-08-13 06:49:28 -07:00
oobabooga 331eab81f7 mtmd: Explain base64 inputs in the API docs 2025-08-13 06:46:10 -07:00
oobabooga bd05fb899e Update README 2025-08-12 14:19:18 -07:00
oobabooga 41b95e9ec3 Lint 2025-08-12 13:37:37 -07:00
oobabooga 2f979ce294 docs: Add a multimodal tutorial 2025-08-12 13:33:49 -07:00
oobabooga 7301452b41 UI: Minor info message change 2025-08-12 13:23:24 -07:00
oobabooga 8d7b88106a Revert "mtmd: Fail early if images are provided but the model doesn't support them (llama.cpp)" (reverts d8fcc71616) 2025-08-12 13:20:16 -07:00
oobabooga 2f6a629393 UI: Minor improvement after 0e88a621fd 2025-08-12 08:51:01 -07:00
oobabooga 2238302b49 ExLlamaV3: Add speculative decoding 2025-08-12 08:50:45 -07:00
oobabooga 0882970a94 Update llama.cpp 2025-08-12 07:00:24 -07:00
oobabooga d8fcc71616 mtmd: Fail early if images are provided but the model doesn't support them (llama.cpp) 2025-08-11 18:02:33 -07:00
oobabooga e6447cd24a mtmd: Update the llama-server request 2025-08-11 17:42:35 -07:00
oobabooga c47e6deda2 Update README 2025-08-11 16:20:20 -07:00
oobabooga 0e3def449a llama.cpp: --swa-full to llama-server when streaming-llm is checked 2025-08-11 15:17:25 -07:00
oobabooga 0e88a621fd UI: Better organize the right sidebar 2025-08-11 15:16:03 -07:00
oobabooga 1e3c4e8bdb Update llama.cpp 2025-08-11 14:40:59 -07:00
oobabooga 765af1ba17 API: Improve a validation 2025-08-11 12:39:48 -07:00
oobabooga a78ca6ffcd Remove a comment 2025-08-11 12:33:38 -07:00
oobabooga dfd9c60d80 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-08-11 12:33:27 -07:00
oobabooga 999471256c Lint 2025-08-11 12:32:17 -07:00
Mykeehu 1ba1211ca0 Fix edit window and buttons in Messenger theme (#7100) 2025-08-11 16:13:56 -03:00
oobabooga b10d525bf7 UI: Update a tooltip 2025-08-11 12:05:22 -07:00
oobabooga b62c8845f3 mtmd: Fix /chat/completions for llama.cpp 2025-08-11 12:01:59 -07:00
oobabooga 38c0b4a1ad Default ctx-size to 8192 when not found in the metadata 2025-08-11 07:39:53 -07:00
oobabooga 52d1cbbbe9 Fix an import 2025-08-11 07:38:39 -07:00
oobabooga 1cb800d392 Docs: small change 2025-08-11 07:37:10 -07:00
oobabooga 4809ddfeb8 Exllamav3: small sampler fixes 2025-08-11 07:35:22 -07:00
oobabooga 4d8dbbab64 API: Fix sampler_priority usage for ExLlamaV3 2025-08-11 07:26:11 -07:00
oobabooga c5340533c0 mtmd: Add another API example 2025-08-10 20:39:04 -07:00
oobabooga 9ec310d858 UI: Fix the color of italic text 2025-08-10 07:54:21 -07:00
oobabooga cc964ee579 mtmd: Increase the size of the UI image preview 2025-08-10 07:44:38 -07:00
oobabooga 6fbf162d71 Default max_tokens to 512 in the API instead of 16 2025-08-10 07:21:55 -07:00
oobabooga 1fb5807859 mtmd: Fix API text completion when no images are sent 2025-08-10 06:54:44 -07:00
oobabooga 0ea62d88f6 mtmd: Fix "continue" when an image is present 2025-08-09 21:47:02 -07:00
oobabooga 4663b1a56e Update docs 2025-08-09 21:45:50 -07:00
oobabooga 2f90ac9880 Move the new image_utils.py file to modules/ 2025-08-09 21:41:38 -07:00
oobabooga c6b4d1e87f Fix the exllamav2 loader ignoring add_bos 2025-08-09 21:34:35 -07:00
oobabooga d86b0ec010 Add multimodal support (llama.cpp) (#7027) 2025-08-10 01:27:25 -03:00
oobabooga eb16f64017 Update llama.cpp 2025-08-09 17:12:16 -07:00
oobabooga a289a92b94 Fix exllamav3 token count 2025-08-09 17:10:58 -07:00
oobabooga d489eb589a Attempt at fixing new exllamav3 loader undefined behavior when switching conversations 2025-08-09 14:11:31 -07:00
oobabooga a6d6bee88c Change a comment 2025-08-09 07:51:03 -07:00
oobabooga 2fe79a93cc mtmd: Handle another case after 3f5ec9644f 2025-08-09 07:50:24 -07:00
oobabooga 59c6138e98 Remove a log message 2025-08-09 07:32:15 -07:00
oobabooga f396b82a4f mtmd: Better way to detect if an EXL3 model is multimodal 2025-08-09 07:31:36 -07:00
oobabooga fa9be444fa Use ExLlamav3 instead of ExLlamav3_HF by default for EXL3 models 2025-08-09 07:26:59 -07:00
oobabooga d9db8f63a7 mtmd: Simplifications 2025-08-09 07:25:42 -07:00