0e3def449a | oobabooga | 2025-08-11 15:17:25 -07:00 | llama.cpp: --swa-full to llama-server when streaming-llm is checked
0e88a621fd | oobabooga | 2025-08-11 15:16:03 -07:00 | UI: Better organize the right sidebar
1e3c4e8bdb | oobabooga | 2025-08-11 14:40:59 -07:00 | Update llama.cpp
765af1ba17 | oobabooga | 2025-08-11 12:39:48 -07:00 | API: Improve a validation
a78ca6ffcd | oobabooga | 2025-08-11 12:33:38 -07:00 | Remove a comment
dfd9c60d80 | oobabooga | 2025-08-11 12:33:27 -07:00 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
999471256c | oobabooga | 2025-08-11 12:32:17 -07:00 | Lint
1ba1211ca0 | Mykeehu | 2025-08-11 16:13:56 -03:00 | Fix edit window and buttons in Messenger theme (#7100)
b10d525bf7 | oobabooga | 2025-08-11 12:05:22 -07:00 | UI: Update a tooltip
b62c8845f3 | oobabooga | 2025-08-11 12:01:59 -07:00 | mtmd: Fix /chat/completions for llama.cpp
38c0b4a1ad | oobabooga | 2025-08-11 07:39:53 -07:00 | Default ctx-size to 8192 when not found in the metadata
52d1cbbbe9 | oobabooga | 2025-08-11 07:38:39 -07:00 | Fix an import
1cb800d392 | oobabooga | 2025-08-11 07:37:10 -07:00 | Docs: small change
4809ddfeb8 | oobabooga | 2025-08-11 07:35:22 -07:00 | Exllamav3: small sampler fixes
4d8dbbab64 | oobabooga | 2025-08-11 07:26:11 -07:00 | API: Fix sampler_priority usage for ExLlamaV3
c5340533c0 | oobabooga | 2025-08-10 20:39:04 -07:00 | mtmd: Add another API example
9ec310d858 | oobabooga | 2025-08-10 07:54:21 -07:00 | UI: Fix the color of italic text
cc964ee579 | oobabooga | 2025-08-10 07:44:38 -07:00 | mtmd: Increase the size of the UI image preview
6fbf162d71 | oobabooga | 2025-08-10 07:21:55 -07:00 | Default max_tokens to 512 in the API instead of 16
1fb5807859 | oobabooga | 2025-08-10 06:54:44 -07:00 | mtmd: Fix API text completion when no images are sent
0ea62d88f6 | oobabooga | 2025-08-09 21:47:02 -07:00 | mtmd: Fix "continue" when an image is present
4663b1a56e | oobabooga | 2025-08-09 21:45:50 -07:00 | Update docs
2f90ac9880 | oobabooga | 2025-08-09 21:41:38 -07:00 | Move the new image_utils.py file to modules/
c6b4d1e87f | oobabooga | 2025-08-09 21:34:35 -07:00 | Fix the exllamav2 loader ignoring add_bos
d86b0ec010 | oobabooga | 2025-08-10 01:27:25 -03:00 | Add multimodal support (llama.cpp) (#7027)
eb16f64017 | oobabooga | 2025-08-09 17:12:16 -07:00 | Update llama.cpp
a289a92b94 | oobabooga | 2025-08-09 17:10:58 -07:00 | Fix exllamav3 token count
d489eb589a | oobabooga | 2025-08-09 14:11:31 -07:00 | Attempt at fixing new exllamav3 loader undefined behavior when switching conversations
a6d6bee88c | oobabooga | 2025-08-09 07:51:03 -07:00 | Change a comment
2fe79a93cc | oobabooga | 2025-08-09 07:50:24 -07:00 | mtmd: Handle another case after 3f5ec9644f
59c6138e98 | oobabooga | 2025-08-09 07:32:15 -07:00 | Remove a log message
f396b82a4f | oobabooga | 2025-08-09 07:31:36 -07:00 | mtmd: Better way to detect if an EXL3 model is multimodal
fa9be444fa | oobabooga | 2025-08-09 07:26:59 -07:00 | Use ExLlamav3 instead of ExLlamav3_HF by default for EXL3 models
d9db8f63a7 | oobabooga | 2025-08-09 07:25:42 -07:00 | mtmd: Simplifications
3f5ec9644f | oobabooga | 2025-08-09 07:06:07 -07:00 | mtmd: Place the image <__media__> at the top of the prompt
1168004067 | oobabooga | 2025-08-09 07:01:55 -07:00 | Minor change
9e260332cc | oobabooga | 2025-08-08 21:22:47 -07:00 | Remove some unnecessary code
544c3a7c9f | oobabooga | 2025-08-08 21:15:53 -07:00 | Polish the new exllamav3 loader
8fcadff8d3 | oobabooga | 2025-08-08 20:13:54 -07:00 | mtmd: Use the base64 attachment for the UI preview instead of the file
6e9de75727 | oobabooga | 2025-08-08 19:35:09 -07:00 | Support loading chat templates from chat_template.json files
88127f46c1 | Katehuuh | 2025-08-08 23:31:16 -03:00 | Add multimodal support (ExLlamaV3) (#7174)
b391ac8eb1 | oobabooga | 2025-08-08 18:11:45 -07:00 | Fix getting the ctx-size for EXL3/EXL2/Transformers models
f1147c9926 | oobabooga | 2025-08-06 19:32:36 -07:00 | Update llama.cpp
3e24f455c8 | oobabooga | 2025-08-06 10:18:42 -07:00 | Fix continue for GPT-OSS (hopefully the final fix)
0c1403f2c7 | oobabooga | 2025-08-06 08:05:37 -07:00 | Handle GPT-OSS as a special case when continuing
6ce4b353c4 | oobabooga | 2025-08-06 07:12:39 -07:00 | Fix the GPT-OSS template
7c82d65a9d | oobabooga | 2025-08-05 18:05:09 -07:00 | Handle GPT-OSS as a special template case
fbea21a1f1 | oobabooga | 2025-08-05 17:33:27 -07:00 | Only use enable_thinking if the template supports it
bfbbfc2361 | oobabooga | 2025-08-05 17:33:01 -07:00 | Ignore add_generation_prompt in GPT-OSS
20adc3c967 | oobabooga | 2025-08-05 16:58:45 -07:00 | Start over new template handling (to avoid overcomplicating)