Author | Commit | Message | Date
oobabooga | fc23345c6d | Send the default input to the notebook textbox when switching 2 columns to 1 (instead of the output) | 2025-06-17 15:03:14 -07:00
oobabooga | aa44e542cb | Revert "Safer usage of mkdir across the project" (reverts 0d1597616f) | 2025-06-17 07:11:59 -07:00
oobabooga | 0d1597616f | Safer usage of mkdir across the project | 2025-06-17 07:09:33 -07:00
oobabooga | 66e991841a | Fix the character pfp not appearing when switching from instruct to chat modes | 2025-06-16 18:45:44 -07:00
oobabooga | be3d371290 | Close the big profile picture when switching to instruct mode | 2025-06-16 18:42:17 -07:00
oobabooga | 26eda537f0 | Add auto-save for notebook textbox while typing | 2025-06-16 17:48:23 -07:00
oobabooga | 88c0204357 | Disable start_with when generating the websearch query | 2025-06-16 14:53:05 -07:00
oobabooga | faae4dc1b0 | Autosave generated text in the Notebook tab (#7079) | 2025-06-16 17:36:05 -03:00
oobabooga | de24b3bb31 | Merge the Default and Notebook tabs into a single Notebook tab (#7078) | 2025-06-16 13:19:29 -03:00
oobabooga | cac225b589 | Small style improvements | 2025-06-16 07:26:39 -07:00
oobabooga | 7ba3d4425f | Remove the 'Send to negative prompt' button | 2025-06-16 07:23:09 -07:00
oobabooga | 34bf93ef47 | Move 'Custom system message' to the Parameters tab | 2025-06-16 07:22:14 -07:00
oobabooga | c9c3b716fb | Move character settings to a new 'Character' main tab | 2025-06-16 07:21:25 -07:00
oobabooga | f77f1504f5 | Improve the style of the Character and User tabs | 2025-06-16 06:12:37 -07:00
oobabooga | bc2b0f54e9 | Only save extensions settings on manual save | 2025-06-15 15:53:16 -07:00
oobabooga | 609c3ac893 | Optimize the end of generation with llama.cpp | 2025-06-15 08:03:27 -07:00
oobabooga | db7d717df7 | Remove images and links from websearch results (this reduces noise a lot) | 2025-06-14 20:00:25 -07:00
oobabooga | e263dbf852 | Improve user input truncation | 2025-06-14 19:43:51 -07:00
oobabooga | 09606a38d3 | Truncate web search results to at most 8192 tokens | 2025-06-14 19:37:32 -07:00
oobabooga | 8e9c0287aa | UI: Fix edge case where gpu-layers slider maximum is incorrectly limited | 2025-06-14 10:12:11 -07:00
oobabooga | d2da40b0e4 | Remember the last selected chat for each mode/character | 2025-06-14 08:25:00 -07:00
oobabooga | 879fa3d8c4 | Improve the wpp style & simplify the code | 2025-06-14 07:14:22 -07:00
oobabooga | 9a2353f97b | Better log message when the user input gets truncated | 2025-06-13 05:44:02 -07:00
Miriam | f4f621b215 | ensure estimated vram is updated when switching between different models (#7071) | 2025-06-13 02:56:33 -03:00
oobabooga | f337767f36 | Add error handling for non-llama.cpp models in portable mode | 2025-06-12 22:17:39 -07:00
oobabooga | 2dee3a66ff | Add an option to include/exclude attachments from previous messages in the chat prompt | 2025-06-12 21:37:18 -07:00
oobabooga | 004fd8316c | Minor changes | 2025-06-11 07:49:51 -07:00
oobabooga | 570d5b8936 | Only save extensions on manual save | 2025-06-11 07:39:49 -07:00
oobabooga | 27140f3563 | Revert "Don't save active extensions through the UI" (reverts df98f4b331) | 2025-06-11 07:25:27 -07:00
LawnMauer | bc921c66e5 | Load js and css sources in UTF-8 (#7059) | 2025-06-10 22:16:50 -03:00
oobabooga | 75da90190f | Fix character dropdown sometimes disappearing in the Parameters tab | 2025-06-10 17:34:54 -07:00
oobabooga | 1c1fd3be46 | Remove some log messages | 2025-06-10 14:29:28 -07:00
oobabooga | 3f9eb3aad1 | Fix the preset dropdown when the default preset file is not present | 2025-06-10 14:22:37 -07:00
oobabooga | 18bd78f1f0 | Make the llama.cpp prompt processing messages shorter | 2025-06-10 14:03:25 -07:00
oobabooga | 889153952f | Lint | 2025-06-10 09:02:52 -07:00
oobabooga | c92eba0b0a | Reorganize the Parameters tab (left: preset parameters, right: everything else) | 2025-06-09 22:05:20 -07:00
oobabooga | efd9c9707b | Fix random seeds being saved to settings.yaml | 2025-06-09 20:57:25 -07:00
oobabooga | df98f4b331 | Don't save active extensions through the UI (prevents command-line activated extensions from becoming permanently active due to autosave) | 2025-06-09 20:28:16 -07:00
Mykeehu | ec73121020 | Fix continue/start reply with when using translation extensions (#6944) (co-authored by oobabooga) | 2025-06-10 00:17:05 -03:00
Miriam | 1443612e72 | check .attention.head_count if .attention.head_count_kv doesn't exist (#7048) | 2025-06-09 23:22:01 -03:00
oobabooga | 263b5d5557 | Use html2text to extract the text of web searches without losing formatting | 2025-06-09 17:55:26 -07:00
oobabooga | f5a5d0c0cb | Add the URL of web attachments to the prompt | 2025-06-09 17:32:25 -07:00
oobabooga | eefbf96f6a | Don't save truncation_length to user_data/settings.yaml | 2025-06-08 22:14:56 -07:00
oobabooga | f9a007c6a8 | Properly filter out failed web search downloads from attachments | 2025-06-08 19:25:23 -07:00
oobabooga | f3388c2ab4 | Fix selecting next chat when deleting with active search | 2025-06-08 18:53:04 -07:00
oobabooga | 4a369e070a | Add buttons for easily deleting past chats | 2025-06-08 18:47:48 -07:00
oobabooga | 0b8d2d65a2 | Minor style improvement | 2025-06-08 18:11:27 -07:00
oobabooga | f81b1540ca | Small style improvements | 2025-06-08 15:19:25 -07:00
oobabooga | eb0ab9db1d | Fix light/dark theme persistence across page reloads | 2025-06-08 15:04:05 -07:00
oobabooga | 1f1435997a | Don't show the new 'Restore character' button in the Chat tab | 2025-06-08 09:37:54 -07:00
oobabooga | 84f66484c5 | Make it optional to convert long pasted content into an attachment | 2025-06-08 09:31:38 -07:00
oobabooga | 42e7864d62 | Reorganize the Session tab | 2025-06-08 09:21:23 -07:00
oobabooga | af6bb7513a | Add back the "Save UI defaults" button (useful for saving extensions settings) | 2025-06-08 09:09:36 -07:00
oobabooga | 1bdf11b511 | Use the Qwen3 - Thinking preset by default | 2025-06-07 22:23:09 -07:00
oobabooga | fe955cac1f | Small UI changes | 2025-06-07 22:15:19 -07:00
oobabooga | caf9fca5f3 | Avoid some code repetition | 2025-06-07 22:11:35 -07:00
oobabooga | 3650a6fd1f | Small UI changes | 2025-06-07 22:02:34 -07:00
oobabooga | 6436bf1920 | More UI persistence: presets and characters (#7051) | 2025-06-08 01:58:02 -03:00
oobabooga | 35ed55d18f | UI persistence (#7050) | 2025-06-07 22:46:52 -03:00
oobabooga | 2d263f227d | Fix the chat input reappearing when the page is reloaded | 2025-06-06 22:38:20 -07:00
oobabooga | 379dd01ca7 | Filter out failed web search downloads from attachments | 2025-06-06 22:32:07 -07:00
oobabooga | f8f23b5489 | Simplify the llama.cpp stderr filter code | 2025-06-06 22:25:13 -07:00
oobabooga | 45f823ddf6 | Print \n after the llama.cpp progress bar reaches 1.0 | 2025-06-06 22:23:34 -07:00
oobabooga | d47c8eb956 | Remove quotes from LLM-generated websearch query (closes #7045; fix by @Quiet-Joker) | 2025-06-05 06:57:59 -07:00
oobabooga | 93b3752cdf | Revert "Remove the "Is typing..." yield by default" (reverts b30a73016d) | 2025-06-04 09:40:30 -07:00
oobabooga | b30a73016d | Remove the "Is typing..." yield by default | 2025-06-02 07:49:22 -07:00
oobabooga | bb409c926e | Update only the last message during streaming + add back dynamic UI update speed (#7038) | 2025-06-02 09:50:17 -03:00
oobabooga | 2db7745cbd | Show llama.cpp prompt processing on one line instead of many lines | 2025-06-01 22:12:24 -07:00
oobabooga | ad6d0218ae | Fix after 219f0a7731 | 2025-06-01 19:27:14 -07:00
oobabooga | 92adceb7b5 | UI: Fix the model downloader progress bar | 2025-06-01 19:22:21 -07:00
oobabooga | 9e80193008 | Add the model name to each message's metadata | 2025-05-31 22:41:35 -07:00
oobabooga | 98a7508a99 | UI: Move 'Show controls' inside the hover menu | 2025-05-31 22:22:13 -07:00
oobabooga | f8d220c1e6 | Add a tooltip to the web search checkbox | 2025-05-31 21:22:36 -07:00
oobabooga | 1d88456659 | Add support for .docx attachments | 2025-05-31 20:15:07 -07:00
oobabooga | 219f0a7731 | Fix exllamav3_hf models failing to unload (closes #7031) | 2025-05-30 12:05:49 -07:00
oobabooga | 298d4719c6 | Multiple small style improvements | 2025-05-30 11:32:24 -07:00
oobabooga | 7c29879e79 | Fix 'Start reply with' (closes #7033) | 2025-05-30 11:17:47 -07:00
oobabooga | acbcc12e7b | Clean up | 2025-05-29 14:11:21 -07:00
oobabooga | dce02732a4 | Fix timestamp issues when editing/swiping messages | 2025-05-29 14:08:48 -07:00
oobabooga | f59998d268 | Don't limit the number of prompt characters printed with --verbose | 2025-05-29 13:08:48 -07:00
oobabooga | 724147ffab | Better detect when no model is available | 2025-05-29 10:49:29 -07:00
oobabooga | faa5c82c64 | Fix message version count not updating during regeneration streaming | 2025-05-29 09:16:26 -07:00
Underscore | 63234b9b6f | UI: Fix impersonate (#7025) | 2025-05-29 08:22:03 -03:00
oobabooga | 75d6cfd14d | Download fetched web search results in parallel | 2025-05-28 20:36:24 -07:00
oobabooga | 7080a02252 | Reduce the timeout for downloading web pages | 2025-05-28 18:15:21 -07:00
oobabooga | 3eb0b77427 | Improve the web search query generation | 2025-05-28 18:14:51 -07:00
oobabooga | 27641ac182 | UI: Make message editing work the same for user and assistant messages | 2025-05-28 17:23:46 -07:00
oobabooga | 6c3590ba9a | Make web search attachments clickable | 2025-05-28 05:28:15 -07:00
oobabooga | 077bbc6b10 | Add web search support (#7023) | 2025-05-28 04:27:28 -03:00
oobabooga | 1b0e2d8750 | UI: Add a token counter to the chat tab (counts input + history) | 2025-05-27 22:36:24 -07:00
oobabooga | f6ca0ee072 | Fix regenerate sometimes not creating a new message version | 2025-05-27 21:20:51 -07:00
Underscore | 5028480eba | UI: Add footer buttons for editing messages (#7019) (co-authored by oobabooga) | 2025-05-28 00:55:27 -03:00
Underscore | 355b5f6c8b | UI: Add message version navigation (#6947) (co-authored by oobabooga) | 2025-05-27 22:54:18 -03:00
Underscore | 8531100109 | Fix textbox text usage in methods (#7009) | 2025-05-26 22:40:09 -03:00
oobabooga | bae1aa34aa | Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) | 2025-05-25 17:19:26 -07:00
oobabooga | 8620d6ffe7 | Make it possible to upload multiple text files/pdfs at once | 2025-05-20 21:34:07 -07:00
oobabooga | cc8a4fdcb1 | Minor improvement to attachments prompt format | 2025-05-20 21:31:18 -07:00
oobabooga | 409a48d6bd | Add attachments support (text files, PDF documents) (#7005) | 2025-05-21 00:36:20 -03:00
oobabooga | 5d00574a56 | Minor UI fixes | 2025-05-20 16:20:49 -07:00
oobabooga | 616ea6966d | Store previous reply versions on regenerate (#7004) | 2025-05-20 12:51:28 -03:00
Daniel Dengler | c25a381540 | Add a "Branch here" footer button to chat messages (#6967) | 2025-05-20 11:07:40 -03:00
oobabooga | 8e10f9894a | Add a metadata field to the chat history & add date/time to chat messages (#7003) | 2025-05-20 10:48:46 -03:00
oobabooga | 9ec46b8c44 | Remove the HQQ loader (HQQ models can be loaded through Transformers) | 2025-05-19 09:23:24 -07:00
oobabooga | 126b3a768f | Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now; reverts 8137eb8ef4) | 2025-05-18 12:38:36 -07:00
oobabooga | 2faaf18f1f | Add back the "Common values" to the ctx-size slider | 2025-05-18 09:06:20 -07:00
oobabooga | f1ec6c8662 | Minor label changes | 2025-05-18 09:04:51 -07:00
oobabooga | 61276f6a37 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-05-17 07:22:51 -07:00
oobabooga | 4800d1d522 | More robust VRAM calculation | 2025-05-17 07:20:38 -07:00
mamei16 | 052c82b664 | Fix KeyError: 'gpu_layers' when loading existing model settings (#6991) | 2025-05-17 11:19:13 -03:00
oobabooga | 0f77ff9670 | UI: Use total VRAM (not free) for layers calculation when a model is loaded | 2025-05-16 19:19:22 -07:00
oobabooga | c0e295dd1d | Remove the 'None' option from the model menu | 2025-05-16 17:53:20 -07:00
oobabooga | e3bba510d4 | UI: Only add a blank space to streaming messages in instruct mode | 2025-05-16 17:49:17 -07:00
oobabooga | 71fa046c17 | Minor changes after 1c549d176b | 2025-05-16 17:38:08 -07:00
oobabooga | d99fb0a22a | Add backward compatibility with saved n_gpu_layers values | 2025-05-16 17:29:18 -07:00
oobabooga | 1c549d176b | Fix GPU layers slider: honor saved settings and show true maximum | 2025-05-16 17:26:13 -07:00
oobabooga | e4d3f4449d | API: Fix a regression | 2025-05-16 13:02:27 -07:00
oobabooga | adb975a380 | Prevent fractional gpu-layers in the UI | 2025-05-16 12:52:43 -07:00
oobabooga | fc483650b5 | Set the maximum gpu_layers value automatically when the model is loaded with --model | 2025-05-16 11:58:17 -07:00
oobabooga | 38c50087fe | Prevent a crash on systems without an NVIDIA GPU | 2025-05-16 11:55:30 -07:00
oobabooga | 253e85a519 | Only compute VRAM/GPU layers for llama.cpp models | 2025-05-16 10:02:30 -07:00
oobabooga | 9ec9b1bf83 | Auto-adjust GPU layers after model unload to utilize freed VRAM | 2025-05-16 09:56:23 -07:00
oobabooga | ee7b3028ac | Always cache GGUF metadata calls | 2025-05-16 09:12:36 -07:00
oobabooga | 4925c307cf | Auto-adjust GPU layers on context size and cache type changes + many fixes | 2025-05-16 09:07:38 -07:00
oobabooga | 93e1850a2c | Only show the VRAM info for llama.cpp | 2025-05-15 21:42:15 -07:00
oobabooga | cbf4daf1c8 | Hide the LoRA menu in portable mode | 2025-05-15 21:21:54 -07:00
oobabooga | fd61297933 | Lint | 2025-05-15 21:19:19 -07:00
oobabooga | 5534d01da0 | Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) | 2025-05-16 00:07:37 -03:00
oobabooga | c4a715fd1e | UI: Move the LoRA menu under "Other options" | 2025-05-13 20:14:09 -07:00
oobabooga | 035cd3e2a9 | UI: Hide the extension install menu in portable builds | 2025-05-13 20:09:22 -07:00
oobabooga | 2826c60044 | Use logger for "Output generated in ..." messages | 2025-05-13 14:45:46 -07:00
oobabooga | 3fa1a899ae | UI: Fix gpu-layers being ignored (closes #6973) | 2025-05-13 12:07:59 -07:00
oobabooga | 62c774bf24 | Revert "New attempt" (reverts e7ac06c169) | 2025-05-13 06:42:25 -07:00
oobabooga | e7ac06c169 | New attempt | 2025-05-10 19:20:04 -07:00
oobabooga | 47d4758509 | Fix #6970 | 2025-05-10 17:46:00 -07:00
oobabooga | 4920981b14 | UI: Remove the typing cursor | 2025-05-09 20:35:38 -07:00
oobabooga | 8984e95c67 | UI: More friendly message when no model is loaded | 2025-05-09 07:21:05 -07:00
oobabooga | 512bc2d0e0 | UI: Update some labels | 2025-05-08 23:43:55 -07:00
oobabooga | f8ef6e09af | UI: Make ctx-size a slider | 2025-05-08 18:19:04 -07:00
oobabooga | 9ea2a69210 | llama.cpp: Add --no-webui to the llama-server command | 2025-05-08 10:41:25 -07:00
oobabooga | 1c7209a725 | Save the chat history periodically during streaming | 2025-05-08 09:46:43 -07:00
Jonas | fa960496d5 | Tools support for OpenAI compatible API (#6827) | 2025-05-08 12:30:27 -03:00
oobabooga | a2ab42d390 | UI: Remove the exllamav2 info message | 2025-05-08 08:00:38 -07:00
oobabooga | 348d4860c2 | UI: Create a "Main options" section in the Model tab | 2025-05-08 07:58:59 -07:00
oobabooga | d2bae7694c | UI: Change the ctx-size description | 2025-05-08 07:26:23 -07:00
oobabooga | b28fa86db6 | Default --gpu-layers to 256 | 2025-05-06 17:51:55 -07:00
Downtown-Case | 5ef564a22e | Fix model config loading in shared.py for Python 3.13 (#6961) | 2025-05-06 17:03:33 -03:00
oobabooga | c4f36db0d8 | llama.cpp: remove tfs (it doesn't get used) | 2025-05-06 08:41:13 -07:00
oobabooga | 05115e42ee | Set top_n_sigma before temperature by default | 2025-05-06 08:27:21 -07:00
oobabooga | 1927afe894 | Fix top_n_sigma not showing for llama.cpp | 2025-05-06 08:18:49 -07:00
oobabooga | d1c0154d66 | llama.cpp: Add top_n_sigma, fix typical_p in sampler priority | 2025-05-06 06:38:39 -07:00
mamei16 | 8137eb8ef4 | Dynamic Chat Message UI Update Speed (#6952) | 2025-05-05 18:05:23 -03:00
oobabooga | 475e012ee8 | UI: Improve the light theme colors | 2025-05-05 06:16:29 -07:00
oobabooga | b817bb33fd | Minor fix after df7bb0db1f | 2025-05-05 05:00:20 -07:00
oobabooga | f3da45f65d | ExLlamaV3_HF: Change max_chunk_size to 256 | 2025-05-04 20:37:15 -07:00
oobabooga | df7bb0db1f | Rename --n-gpu-layers to --gpu-layers | 2025-05-04 20:03:55 -07:00
oobabooga | d0211afb3c | Save the chat history right after sending a message | 2025-05-04 18:52:01 -07:00
oobabooga | 690d693913 | UI: Add padding to only show the last message/reply after sending a message (to avoid scrolling) | 2025-05-04 18:13:29 -07:00
oobabooga | 7853fb1c8d | Optimize the Chat tab (#6948) | 2025-05-04 18:58:37 -03:00
oobabooga | b7a5c7db8d | llama.cpp: Handle short arguments in --extra-flags | 2025-05-04 07:14:42 -07:00
oobabooga | 4c2e3b168b | llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) | 2025-05-03 06:51:20 -07:00
oobabooga | ea60f14674 | UI: Show the list of files if the user tries to download a GGUF repository | 2025-05-03 06:06:50 -07:00
oobabooga | b71ef50e9d | UI: Add a min-height to prevent constant scrolling during chat streaming | 2025-05-02 23:45:58 -07:00
oobabooga | d08acb4af9 | UI: Rename enable_thinking -> Enable thinking | 2025-05-02 20:50:52 -07:00
oobabooga | 4cea720da8 | UI: Remove the "Autoload the model" feature | 2025-05-02 16:38:28 -07:00
oobabooga | 905afced1c | Add a --portable flag to hide things in portable mode | 2025-05-02 16:34:29 -07:00
oobabooga | 3f26b0408b | Fix after 9e3867dc83 | 2025-05-02 16:17:22 -07:00
oobabooga | 9e3867dc83 | llama.cpp: Fix manual random seeds | 2025-05-02 09:36:15 -07:00
oobabooga | b950a0c6db | Lint | 2025-04-30 20:02:10 -07:00
oobabooga | 307d13b540 | UI: Minor label change | 2025-04-30 18:58:14 -07:00
oobabooga | 55283bb8f1 | Fix CFG with ExLlamaV2_HF (closes #6937) | 2025-04-30 18:43:45 -07:00
oobabooga | a6c3ec2299 | llama.cpp: Explicitly send cache_prompt = True | 2025-04-30 15:24:07 -07:00
oobabooga | 195a45c6e1 | UI: Make thinking blocks closed by default | 2025-04-30 15:12:46 -07:00
oobabooga | cd5c32dc19 | UI: Fix max_updates_second not working | 2025-04-30 14:54:05 -07:00
oobabooga | b46ca01340 | UI: Set max_updates_second to 12 by default (when the tokens/second are at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck) | 2025-04-30 14:53:15 -07:00
oobabooga | 771d3d8ed6 | Fix getting the llama.cpp logprobs for Qwen3-30B-A3B | 2025-04-30 06:48:32 -07:00
oobabooga | 1dd4aedbe1 | Fix the streaming_llm UI checkbox not being interactive | 2025-04-29 05:28:46 -07:00
oobabooga | d10bded7f8 | UI: Add an enable_thinking option to enable/disable Qwen3 thinking | 2025-04-28 22:37:01 -07:00
oobabooga | 1ee0acc852 | llama.cpp: Make --verbose print the llama-server command | 2025-04-28 15:56:25 -07:00
oobabooga | 15a29e99f8 | Lint | 2025-04-27 21:41:34 -07:00
oobabooga | be13f5199b | UI: Add an info message about how to use Speculative Decoding | 2025-04-27 21:40:38 -07:00
oobabooga | c6c2855c80 | llama.cpp: Remove the timeout while loading models (closes #6907) | 2025-04-27 21:22:21 -07:00
oobabooga | ee0592473c | Fix ExLlamaV3_HF leaking memory (attempt) | 2025-04-27 21:04:02 -07:00
oobabooga | 70952553c7 | Lint | 2025-04-26 19:29:08 -07:00
oobabooga | 7b80acd524 | Fix parsing --extra-flags | 2025-04-26 18:40:03 -07:00
oobabooga | 943451284f | Fix the Notebook tab not loading its default prompt | 2025-04-26 18:25:06 -07:00
oobabooga | 511eb6aa94 | Fix saving settings to settings.yaml | 2025-04-26 18:20:00 -07:00
oobabooga | 8b83e6f843 | Prevent Gradio from saying 'Thank you for being a Gradio user!' | 2025-04-26 18:14:57 -07:00
oobabooga | 4a32e1f80c | UI: show draft_max for ExLlamaV2 | 2025-04-26 18:01:44 -07:00
oobabooga | 0fe3b033d0 | Fix parsing of --n_ctx and --max_seq_len (2nd attempt) | 2025-04-26 17:52:21 -07:00
oobabooga | c4afc0421d | Fix parsing of --n_ctx and --max_seq_len | 2025-04-26 17:43:53 -07:00
oobabooga | 234aba1c50 | llama.cpp: Simplify the prompt processing progress indicator (the progress bar was unreliable) | 2025-04-26 17:33:47 -07:00
oobabooga | 4ff91b6588 | Better default settings for Speculative Decoding | 2025-04-26 17:24:40 -07:00
oobabooga | bc55feaf3e | Improve host header validation in local mode | 2025-04-26 15:42:17 -07:00
oobabooga | 3a207e7a57 | Improve the --help formatting a bit | 2025-04-26 07:31:04 -07:00
oobabooga | 6acb0e1bee | Change a UI description | 2025-04-26 05:13:08 -07:00
oobabooga | cbd4d967cc | Update a --help message | 2025-04-26 05:09:52 -07:00
oobabooga | 763a7011c0 | Remove an ancient/obsolete migration check | 2025-04-26 04:59:05 -07:00
oobabooga | d9de14d1f7 | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00
oobabooga | d4017fbb6d | ExLlamaV3: Add kv cache quantization (#6903) | 2025-04-25 21:32:00 -03:00
oobabooga | d4b1e31c49 | Use --ctx-size to specify the context size for all loaders (old flags are still recognized as alternatives) | 2025-04-25 16:59:03 -07:00
oobabooga | faababc4ea | llama.cpp: Add a prompt processing progress bar | 2025-04-25 16:42:30 -07:00
oobabooga | 877cf44c08 | llama.cpp: Add StreamingLLM (--streaming-llm) | 2025-04-25 16:21:41 -07:00
oobabooga | d35818f4e1 | UI: Add a collapsible thinking block to messages with <think> steps (#6902) | 2025-04-25 18:02:02 -03:00
oobabooga | 98f4c694b9 | llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server | 2025-04-25 07:32:51 -07:00
oobabooga | 5861013e68 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-24 20:36:20 -07:00
oobabooga | a90df27ff5 | UI: Add a greeting when the chat history is empty | 2025-04-24 20:33:40 -07:00
oobabooga | ae1fe87365 | ExLlamaV2: Add speculative decoding (#6899) | 2025-04-25 00:11:04 -03:00
Matthew Jenkins | 8f2493cc60 | Prevent llamacpp defaults from locking up consumer hardware (#6870) | 2025-04-24 23:38:57 -03:00
oobabooga | 93fd4ad25d | llama.cpp: Document the --device-draft syntax | 2025-04-24 09:20:11 -07:00
oobabooga | f1b64df8dd | EXL2: add another torch.cuda.synchronize() call to prevent errors | 2025-04-24 09:03:49 -07:00
oobabooga | c71a2af5ab | Handle CMD_FLAGS.txt in the main code (closes #6896) | 2025-04-24 08:21:06 -07:00
oobabooga | bfbde73409 | Make 'instruct' the default chat mode | 2025-04-24 07:08:49 -07:00
oobabooga | e99c20bcb0 | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00
oobabooga | 9424ba17c8 | UI: show only part 00001 of multipart GGUF models in the model menu | 2025-04-22 19:56:42 -07:00
oobabooga | 25cf3600aa | Lint | 2025-04-22 08:04:02 -07:00
oobabooga | 39cbb5fee0 | Lint | 2025-04-22 08:03:25 -07:00
oobabooga | 008c6dd682 | Lint | 2025-04-22 08:02:37 -07:00
oobabooga | 78aeabca89 | Fix the transformers loader | 2025-04-21 18:33:14 -07:00
oobabooga | 8320190184 | Fix the exllamav2_HF and exllamav3_HF loaders | 2025-04-21 18:32:23 -07:00
oobabooga | 15989c2ed8 | Make llama.cpp the default loader | 2025-04-21 16:36:35 -07:00
oobabooga | 86c3ed3218 | Small change to the unload_model() function | 2025-04-20 20:00:56 -07:00
oobabooga | fe8e80e04a | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-20 19:09:27 -07:00
oobabooga | ff1c00bdd9 | llama.cpp: set the random seed manually | 2025-04-20 19:08:44 -07:00
Matthew Jenkins | d3e7c655e5 | Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) | 2025-04-20 23:06:24 -03:00
oobabooga | e243424ba1 | Fix an import | 2025-04-20 17:51:28 -07:00
oobabooga | 8cfd7f976b | Revert "Remove the old --model-menu flag" (reverts 109de34e3b) | 2025-04-20 13:35:42 -07:00
oobabooga | b3bf7a885d | Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 | 2025-04-20 11:32:48 -07:00
oobabooga | ae02ffc605 | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00
oobabooga | 6ba0164c70 | Lint | 2025-04-19 17:45:21 -07:00
oobabooga | 5ab069786b | llama.cpp: add back the two encode calls (they are harmless now) | 2025-04-19 17:38:36 -07:00
oobabooga | b9da5c7e3a | Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows | 2025-04-19 17:36:04 -07:00
oobabooga | 9c9df2063f | llama.cpp: fix unicode decoding (closes #6856) | 2025-04-19 16:38:15 -07:00
oobabooga | ba976d1390 | llama.cpp: avoid two 'encode' calls | 2025-04-19 16:35:01 -07:00
oobabooga | ed42154c78 | Revert "llama.cpp: close the connection immediately on 'Stop'" (reverts 5fdebc554b) | 2025-04-19 05:32:36 -07:00
oobabooga | 5fdebc554b | llama.cpp: close the connection immediately on 'Stop' | 2025-04-19 04:59:24 -07:00
oobabooga | 6589ebeca8 | Revert "llama.cpp: new optimization attempt" (reverts e2e73ed22f) | 2025-04-18 21:16:21 -07:00
oobabooga | e2e73ed22f | llama.cpp: new optimization attempt | 2025-04-18 21:05:08 -07:00
oobabooga | e2e90af6cd | llama.cpp: don't include --rope-freq-base in the launch command if null | 2025-04-18 20:51:18 -07:00
oobabooga | 9f07a1f5d7 | llama.cpp: new attempt at optimizing the llama-server connection | 2025-04-18 19:30:53 -07:00
oobabooga | f727b4a2cc | llama.cpp: close the connection properly when generation is cancelled | 2025-04-18 19:01:39 -07:00
oobabooga | b3342b8dd8 | llama.cpp: optimize the llama-server connection | 2025-04-18 18:46:36 -07:00
oobabooga | 2002590536 | Revert "Attempt at making the llama-server streaming more efficient." (reverts 5ad080ff25) | 2025-04-18 18:13:54 -07:00
oobabooga | 71ae05e0a4 | llama.cpp: Fix the sampler priority handling | 2025-04-18 18:06:36 -07:00
oobabooga | 5ad080ff25 | Attempt at making the llama-server streaming more efficient. | 2025-04-18 18:04:49 -07:00
oobabooga | 4fabd729c9 | Fix the API without streaming or without 'sampler_priority' (closes #6851) | 2025-04-18 17:25:22 -07:00
oobabooga | 5135523429 | Fix the new llama.cpp loader failing to unload models | 2025-04-18 17:10:26 -07:00
oobabooga | caa6afc88b | Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True | 2025-04-18 09:57:57 -07:00
oobabooga | d00d713ace | Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader | 2025-04-18 08:14:15 -07:00
oobabooga | c1cc65e82e | Lint | 2025-04-18 08:06:51 -07:00
oobabooga | d68f0fbdf7 | Remove obsolete references to llamacpp_HF | 2025-04-18 07:46:04 -07:00