Author | Commit | Message | Date
oobabooga | 84f66484c5 | Make it optional to paste long pasted content to an attachment | 2025-06-08 09:31:38 -07:00
oobabooga | 42e7864d62 | Reorganize the Session tab | 2025-06-08 09:21:23 -07:00
oobabooga | af6bb7513a | Add back the "Save UI defaults" button | 2025-06-08 09:09:36 -07:00
    It's useful for saving extension settings.
oobabooga | 1bdf11b511 | Use the Qwen3 - Thinking preset by default | 2025-06-07 22:23:09 -07:00
oobabooga | fe955cac1f | Small UI changes | 2025-06-07 22:15:19 -07:00
oobabooga | caf9fca5f3 | Avoid some code repetition | 2025-06-07 22:11:35 -07:00
oobabooga | 3650a6fd1f | Small UI changes | 2025-06-07 22:02:34 -07:00
oobabooga | 6436bf1920 | More UI persistence: presets and characters (#7051) | 2025-06-08 01:58:02 -03:00
oobabooga | 35ed55d18f | UI persistence (#7050) | 2025-06-07 22:46:52 -03:00
oobabooga | 2d263f227d | Fix the chat input reappearing when the page is reloaded | 2025-06-06 22:38:20 -07:00
oobabooga | 379dd01ca7 | Filter out failed web search downloads from attachments | 2025-06-06 22:32:07 -07:00
oobabooga | f8f23b5489 | Simplify the llama.cpp stderr filter code | 2025-06-06 22:25:13 -07:00
oobabooga | 45f823ddf6 | Print \n after the llama.cpp progress bar reaches 1.0 | 2025-06-06 22:23:34 -07:00
oobabooga | d47c8eb956 | Remove quotes from the LLM-generated web search query (closes #7045) | 2025-06-05 06:57:59 -07:00
    Fix by @Quiet-Joker
oobabooga | 93b3752cdf | Revert "Remove the "Is typing..." yield by default" | 2025-06-04 09:40:30 -07:00
    This reverts commit b30a73016d.
oobabooga | b30a73016d | Remove the "Is typing..." yield by default | 2025-06-02 07:49:22 -07:00
oobabooga | bb409c926e | Update only the last message during streaming + add back dynamic UI update speed (#7038) | 2025-06-02 09:50:17 -03:00
oobabooga | 2db7745cbd | Show llama.cpp prompt processing on one line instead of many lines | 2025-06-01 22:12:24 -07:00
oobabooga | ad6d0218ae | Fix after 219f0a7731 | 2025-06-01 19:27:14 -07:00
oobabooga | 92adceb7b5 | UI: Fix the model downloader progress bar | 2025-06-01 19:22:21 -07:00
oobabooga | 9e80193008 | Add the model name to each message's metadata | 2025-05-31 22:41:35 -07:00
oobabooga | 98a7508a99 | UI: Move 'Show controls' inside the hover menu | 2025-05-31 22:22:13 -07:00
oobabooga | f8d220c1e6 | Add a tooltip to the web search checkbox | 2025-05-31 21:22:36 -07:00
oobabooga | 1d88456659 | Add support for .docx attachments | 2025-05-31 20:15:07 -07:00
oobabooga | 219f0a7731 | Fix exllamav3_hf models failing to unload (closes #7031) | 2025-05-30 12:05:49 -07:00
oobabooga | 298d4719c6 | Multiple small style improvements | 2025-05-30 11:32:24 -07:00
oobabooga | 7c29879e79 | Fix 'Start reply with' (closes #7033) | 2025-05-30 11:17:47 -07:00
oobabooga | acbcc12e7b | Clean up | 2025-05-29 14:11:21 -07:00
oobabooga | dce02732a4 | Fix timestamp issues when editing/swiping messages | 2025-05-29 14:08:48 -07:00
oobabooga | f59998d268 | Don't limit the number of prompt characters printed with --verbose | 2025-05-29 13:08:48 -07:00
oobabooga | 724147ffab | Better detect when no model is available | 2025-05-29 10:49:29 -07:00
oobabooga | faa5c82c64 | Fix message version count not updating during regeneration streaming | 2025-05-29 09:16:26 -07:00
Underscore | 63234b9b6f | UI: Fix impersonate (#7025) | 2025-05-29 08:22:03 -03:00
oobabooga | 75d6cfd14d | Download fetched web search results in parallel | 2025-05-28 20:36:24 -07:00
oobabooga | 7080a02252 | Reduce the timeout for downloading web pages | 2025-05-28 18:15:21 -07:00
oobabooga | 3eb0b77427 | Improve the web search query generation | 2025-05-28 18:14:51 -07:00
oobabooga | 27641ac182 | UI: Make message editing work the same for user and assistant messages | 2025-05-28 17:23:46 -07:00
oobabooga | 6c3590ba9a | Make web search attachments clickable | 2025-05-28 05:28:15 -07:00
oobabooga | 077bbc6b10 | Add web search support (#7023) | 2025-05-28 04:27:28 -03:00
oobabooga | 1b0e2d8750 | UI: Add a token counter to the chat tab (counts input + history) | 2025-05-27 22:36:24 -07:00
oobabooga | f6ca0ee072 | Fix regenerate sometimes not creating a new message version | 2025-05-27 21:20:51 -07:00
Underscore | 5028480eba | UI: Add footer buttons for editing messages (#7019) | 2025-05-28 00:55:27 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Underscore | 355b5f6c8b | UI: Add message version navigation (#6947) | 2025-05-27 22:54:18 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Underscore | 8531100109 | Fix textbox text usage in methods (#7009) | 2025-05-26 22:40:09 -03:00
oobabooga | bae1aa34aa | Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) | 2025-05-25 17:19:26 -07:00
oobabooga | 8620d6ffe7 | Make it possible to upload multiple text files/PDFs at once | 2025-05-20 21:34:07 -07:00
oobabooga | cc8a4fdcb1 | Minor improvement to the attachments prompt format | 2025-05-20 21:31:18 -07:00
oobabooga | 409a48d6bd | Add attachments support (text files, PDF documents) (#7005) | 2025-05-21 00:36:20 -03:00
oobabooga | 5d00574a56 | Minor UI fixes | 2025-05-20 16:20:49 -07:00
oobabooga | 616ea6966d | Store previous reply versions on regenerate (#7004) | 2025-05-20 12:51:28 -03:00
Daniel Dengler | c25a381540 | Add a "Branch here" footer button to chat messages (#6967) | 2025-05-20 11:07:40 -03:00
oobabooga | 8e10f9894a | Add a metadata field to the chat history & add date/time to chat messages (#7003) | 2025-05-20 10:48:46 -03:00
oobabooga | 9ec46b8c44 | Remove the HQQ loader (HQQ models can be loaded through Transformers) | 2025-05-19 09:23:24 -07:00
oobabooga | 126b3a768f | Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now) | 2025-05-18 12:38:36 -07:00
    This reverts commit 8137eb8ef4.
oobabooga | 2faaf18f1f | Add back the "Common values" to the ctx-size slider | 2025-05-18 09:06:20 -07:00
oobabooga | f1ec6c8662 | Minor label changes | 2025-05-18 09:04:51 -07:00
oobabooga | 61276f6a37 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-05-17 07:22:51 -07:00
oobabooga | 4800d1d522 | More robust VRAM calculation | 2025-05-17 07:20:38 -07:00
mamei16 | 052c82b664 | Fix KeyError: 'gpu_layers' when loading existing model settings (#6991) | 2025-05-17 11:19:13 -03:00
oobabooga | 0f77ff9670 | UI: Use total VRAM (not free) for layers calculation when a model is loaded | 2025-05-16 19:19:22 -07:00
oobabooga | c0e295dd1d | Remove the 'None' option from the model menu | 2025-05-16 17:53:20 -07:00
oobabooga | e3bba510d4 | UI: Only add a blank space to streaming messages in instruct mode | 2025-05-16 17:49:17 -07:00
oobabooga | 71fa046c17 | Minor changes after 1c549d176b | 2025-05-16 17:38:08 -07:00
oobabooga | d99fb0a22a | Add backward compatibility with saved n_gpu_layers values | 2025-05-16 17:29:18 -07:00
oobabooga | 1c549d176b | Fix GPU layers slider: honor saved settings and show true maximum | 2025-05-16 17:26:13 -07:00
oobabooga | e4d3f4449d | API: Fix a regression | 2025-05-16 13:02:27 -07:00
oobabooga | adb975a380 | Prevent fractional gpu-layers in the UI | 2025-05-16 12:52:43 -07:00
oobabooga | fc483650b5 | Set the maximum gpu_layers value automatically when the model is loaded with --model | 2025-05-16 11:58:17 -07:00
oobabooga | 38c50087fe | Prevent a crash on systems without an NVIDIA GPU | 2025-05-16 11:55:30 -07:00
oobabooga | 253e85a519 | Only compute VRAM/GPU layers for llama.cpp models | 2025-05-16 10:02:30 -07:00
oobabooga | 9ec9b1bf83 | Auto-adjust GPU layers after model unload to utilize freed VRAM | 2025-05-16 09:56:23 -07:00
oobabooga | ee7b3028ac | Always cache GGUF metadata calls | 2025-05-16 09:12:36 -07:00
oobabooga | 4925c307cf | Auto-adjust GPU layers on context size and cache type changes + many fixes | 2025-05-16 09:07:38 -07:00
oobabooga | 93e1850a2c | Only show the VRAM info for llama.cpp | 2025-05-15 21:42:15 -07:00
oobabooga | cbf4daf1c8 | Hide the LoRA menu in portable mode | 2025-05-15 21:21:54 -07:00
oobabooga | fd61297933 | Lint | 2025-05-15 21:19:19 -07:00
oobabooga | 5534d01da0 | Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) | 2025-05-16 00:07:37 -03:00
oobabooga | c4a715fd1e | UI: Move the LoRA menu under "Other options" | 2025-05-13 20:14:09 -07:00
oobabooga | 035cd3e2a9 | UI: Hide the extension install menu in portable builds | 2025-05-13 20:09:22 -07:00
oobabooga | 2826c60044 | Use logger for "Output generated in ..." messages | 2025-05-13 14:45:46 -07:00
oobabooga | 3fa1a899ae | UI: Fix gpu-layers being ignored (closes #6973) | 2025-05-13 12:07:59 -07:00
oobabooga | 62c774bf24 | Revert "New attempt" | 2025-05-13 06:42:25 -07:00
    This reverts commit e7ac06c169.
oobabooga | e7ac06c169 | New attempt | 2025-05-10 19:20:04 -07:00
oobabooga | 47d4758509 | Fix #6970 | 2025-05-10 17:46:00 -07:00
oobabooga | 4920981b14 | UI: Remove the typing cursor | 2025-05-09 20:35:38 -07:00
oobabooga | 8984e95c67 | UI: More friendly message when no model is loaded | 2025-05-09 07:21:05 -07:00
oobabooga | 512bc2d0e0 | UI: Update some labels | 2025-05-08 23:43:55 -07:00
oobabooga | f8ef6e09af | UI: Make ctx-size a slider | 2025-05-08 18:19:04 -07:00
oobabooga | 9ea2a69210 | llama.cpp: Add --no-webui to the llama-server command | 2025-05-08 10:41:25 -07:00
oobabooga | 1c7209a725 | Save the chat history periodically during streaming | 2025-05-08 09:46:43 -07:00
Jonas | fa960496d5 | Tools support for the OpenAI-compatible API (#6827) | 2025-05-08 12:30:27 -03:00
oobabooga | a2ab42d390 | UI: Remove the exllamav2 info message | 2025-05-08 08:00:38 -07:00
oobabooga | 348d4860c2 | UI: Create a "Main options" section in the Model tab | 2025-05-08 07:58:59 -07:00
oobabooga | d2bae7694c | UI: Change the ctx-size description | 2025-05-08 07:26:23 -07:00
oobabooga | b28fa86db6 | Default --gpu-layers to 256 | 2025-05-06 17:51:55 -07:00
Downtown-Case | 5ef564a22e | Fix model config loading in shared.py for Python 3.13 (#6961) | 2025-05-06 17:03:33 -03:00
oobabooga | c4f36db0d8 | llama.cpp: remove tfs (it doesn't get used) | 2025-05-06 08:41:13 -07:00
oobabooga | 05115e42ee | Set top_n_sigma before temperature by default | 2025-05-06 08:27:21 -07:00
oobabooga | 1927afe894 | Fix top_n_sigma not showing for llama.cpp | 2025-05-06 08:18:49 -07:00
oobabooga | d1c0154d66 | llama.cpp: Add top_n_sigma, fix typical_p in sampler priority | 2025-05-06 06:38:39 -07:00
mamei16 | 8137eb8ef4 | Dynamic Chat Message UI Update Speed (#6952) | 2025-05-05 18:05:23 -03:00
oobabooga | 475e012ee8 | UI: Improve the light theme colors | 2025-05-05 06:16:29 -07:00
oobabooga | b817bb33fd | Minor fix after df7bb0db1f | 2025-05-05 05:00:20 -07:00
oobabooga | f3da45f65d | ExLlamaV3_HF: Change max_chunk_size to 256 | 2025-05-04 20:37:15 -07:00
oobabooga | df7bb0db1f | Rename --n-gpu-layers to --gpu-layers | 2025-05-04 20:03:55 -07:00
oobabooga | d0211afb3c | Save the chat history right after sending a message | 2025-05-04 18:52:01 -07:00
oobabooga | 690d693913 | UI: Add padding to show only the last message/reply after sending a message | 2025-05-04 18:13:29 -07:00
    To avoid scrolling
oobabooga | 7853fb1c8d | Optimize the Chat tab (#6948) | 2025-05-04 18:58:37 -03:00
oobabooga | b7a5c7db8d | llama.cpp: Handle short arguments in --extra-flags | 2025-05-04 07:14:42 -07:00
oobabooga | 4c2e3b168b | llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) | 2025-05-03 06:51:20 -07:00
oobabooga | ea60f14674 | UI: Show the list of files if the user tries to download a GGUF repository | 2025-05-03 06:06:50 -07:00
oobabooga | b71ef50e9d | UI: Add a min-height to prevent constant scrolling during chat streaming | 2025-05-02 23:45:58 -07:00
oobabooga | d08acb4af9 | UI: Rename enable_thinking -> Enable thinking | 2025-05-02 20:50:52 -07:00
oobabooga | 4cea720da8 | UI: Remove the "Autoload the model" feature | 2025-05-02 16:38:28 -07:00
oobabooga | 905afced1c | Add a --portable flag to hide things in portable mode | 2025-05-02 16:34:29 -07:00
oobabooga | 3f26b0408b | Fix after 9e3867dc83 | 2025-05-02 16:17:22 -07:00
oobabooga | 9e3867dc83 | llama.cpp: Fix manual random seeds | 2025-05-02 09:36:15 -07:00
oobabooga | b950a0c6db | Lint | 2025-04-30 20:02:10 -07:00
oobabooga | 307d13b540 | UI: Minor label change | 2025-04-30 18:58:14 -07:00
oobabooga | 55283bb8f1 | Fix CFG with ExLlamaV2_HF (closes #6937) | 2025-04-30 18:43:45 -07:00
oobabooga | a6c3ec2299 | llama.cpp: Explicitly send cache_prompt = True | 2025-04-30 15:24:07 -07:00
oobabooga | 195a45c6e1 | UI: Make thinking blocks closed by default | 2025-04-30 15:12:46 -07:00
oobabooga | cd5c32dc19 | UI: Fix max_updates_second not working | 2025-04-30 14:54:05 -07:00
oobabooga | b46ca01340 | UI: Set max_updates_second to 12 by default | 2025-04-30 14:53:15 -07:00
    When the tokens/second are at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck.
oobabooga | 771d3d8ed6 | Fix getting the llama.cpp logprobs for Qwen3-30B-A3B | 2025-04-30 06:48:32 -07:00
oobabooga | 1dd4aedbe1 | Fix the streaming_llm UI checkbox not being interactive | 2025-04-29 05:28:46 -07:00
oobabooga | d10bded7f8 | UI: Add an enable_thinking option to enable/disable Qwen3 thinking | 2025-04-28 22:37:01 -07:00
oobabooga | 1ee0acc852 | llama.cpp: Make --verbose print the llama-server command | 2025-04-28 15:56:25 -07:00
oobabooga | 15a29e99f8 | Lint | 2025-04-27 21:41:34 -07:00
oobabooga | be13f5199b | UI: Add an info message about how to use Speculative Decoding | 2025-04-27 21:40:38 -07:00
oobabooga | c6c2855c80 | llama.cpp: Remove the timeout while loading models (closes #6907) | 2025-04-27 21:22:21 -07:00
oobabooga | ee0592473c | Fix ExLlamaV3_HF leaking memory (attempt) | 2025-04-27 21:04:02 -07:00
oobabooga | 70952553c7 | Lint | 2025-04-26 19:29:08 -07:00
oobabooga | 7b80acd524 | Fix parsing --extra-flags | 2025-04-26 18:40:03 -07:00
oobabooga | 943451284f | Fix the Notebook tab not loading its default prompt | 2025-04-26 18:25:06 -07:00
oobabooga | 511eb6aa94 | Fix saving settings to settings.yaml | 2025-04-26 18:20:00 -07:00
oobabooga | 8b83e6f843 | Prevent Gradio from saying 'Thank you for being a Gradio user!' | 2025-04-26 18:14:57 -07:00
oobabooga | 4a32e1f80c | UI: show draft_max for ExLlamaV2 | 2025-04-26 18:01:44 -07:00
oobabooga | 0fe3b033d0 | Fix parsing of --n_ctx and --max_seq_len (2nd attempt) | 2025-04-26 17:52:21 -07:00
oobabooga | c4afc0421d | Fix parsing of --n_ctx and --max_seq_len | 2025-04-26 17:43:53 -07:00
oobabooga | 234aba1c50 | llama.cpp: Simplify the prompt processing progress indicator | 2025-04-26 17:33:47 -07:00
    The progress bar was unreliable
oobabooga | 4ff91b6588 | Better default settings for Speculative Decoding | 2025-04-26 17:24:40 -07:00
oobabooga | bc55feaf3e | Improve host header validation in local mode | 2025-04-26 15:42:17 -07:00
oobabooga | 3a207e7a57 | Improve the --help formatting a bit | 2025-04-26 07:31:04 -07:00
oobabooga | 6acb0e1bee | Change a UI description | 2025-04-26 05:13:08 -07:00
oobabooga | cbd4d967cc | Update a --help message | 2025-04-26 05:09:52 -07:00
oobabooga | 763a7011c0 | Remove an ancient/obsolete migration check | 2025-04-26 04:59:05 -07:00
oobabooga | d9de14d1f7 | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00
oobabooga | d4017fbb6d | ExLlamaV3: Add kv cache quantization (#6903) | 2025-04-25 21:32:00 -03:00
oobabooga | d4b1e31c49 | Use --ctx-size to specify the context size for all loaders | 2025-04-25 16:59:03 -07:00
    Old flags are still recognized as alternatives.
oobabooga | faababc4ea | llama.cpp: Add a prompt processing progress bar | 2025-04-25 16:42:30 -07:00
oobabooga | 877cf44c08 | llama.cpp: Add StreamingLLM (--streaming-llm) | 2025-04-25 16:21:41 -07:00
oobabooga | d35818f4e1 | UI: Add a collapsible thinking block to messages with <think> steps (#6902) | 2025-04-25 18:02:02 -03:00
oobabooga | 98f4c694b9 | llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server | 2025-04-25 07:32:51 -07:00
oobabooga | 5861013e68 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-24 20:36:20 -07:00
oobabooga | a90df27ff5 | UI: Add a greeting when the chat history is empty | 2025-04-24 20:33:40 -07:00
oobabooga | ae1fe87365 | ExLlamaV2: Add speculative decoding (#6899) | 2025-04-25 00:11:04 -03:00
Matthew Jenkins | 8f2493cc60 | Prevent llamacpp defaults from locking up consumer hardware (#6870) | 2025-04-24 23:38:57 -03:00
oobabooga | 93fd4ad25d | llama.cpp: Document the --device-draft syntax | 2025-04-24 09:20:11 -07:00
oobabooga | f1b64df8dd | EXL2: add another torch.cuda.synchronize() call to prevent errors | 2025-04-24 09:03:49 -07:00
oobabooga | c71a2af5ab | Handle CMD_FLAGS.txt in the main code (closes #6896) | 2025-04-24 08:21:06 -07:00
oobabooga | bfbde73409 | Make 'instruct' the default chat mode | 2025-04-24 07:08:49 -07:00
oobabooga | e99c20bcb0 | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00
oobabooga | 9424ba17c8 | UI: show only part 00001 of multipart GGUF models in the model menu | 2025-04-22 19:56:42 -07:00
oobabooga | 25cf3600aa | Lint | 2025-04-22 08:04:02 -07:00
oobabooga | 39cbb5fee0 | Lint | 2025-04-22 08:03:25 -07:00
oobabooga | 008c6dd682 | Lint | 2025-04-22 08:02:37 -07:00
oobabooga | 78aeabca89 | Fix the transformers loader | 2025-04-21 18:33:14 -07:00
oobabooga | 8320190184 | Fix the exllamav2_HF and exllamav3_HF loaders | 2025-04-21 18:32:23 -07:00
oobabooga | 15989c2ed8 | Make llama.cpp the default loader | 2025-04-21 16:36:35 -07:00
oobabooga | 86c3ed3218 | Small change to the unload_model() function | 2025-04-20 20:00:56 -07:00
oobabooga | fe8e80e04a | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-20 19:09:27 -07:00
oobabooga | ff1c00bdd9 | llama.cpp: set the random seed manually | 2025-04-20 19:08:44 -07:00
Matthew Jenkins | d3e7c655e5 | Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) | 2025-04-20 23:06:24 -03:00
oobabooga | e243424ba1 | Fix an import | 2025-04-20 17:51:28 -07:00
oobabooga | 8cfd7f976b | Revert "Remove the old --model-menu flag" | 2025-04-20 13:35:42 -07:00
    This reverts commit 109de34e3b.
oobabooga | b3bf7a885d | Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 | 2025-04-20 11:32:48 -07:00
oobabooga | ae02ffc605 | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00
oobabooga | 6ba0164c70 | Lint | 2025-04-19 17:45:21 -07:00
oobabooga | 5ab069786b | llama.cpp: add back the two encode calls (they are harmless now) | 2025-04-19 17:38:36 -07:00
oobabooga | b9da5c7e3a | Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows | 2025-04-19 17:36:04 -07:00
oobabooga | 9c9df2063f | llama.cpp: fix unicode decoding (closes #6856) | 2025-04-19 16:38:15 -07:00
oobabooga | ba976d1390 | llama.cpp: avoid two 'encode' calls | 2025-04-19 16:35:01 -07:00
oobabooga | ed42154c78 | Revert "llama.cpp: close the connection immediately on 'Stop'" | 2025-04-19 05:32:36 -07:00
    This reverts commit 5fdebc554b.
oobabooga | 5fdebc554b | llama.cpp: close the connection immediately on 'Stop' | 2025-04-19 04:59:24 -07:00
oobabooga | 6589ebeca8 | Revert "llama.cpp: new optimization attempt" | 2025-04-18 21:16:21 -07:00
    This reverts commit e2e73ed22f.
oobabooga | e2e73ed22f | llama.cpp: new optimization attempt | 2025-04-18 21:05:08 -07:00
oobabooga | e2e90af6cd | llama.cpp: don't include --rope-freq-base in the launch command if null | 2025-04-18 20:51:18 -07:00
oobabooga | 9f07a1f5d7 | llama.cpp: new attempt at optimizing the llama-server connection | 2025-04-18 19:30:53 -07:00
oobabooga | f727b4a2cc | llama.cpp: close the connection properly when generation is cancelled | 2025-04-18 19:01:39 -07:00
oobabooga | b3342b8dd8 | llama.cpp: optimize the llama-server connection | 2025-04-18 18:46:36 -07:00
oobabooga | 2002590536 | Revert "Attempt at making the llama-server streaming more efficient." | 2025-04-18 18:13:54 -07:00
    This reverts commit 5ad080ff25.
oobabooga | 71ae05e0a4 | llama.cpp: Fix the sampler priority handling | 2025-04-18 18:06:36 -07:00
oobabooga | 5ad080ff25 | Attempt at making the llama-server streaming more efficient. | 2025-04-18 18:04:49 -07:00
oobabooga | 4fabd729c9 | Fix the API without streaming or without 'sampler_priority' (closes #6851) | 2025-04-18 17:25:22 -07:00
oobabooga | 5135523429 | Fix the new llama.cpp loader failing to unload models | 2025-04-18 17:10:26 -07:00
oobabooga | caa6afc88b | Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True | 2025-04-18 09:57:57 -07:00
oobabooga | d00d713ace | Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader | 2025-04-18 08:14:15 -07:00
oobabooga | c1cc65e82e | Lint | 2025-04-18 08:06:51 -07:00
oobabooga | d68f0fbdf7 | Remove obsolete references to llamacpp_HF | 2025-04-18 07:46:04 -07:00
oobabooga | a0abf93425 | Connect --rope-freq-base to the new llama.cpp loader | 2025-04-18 06:53:51 -07:00
oobabooga | ef9910c767 | Fix a bug after c6901aba9f | 2025-04-18 06:51:28 -07:00
oobabooga | 1c4a2c9a71 | Make exllamav3 safer as well | 2025-04-18 06:17:58 -07:00
oobabooga | c6901aba9f | Remove deprecation warning code | 2025-04-18 06:05:47 -07:00
oobabooga | 8144e1031e | Remove deprecated command-line flags | 2025-04-18 06:02:28 -07:00
oobabooga | ae54d8faaa | New llama.cpp loader (#6846) | 2025-04-18 09:59:37 -03:00
oobabooga | 5c2f8d828e | Fix exllamav2 generating eos randomly after previous fix | 2025-04-18 05:42:38 -07:00
oobabooga | 2fc58ad935 | Consider files with .pt extension in the new model menu function | 2025-04-17 23:10:43 -07:00
Googolplexed | d78abe480b | Allow for model subfolder organization for GGUF files (#6686) | 2025-04-18 02:53:59 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | ce9e2d94b1 | Revert "Attempt at solving the ExLlamaV2 issue" | 2025-04-17 22:03:21 -07:00
    This reverts commit c9b3c9dfbf.
oobabooga | 5dfab7d363 | New attempt at solving the exl2 issue | 2025-04-17 22:03:11 -07:00
oobabooga | c9b3c9dfbf | Attempt at solving the ExLlamaV2 issue | 2025-04-17 21:45:15 -07:00
oobabooga | 2c2d453c8c | Revert "Use ExLlamaV2 (instead of the HF one) for EXL2 models for now" | 2025-04-17 21:31:32 -07:00
    This reverts commit 0ef1b8f8b4.
oobabooga | 0ef1b8f8b4 | Use ExLlamaV2 (instead of the HF one) for EXL2 models for now | 2025-04-17 05:47:40 -07:00
    It doesn't seem to have the "OverflowError" bug
oobabooga | 682c78ea42 | Add back detection of GPTQ models (closes #6841) | 2025-04-11 21:00:42 -07:00
oobabooga | 4ed0da74a8 | Remove the obsolete 'multimodal' extension | 2025-04-09 20:09:48 -07:00
oobabooga | 598568b1ed | Revert "UI: remove the streaming cursor" | 2025-04-09 16:03:14 -07:00
    This reverts commit 6ea0206207.
oobabooga | 297a406e05 | UI: smoother chat streaming | 2025-04-09 16:02:37 -07:00
    This removes the throttling associated with gr.Textbox that made words appear in chunks rather than one at a time
oobabooga | 6ea0206207 | UI: remove the streaming cursor | 2025-04-09 14:59:34 -07:00
oobabooga | 8b8d39ec4e | Add ExLlamaV3 support (#6832) | 2025-04-09 00:07:08 -03:00
oobabooga | bf48ec8c44 | Remove an unnecessary UI message | 2025-04-07 17:43:41 -07:00
oobabooga | a5855c345c | Set context lengths to at most 8192 by default (to prevent out-of-memory errors) (#6835) | 2025-04-07 21:42:33 -03:00
oobabooga | 109de34e3b | Remove the old --model-menu flag | 2025-03-31 09:24:03 -07:00
oobabooga | 758c3f15a5 | Lint | 2025-03-14 20:04:43 -07:00
oobabooga | 5bcd2d7ad0 | Add the top N-sigma sampler (#6796) | 2025-03-14 16:45:11 -03:00
oobabooga | 26317a4c7e | Fix jinja2 error while loading c4ai-command-a-03-2025 | 2025-03-14 10:59:05 -07:00
|
Kelvie Wong
|
16fa9215c4
|
Fix OpenAI API with new param (show_after), closes #6747 (#6749)
---------
Co-authored-by: oobabooga <oobabooga4@gmail.com>
|
2025-02-18 12:01:30 -03:00 |
|
oobabooga
|
dba17c40fc
|
Make transformers 4.49 functional
|
2025-02-17 17:31:11 -08:00 |
|
SamAcctX
|
f28f39792d
|
update deprecated deepspeed import for transformers 4.46+ (#6725)
|
2025-02-02 20:41:36 -03:00 |
|
oobabooga
|
c6f2c2fd7e
|
UI: style improvements
|
2025-02-02 15:34:03 -08:00 |
|
oobabooga
|
0360f54ae8
|
UI: add a "Show after" parameter (to use with DeepSeek </think>)
|
2025-02-02 15:30:09 -08:00 |
|
oobabooga
|
f01cc079b9
|
Lint
|
2025-01-29 14:00:59 -08:00 |
|
oobabooga
|
75ff3f3815
|
UI: Mention common context length values
|
2025-01-25 08:22:23 -08:00 |
|
FP HAM
|
71a551a622
|
Add strftime_now to JINJA to sattisfy LLAMA 3.1 and 3.2 (and granite) (#6692)
|
2025-01-24 11:37:20 -03:00 |
|
oobabooga
|
0485ff20e8
|
Workaround for convert_to_markdown bug
|
2025-01-23 06:21:40 -08:00 |
|
oobabooga
|
39799adc47
|
Add a helpful error message when llama.cpp fails to load the model
|
2025-01-21 12:49:12 -08:00 |
|
oobabooga
|
5e99dded4e
|
UI: add "Continue" and "Remove" buttons below the last chat message
|
2025-01-21 09:05:44 -08:00 |
|
oobabooga
|
0258a6f877
|
Fix the Google Colab notebook
|
2025-01-16 05:21:18 -08:00 |
|
oobabooga
|
1ef748fb20
|
Lint
|
2025-01-14 16:44:15 -08:00 |
|
oobabooga
|
f843cb475b
|
UI: update a help message
|
2025-01-14 08:12:51 -08:00 |
|
oobabooga
|
c832953ff7
|
UI: Activate auto_max_new_tokens by default
|
2025-01-14 05:59:55 -08:00 |
|
Underscore
|
53b838d6c5
|
HTML: Fix quote pair RegEx matching for all quote types (#6661)
|
2025-01-13 18:01:50 -03:00 |
|
oobabooga
|
c85e5e58d0
|
UI: move the new morphdom code to a .js file
|
2025-01-13 06:20:42 -08:00 |
|
oobabooga
|
facb4155d4
|
Fix morphdom leaving ghost elements behind
|
2025-01-11 20:57:28 -08:00 |
|
oobabooga
|
a0492ce325
|
Optimize syntax highlighting during chat streaming (#6655)
|
2025-01-11 21:14:10 -03:00 |
|
mamei16
|
f1797f4323
|
Unescape backslashes in html_output (#6648)
|
2025-01-11 18:39:44 -03:00 |
|
oobabooga
|
1b9121e5b8
|
Add a "refresh" button below the last message, add a missing file
|
2025-01-11 12:42:25 -08:00 |
|
oobabooga
|
a5d64b586d
|
Add a "copy" button below each message (#6654)
|
2025-01-11 16:59:21 -03:00 |
|
oobabooga
|
3a722a36c8
|
Use morphdom to make chat streaming 1902381098231% faster (#6653)
|
2025-01-11 12:55:19 -03:00 |
|
oobabooga
|
d2f6c0f65f
|
Update README
|
2025-01-10 13:25:40 -08:00 |
|
oobabooga
|
c393f7650d
|
Update settings-template.yaml, organize modules/shared.py
|
2025-01-10 13:22:18 -08:00 |
|
oobabooga
|
83c426e96b
|
Organize internals (#6646)
|
2025-01-10 18:04:32 -03:00 |
|
oobabooga
|
7fe46764fb
|
Improve the --help message about --tensorcores as well
|
2025-01-10 07:07:41 -08:00 |
|
oobabooga
|
da6d868f58
|
Remove old deprecated flags (~6 months or more)
|
2025-01-09 16:11:46 -08:00 |
|
oobabooga
|
f3c0f964a2
|
Lint
|
2025-01-09 13:18:23 -08:00 |
|
oobabooga
|
3020f2e5ec
|
UI: improve the info message about --tensorcores
|
2025-01-09 12:44:03 -08:00 |
|
oobabooga
|
c08d87b78d
|
Make the huggingface loader more readable
|
2025-01-09 12:23:38 -08:00 |
|
BPplays
|
619265b32c
|
add ipv6 support to the API (#6559)
|
2025-01-09 10:23:44 -03:00 |
|
oobabooga
|
5c89068168
|
UI: add an info message for the new Static KV cache option
|
2025-01-08 17:36:30 -08:00 |
|
nclok1405
|
b9e2ded6d4
|
Added UnicodeDecodeError workaround for modules/llamacpp_model.py (#6040)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
|
2025-01-08 21:17:31 -03:00 |
|
oobabooga
|
91a8a87887
|
Remove obsolete code
|
2025-01-08 15:07:21 -08:00 |
|
oobabooga
|
7157257c3f
|
Remove the AutoGPTQ loader (#6641)
|
2025-01-08 19:28:56 -03:00 |
|
oobabooga
|
c0f600c887
|
Add a --torch-compile flag for transformers
|
2025-01-05 05:47:00 -08:00 |
|
oobabooga
|
11af199aff
|
Add a "Static KV cache" option for transformers
|
2025-01-04 17:52:57 -08:00 |
|
oobabooga
|
3967520e71
|
Connect XTC, DRY, smoothing_factor, and dynatemp to ExLlamaV2 loader (non-HF)
|
2025-01-04 16:25:06 -08:00 |
|
oobabooga
|
049297fa66
|
UI: reduce the size of CSS sent to the UI during streaming
|
2025-01-04 14:09:36 -08:00 |
|
oobabooga
|
0e673a7a42
|
UI: reduce the size of HTML sent to the UI during streaming
|
2025-01-04 11:40:24 -08:00 |
|
mamei16
|
9f24885bd2
|
Sane handling of markdown lists (#6626)
|
2025-01-04 15:41:31 -03:00 |
|
oobabooga
|
4b3e1b3757
|
UI: add a "Search chats" input field
|
2025-01-02 18:46:40 -08:00 |
|
oobabooga
|
b8fc9010fa
|
UI: fix orjson.JSONDecodeError error on page reload
|
2025-01-02 16:57:04 -08:00 |
|
oobabooga
|
75f1b5ccde
|
UI: add a "Branch chat" button
|
2025-01-02 16:24:18 -08:00 |
|
Petr Korolev
|
13c033c745
|
Fix CUDA error on MPS backend during API request (#6572)
---------
Co-authored-by: oobabooga <oobabooga4@gmail.com>
|
2025-01-02 00:06:11 -03:00 |
|
oobabooga
|
725639118a
|
UI: Use a tab length of 2 for lists (rather than 4)
|
2025-01-01 13:53:50 -08:00 |
|
oobabooga
|
7b88724711
|
Make responses start faster by removing unnecessary cleanup calls (#6625)
|
2025-01-01 18:33:38 -03:00 |
|
oobabooga
|
64853f8509
|
Reapply a necessary change that I removed from #6599 (thanks @mamei16!)
|
2024-12-31 14:43:22 -08:00 |
|
mamei16
|
e953af85cd
|
Fix newlines in the markdown renderer (#6599)
---------
Co-authored-by: oobabooga <oobabooga4@gmail.com>
|
2024-12-31 01:04:02 -03:00 |
|
oobabooga
|
39a5c9a49c
|
UI organization (#6618)
|
2024-12-29 11:16:17 -03:00 |
|
oobabooga
|
0490ee620a
|
UI: increase the threshold for a <li> to be considered long (some more)
|
2024-12-19 16:51:34 -08:00 |
|
oobabooga
|
89888bef56
|
UI: increase the threshold for a <li> to be considered long
|
2024-12-19 14:38:36 -08:00 |
|
oobabooga
|
2acec386fc
|
UI: improve the streaming cursor
|
2024-12-19 14:08:56 -08:00 |
|
oobabooga
|
e2fb86e5df
|
UI: further improve the style of lists and headings
|
2024-12-19 13:59:24 -08:00 |
|
oobabooga
|
c48e4622e8
|
UI: update a link
|
2024-12-18 06:28:14 -08:00 |
|
oobabooga
|
b27f6f8915
|
Lint
|
2024-12-17 20:13:32 -08:00 |
|
oobabooga
|
b051e2c161
|
UI: improve a margin for readability
|
2024-12-17 19:58:21 -08:00 |
|
oobabooga
|
60c93e0c66
|
UI: Set cache_type to fp16 by default
|
2024-12-17 19:44:20 -08:00 |
|
oobabooga
|
ddccc0d657
|
UI: minor change to log messages
|
2024-12-17 19:39:00 -08:00 |
|
oobabooga
|
3030c79e8c
|
UI: show progress while loading a model
|
2024-12-17 19:37:43 -08:00 |
|
Diner Burger
|
addad3c63e
|
Allow more granular KV cache settings (#6561)
|
2024-12-17 17:43:48 -03:00 |
|
oobabooga
|
c43ee5db11
|
UI: very minor color change
|
2024-12-17 07:59:55 -08:00 |
|
oobabooga
|
d769618591
|
Improved UI (#6575)
|
2024-12-17 00:47:41 -03:00 |
|
oobabooga
|
350758f81c
|
UI: Fix the history upload event
|
2024-11-19 20:34:53 -08:00 |
|
oobabooga
|
d01293861b
|
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
|
2024-11-18 10:15:36 -08:00 |
|
oobabooga
|
3d19746a5d
|
UI: improve HTML rendering for lists with sub-lists
|
2024-11-18 10:14:09 -08:00 |
|
mefich
|
1c937dad72
|
Filter whitespaces in downloader fields in model tab (#6518)
|
2024-11-18 12:01:40 -03:00 |
|
PIRI
|
e1061ba7e3
|
Make token bans work again on HF loaders (#6488)
|
2024-10-24 15:24:02 -03:00 |
|
oobabooga
|
2468cfd8bb
|
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
|
2024-10-14 13:25:27 -07:00 |
|
oobabooga
|
bb62e796eb
|
Fix locally compiled llama-cpp-python failing to import
|
2024-10-14 13:24:13 -07:00 |
|
oobabooga
|
c9a9f63d1b
|
Fix llama.cpp loader not being random (thanks @reydeljuego12345)
|
2024-10-14 13:07:07 -07:00 |
|
PIRI
|
03a2e70054
|
Fix temperature_last when temperature not in sampler priority (#6439)
|
2024-10-09 11:25:14 -03:00 |
|
oobabooga
|
49dfa0adaf
|
Fix the "save preset" event
|
2024-10-01 11:20:48 -07:00 |
|