Author | Commit | Message | Date
oobabooga | e7ac06c169 | New attempt | 2025-05-10 19:20:04 -07:00
oobabooga | 47d4758509 | Fix #6970 | 2025-05-10 17:46:00 -07:00
oobabooga | 4920981b14 | UI: Remove the typing cursor | 2025-05-09 20:35:38 -07:00
oobabooga | 8984e95c67 | UI: More friendly message when no model is loaded | 2025-05-09 07:21:05 -07:00
oobabooga | 512bc2d0e0 | UI: Update some labels | 2025-05-08 23:43:55 -07:00
oobabooga | f8ef6e09af | UI: Make ctx-size a slider | 2025-05-08 18:19:04 -07:00
oobabooga | 9ea2a69210 | llama.cpp: Add --no-webui to the llama-server command | 2025-05-08 10:41:25 -07:00
oobabooga | 1c7209a725 | Save the chat history periodically during streaming | 2025-05-08 09:46:43 -07:00
Jonas | fa960496d5 | Tools support for OpenAI compatible API (#6827) | 2025-05-08 12:30:27 -03:00
oobabooga | a2ab42d390 | UI: Remove the exllamav2 info message | 2025-05-08 08:00:38 -07:00
oobabooga | 348d4860c2 | UI: Create a "Main options" section in the Model tab | 2025-05-08 07:58:59 -07:00
oobabooga | d2bae7694c | UI: Change the ctx-size description | 2025-05-08 07:26:23 -07:00
oobabooga | b28fa86db6 | Default --gpu-layers to 256 | 2025-05-06 17:51:55 -07:00
Downtown-Case | 5ef564a22e | Fix model config loading in shared.py for Python 3.13 (#6961) | 2025-05-06 17:03:33 -03:00
oobabooga | c4f36db0d8 | llama.cpp: remove tfs (it doesn't get used) | 2025-05-06 08:41:13 -07:00
oobabooga | 05115e42ee | Set top_n_sigma before temperature by default | 2025-05-06 08:27:21 -07:00
oobabooga | 1927afe894 | Fix top_n_sigma not showing for llama.cpp | 2025-05-06 08:18:49 -07:00
oobabooga | d1c0154d66 | llama.cpp: Add top_n_sigma, fix typical_p in sampler priority | 2025-05-06 06:38:39 -07:00
mamei16 | 8137eb8ef4 | Dynamic Chat Message UI Update Speed (#6952) | 2025-05-05 18:05:23 -03:00
oobabooga | 475e012ee8 | UI: Improve the light theme colors | 2025-05-05 06:16:29 -07:00
oobabooga | b817bb33fd | Minor fix after df7bb0db1f | 2025-05-05 05:00:20 -07:00
oobabooga | f3da45f65d | ExLlamaV3_HF: Change max_chunk_size to 256 | 2025-05-04 20:37:15 -07:00
oobabooga | df7bb0db1f | Rename --n-gpu-layers to --gpu-layers | 2025-05-04 20:03:55 -07:00
oobabooga | d0211afb3c | Save the chat history right after sending a message | 2025-05-04 18:52:01 -07:00
oobabooga | 690d693913 | UI: Add padding to only show the last message/reply after sending a message (to avoid scrolling) | 2025-05-04 18:13:29 -07:00
oobabooga | 7853fb1c8d | Optimize the Chat tab (#6948) | 2025-05-04 18:58:37 -03:00
oobabooga | b7a5c7db8d | llama.cpp: Handle short arguments in --extra-flags | 2025-05-04 07:14:42 -07:00
oobabooga | 4c2e3b168b | llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) | 2025-05-03 06:51:20 -07:00
oobabooga | ea60f14674 | UI: Show the list of files if the user tries to download a GGUF repository | 2025-05-03 06:06:50 -07:00
oobabooga | b71ef50e9d | UI: Add a min-height to prevent constant scrolling during chat streaming | 2025-05-02 23:45:58 -07:00
oobabooga | d08acb4af9 | UI: Rename enable_thinking -> Enable thinking | 2025-05-02 20:50:52 -07:00
oobabooga | 4cea720da8 | UI: Remove the "Autoload the model" feature | 2025-05-02 16:38:28 -07:00
oobabooga | 905afced1c | Add a --portable flag to hide things in portable mode | 2025-05-02 16:34:29 -07:00
oobabooga | 3f26b0408b | Fix after 9e3867dc83 | 2025-05-02 16:17:22 -07:00
oobabooga | 9e3867dc83 | llama.cpp: Fix manual random seeds | 2025-05-02 09:36:15 -07:00
oobabooga | b950a0c6db | Lint | 2025-04-30 20:02:10 -07:00
oobabooga | 307d13b540 | UI: Minor label change | 2025-04-30 18:58:14 -07:00
oobabooga | 55283bb8f1 | Fix CFG with ExLlamaV2_HF (closes #6937) | 2025-04-30 18:43:45 -07:00
oobabooga | a6c3ec2299 | llama.cpp: Explicitly send cache_prompt = True | 2025-04-30 15:24:07 -07:00
oobabooga | 195a45c6e1 | UI: Make thinking blocks closed by default | 2025-04-30 15:12:46 -07:00
oobabooga | cd5c32dc19 | UI: Fix max_updates_second not working | 2025-04-30 14:54:05 -07:00
oobabooga | b46ca01340 | UI: Set max_updates_second to 12 by default (when the tokens/second are at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck) | 2025-04-30 14:53:15 -07:00
oobabooga | 771d3d8ed6 | Fix getting the llama.cpp logprobs for Qwen3-30B-A3B | 2025-04-30 06:48:32 -07:00
oobabooga | 1dd4aedbe1 | Fix the streaming_llm UI checkbox not being interactive | 2025-04-29 05:28:46 -07:00
oobabooga | d10bded7f8 | UI: Add an enable_thinking option to enable/disable Qwen3 thinking | 2025-04-28 22:37:01 -07:00
oobabooga | 1ee0acc852 | llama.cpp: Make --verbose print the llama-server command | 2025-04-28 15:56:25 -07:00
oobabooga | 15a29e99f8 | Lint | 2025-04-27 21:41:34 -07:00
oobabooga | be13f5199b | UI: Add an info message about how to use Speculative Decoding | 2025-04-27 21:40:38 -07:00
oobabooga | c6c2855c80 | llama.cpp: Remove the timeout while loading models (closes #6907) | 2025-04-27 21:22:21 -07:00
oobabooga | ee0592473c | Fix ExLlamaV3_HF leaking memory (attempt) | 2025-04-27 21:04:02 -07:00
oobabooga | 70952553c7 | Lint | 2025-04-26 19:29:08 -07:00
oobabooga | 7b80acd524 | Fix parsing --extra-flags | 2025-04-26 18:40:03 -07:00
oobabooga | 943451284f | Fix the Notebook tab not loading its default prompt | 2025-04-26 18:25:06 -07:00
oobabooga | 511eb6aa94 | Fix saving settings to settings.yaml | 2025-04-26 18:20:00 -07:00
oobabooga | 8b83e6f843 | Prevent Gradio from saying 'Thank you for being a Gradio user!' | 2025-04-26 18:14:57 -07:00
oobabooga | 4a32e1f80c | UI: show draft_max for ExLlamaV2 | 2025-04-26 18:01:44 -07:00
oobabooga | 0fe3b033d0 | Fix parsing of --n_ctx and --max_seq_len (2nd attempt) | 2025-04-26 17:52:21 -07:00
oobabooga | c4afc0421d | Fix parsing of --n_ctx and --max_seq_len | 2025-04-26 17:43:53 -07:00
oobabooga | 234aba1c50 | llama.cpp: Simplify the prompt processing progress indicator (the progress bar was unreliable) | 2025-04-26 17:33:47 -07:00
oobabooga | 4ff91b6588 | Better default settings for Speculative Decoding | 2025-04-26 17:24:40 -07:00
oobabooga | bc55feaf3e | Improve host header validation in local mode | 2025-04-26 15:42:17 -07:00
oobabooga | 3a207e7a57 | Improve the --help formatting a bit | 2025-04-26 07:31:04 -07:00
oobabooga | 6acb0e1bee | Change a UI description | 2025-04-26 05:13:08 -07:00
oobabooga | cbd4d967cc | Update a --help message | 2025-04-26 05:09:52 -07:00
oobabooga | 763a7011c0 | Remove an ancient/obsolete migration check | 2025-04-26 04:59:05 -07:00
oobabooga | d9de14d1f7 | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00
oobabooga | d4017fbb6d | ExLlamaV3: Add kv cache quantization (#6903) | 2025-04-25 21:32:00 -03:00
oobabooga | d4b1e31c49 | Use --ctx-size to specify the context size for all loaders (old flags are still recognized as alternatives) | 2025-04-25 16:59:03 -07:00
oobabooga | faababc4ea | llama.cpp: Add a prompt processing progress bar | 2025-04-25 16:42:30 -07:00
oobabooga | 877cf44c08 | llama.cpp: Add StreamingLLM (--streaming-llm) | 2025-04-25 16:21:41 -07:00
oobabooga | d35818f4e1 | UI: Add a collapsible thinking block to messages with <think> steps (#6902) | 2025-04-25 18:02:02 -03:00
oobabooga | 98f4c694b9 | llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server | 2025-04-25 07:32:51 -07:00
oobabooga | 5861013e68 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-24 20:36:20 -07:00
oobabooga | a90df27ff5 | UI: Add a greeting when the chat history is empty | 2025-04-24 20:33:40 -07:00
oobabooga | ae1fe87365 | ExLlamaV2: Add speculative decoding (#6899) | 2025-04-25 00:11:04 -03:00
Matthew Jenkins | 8f2493cc60 | Prevent llamacpp defaults from locking up consumer hardware (#6870) | 2025-04-24 23:38:57 -03:00
oobabooga | 93fd4ad25d | llama.cpp: Document the --device-draft syntax | 2025-04-24 09:20:11 -07:00
oobabooga | f1b64df8dd | EXL2: add another torch.cuda.synchronize() call to prevent errors | 2025-04-24 09:03:49 -07:00
oobabooga | c71a2af5ab | Handle CMD_FLAGS.txt in the main code (closes #6896) | 2025-04-24 08:21:06 -07:00
oobabooga | bfbde73409 | Make 'instruct' the default chat mode | 2025-04-24 07:08:49 -07:00
oobabooga | e99c20bcb0 | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00
oobabooga | 9424ba17c8 | UI: show only part 00001 of multipart GGUF models in the model menu | 2025-04-22 19:56:42 -07:00
oobabooga | 25cf3600aa | Lint | 2025-04-22 08:04:02 -07:00
oobabooga | 39cbb5fee0 | Lint | 2025-04-22 08:03:25 -07:00
oobabooga | 008c6dd682 | Lint | 2025-04-22 08:02:37 -07:00
oobabooga | 78aeabca89 | Fix the transformers loader | 2025-04-21 18:33:14 -07:00
oobabooga | 8320190184 | Fix the exllamav2_HF and exllamav3_HF loaders | 2025-04-21 18:32:23 -07:00
oobabooga | 15989c2ed8 | Make llama.cpp the default loader | 2025-04-21 16:36:35 -07:00
oobabooga | 86c3ed3218 | Small change to the unload_model() function | 2025-04-20 20:00:56 -07:00
oobabooga | fe8e80e04a | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-20 19:09:27 -07:00
oobabooga | ff1c00bdd9 | llama.cpp: set the random seed manually | 2025-04-20 19:08:44 -07:00
Matthew Jenkins | d3e7c655e5 | Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) | 2025-04-20 23:06:24 -03:00
oobabooga | e243424ba1 | Fix an import | 2025-04-20 17:51:28 -07:00
oobabooga | 8cfd7f976b | Revert "Remove the old --model-menu flag" (reverts commit 109de34e3b) | 2025-04-20 13:35:42 -07:00
oobabooga | b3bf7a885d | Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 | 2025-04-20 11:32:48 -07:00
oobabooga | ae02ffc605 | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00
oobabooga | 6ba0164c70 | Lint | 2025-04-19 17:45:21 -07:00
oobabooga | 5ab069786b | llama.cpp: add back the two encode calls (they are harmless now) | 2025-04-19 17:38:36 -07:00
oobabooga | b9da5c7e3a | Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows | 2025-04-19 17:36:04 -07:00
oobabooga | 9c9df2063f | llama.cpp: fix unicode decoding (closes #6856) | 2025-04-19 16:38:15 -07:00