Commit graph

1685 commits

Author SHA1 Message Date
oobabooga f3da45f65d ExLlamaV3_HF: Change max_chunk_size to 256 2025-05-04 20:37:15 -07:00
oobabooga df7bb0db1f Rename --n-gpu-layers to --gpu-layers 2025-05-04 20:03:55 -07:00
oobabooga d0211afb3c Save the chat history right after sending a message 2025-05-04 18:52:01 -07:00
oobabooga 690d693913 UI: Add padding to only show the last message/reply after sending a message 2025-05-04 18:13:29 -07:00
    To avoid scrolling
oobabooga 7853fb1c8d Optimize the Chat tab (#6948) 2025-05-04 18:58:37 -03:00
oobabooga b7a5c7db8d llama.cpp: Handle short arguments in --extra-flags 2025-05-04 07:14:42 -07:00
oobabooga 4c2e3b168b llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) 2025-05-03 06:51:20 -07:00
oobabooga ea60f14674 UI: Show the list of files if the user tries to download a GGUF repository 2025-05-03 06:06:50 -07:00
oobabooga b71ef50e9d UI: Add a min-height to prevent constant scrolling during chat streaming 2025-05-02 23:45:58 -07:00
oobabooga d08acb4af9 UI: Rename enable_thinking -> Enable thinking 2025-05-02 20:50:52 -07:00
oobabooga 4cea720da8 UI: Remove the "Autoload the model" feature 2025-05-02 16:38:28 -07:00
oobabooga 905afced1c Add a --portable flag to hide things in portable mode 2025-05-02 16:34:29 -07:00
oobabooga 3f26b0408b Fix after 9e3867dc83 2025-05-02 16:17:22 -07:00
oobabooga 9e3867dc83 llama.cpp: Fix manual random seeds 2025-05-02 09:36:15 -07:00
oobabooga b950a0c6db Lint 2025-04-30 20:02:10 -07:00
oobabooga 307d13b540 UI: Minor label change 2025-04-30 18:58:14 -07:00
oobabooga 55283bb8f1 Fix CFG with ExLlamaV2_HF (closes #6937) 2025-04-30 18:43:45 -07:00
oobabooga a6c3ec2299 llama.cpp: Explicitly send cache_prompt = True 2025-04-30 15:24:07 -07:00
oobabooga 195a45c6e1 UI: Make thinking blocks closed by default 2025-04-30 15:12:46 -07:00
oobabooga cd5c32dc19 UI: Fix max_updates_second not working 2025-04-30 14:54:05 -07:00
oobabooga b46ca01340 UI: Set max_updates_second to 12 by default 2025-04-30 14:53:15 -07:00
    When the tokens/second is at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck.
oobabooga 771d3d8ed6 Fix getting the llama.cpp logprobs for Qwen3-30B-A3B 2025-04-30 06:48:32 -07:00
oobabooga 1dd4aedbe1 Fix the streaming_llm UI checkbox not being interactive 2025-04-29 05:28:46 -07:00
oobabooga d10bded7f8 UI: Add an enable_thinking option to enable/disable Qwen3 thinking 2025-04-28 22:37:01 -07:00
oobabooga 1ee0acc852 llama.cpp: Make --verbose print the llama-server command 2025-04-28 15:56:25 -07:00
oobabooga 15a29e99f8 Lint 2025-04-27 21:41:34 -07:00
oobabooga be13f5199b UI: Add an info message about how to use Speculative Decoding 2025-04-27 21:40:38 -07:00
oobabooga c6c2855c80 llama.cpp: Remove the timeout while loading models (closes #6907) 2025-04-27 21:22:21 -07:00
oobabooga ee0592473c Fix ExLlamaV3_HF leaking memory (attempt) 2025-04-27 21:04:02 -07:00
oobabooga 70952553c7 Lint 2025-04-26 19:29:08 -07:00
oobabooga 7b80acd524 Fix parsing --extra-flags 2025-04-26 18:40:03 -07:00
oobabooga 943451284f Fix the Notebook tab not loading its default prompt 2025-04-26 18:25:06 -07:00
oobabooga 511eb6aa94 Fix saving settings to settings.yaml 2025-04-26 18:20:00 -07:00
oobabooga 8b83e6f843 Prevent Gradio from saying 'Thank you for being a Gradio user!' 2025-04-26 18:14:57 -07:00
oobabooga 4a32e1f80c UI: show draft_max for ExLlamaV2 2025-04-26 18:01:44 -07:00
oobabooga 0fe3b033d0 Fix parsing of --n_ctx and --max_seq_len (2nd attempt) 2025-04-26 17:52:21 -07:00
oobabooga c4afc0421d Fix parsing of --n_ctx and --max_seq_len 2025-04-26 17:43:53 -07:00
oobabooga 234aba1c50 llama.cpp: Simplify the prompt processing progress indicator 2025-04-26 17:33:47 -07:00
    The progress bar was unreliable
oobabooga 4ff91b6588 Better default settings for Speculative Decoding 2025-04-26 17:24:40 -07:00
oobabooga bc55feaf3e Improve host header validation in local mode 2025-04-26 15:42:17 -07:00
oobabooga 3a207e7a57 Improve the --help formatting a bit 2025-04-26 07:31:04 -07:00
oobabooga 6acb0e1bee Change a UI description 2025-04-26 05:13:08 -07:00
oobabooga cbd4d967cc Update a --help message 2025-04-26 05:09:52 -07:00
oobabooga 763a7011c0 Remove an ancient/obsolete migration check 2025-04-26 04:59:05 -07:00
oobabooga d9de14d1f7 Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
oobabooga d4017fbb6d ExLlamaV3: Add kv cache quantization (#6903) 2025-04-25 21:32:00 -03:00
oobabooga d4b1e31c49 Use --ctx-size to specify the context size for all loaders 2025-04-25 16:59:03 -07:00
    Old flags are still recognized as alternatives.
oobabooga faababc4ea llama.cpp: Add a prompt processing progress bar 2025-04-25 16:42:30 -07:00
oobabooga 877cf44c08 llama.cpp: Add StreamingLLM (--streaming-llm) 2025-04-25 16:21:41 -07:00
oobabooga d35818f4e1 UI: Add a collapsible thinking block to messages with <think> steps (#6902) 2025-04-25 18:02:02 -03:00
oobabooga 98f4c694b9 llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server 2025-04-25 07:32:51 -07:00
oobabooga 5861013e68 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-04-24 20:36:20 -07:00
oobabooga a90df27ff5 UI: Add a greeting when the chat history is empty 2025-04-24 20:33:40 -07:00
oobabooga ae1fe87365 ExLlamaV2: Add speculative decoding (#6899) 2025-04-25 00:11:04 -03:00
Matthew Jenkins 8f2493cc60 Prevent llamacpp defaults from locking up consumer hardware (#6870) 2025-04-24 23:38:57 -03:00
oobabooga 93fd4ad25d llama.cpp: Document the --device-draft syntax 2025-04-24 09:20:11 -07:00
oobabooga f1b64df8dd EXL2: add another torch.cuda.synchronize() call to prevent errors 2025-04-24 09:03:49 -07:00
oobabooga c71a2af5ab Handle CMD_FLAGS.txt in the main code (closes #6896) 2025-04-24 08:21:06 -07:00
oobabooga bfbde73409 Make 'instruct' the default chat mode 2025-04-24 07:08:49 -07:00
oobabooga e99c20bcb0 llama.cpp: Add speculative decoding (#6891) 2025-04-23 20:10:16 -03:00
oobabooga 9424ba17c8 UI: show only part 00001 of multipart GGUF models in the model menu 2025-04-22 19:56:42 -07:00
oobabooga 25cf3600aa Lint 2025-04-22 08:04:02 -07:00
oobabooga 39cbb5fee0 Lint 2025-04-22 08:03:25 -07:00
oobabooga 008c6dd682 Lint 2025-04-22 08:02:37 -07:00
oobabooga 78aeabca89 Fix the transformers loader 2025-04-21 18:33:14 -07:00
oobabooga 8320190184 Fix the exllamav2_HF and exllamav3_HF loaders 2025-04-21 18:32:23 -07:00
oobabooga 15989c2ed8 Make llama.cpp the default loader 2025-04-21 16:36:35 -07:00
oobabooga 86c3ed3218 Small change to the unload_model() function 2025-04-20 20:00:56 -07:00
oobabooga fe8e80e04a Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-04-20 19:09:27 -07:00
oobabooga ff1c00bdd9 llama.cpp: set the random seed manually 2025-04-20 19:08:44 -07:00
Matthew Jenkins d3e7c655e5 Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) 2025-04-20 23:06:24 -03:00
oobabooga e243424ba1 Fix an import 2025-04-20 17:51:28 -07:00
oobabooga 8cfd7f976b Revert "Remove the old --model-menu flag" 2025-04-20 13:35:42 -07:00
    This reverts commit 109de34e3b.
oobabooga b3bf7a885d Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 2025-04-20 11:32:48 -07:00
oobabooga ae02ffc605 Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
oobabooga 6ba0164c70 Lint 2025-04-19 17:45:21 -07:00
oobabooga 5ab069786b llama.cpp: add back the two encode calls (they are harmless now) 2025-04-19 17:38:36 -07:00
oobabooga b9da5c7e3a Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows 2025-04-19 17:36:04 -07:00
oobabooga 9c9df2063f llama.cpp: fix unicode decoding (closes #6856) 2025-04-19 16:38:15 -07:00
oobabooga ba976d1390 llama.cpp: avoid two 'encode' calls 2025-04-19 16:35:01 -07:00
oobabooga ed42154c78 Revert "llama.cpp: close the connection immediately on 'Stop'" 2025-04-19 05:32:36 -07:00
    This reverts commit 5fdebc554b.
oobabooga 5fdebc554b llama.cpp: close the connection immediately on 'Stop' 2025-04-19 04:59:24 -07:00
oobabooga 6589ebeca8 Revert "llama.cpp: new optimization attempt" 2025-04-18 21:16:21 -07:00
    This reverts commit e2e73ed22f.
oobabooga e2e73ed22f llama.cpp: new optimization attempt 2025-04-18 21:05:08 -07:00
oobabooga e2e90af6cd llama.cpp: don't include --rope-freq-base in the launch command if null 2025-04-18 20:51:18 -07:00
oobabooga 9f07a1f5d7 llama.cpp: new attempt at optimizing the llama-server connection 2025-04-18 19:30:53 -07:00
oobabooga f727b4a2cc llama.cpp: close the connection properly when generation is cancelled 2025-04-18 19:01:39 -07:00
oobabooga b3342b8dd8 llama.cpp: optimize the llama-server connection 2025-04-18 18:46:36 -07:00
oobabooga 2002590536 Revert "Attempt at making the llama-server streaming more efficient." 2025-04-18 18:13:54 -07:00
    This reverts commit 5ad080ff25.
oobabooga 71ae05e0a4 llama.cpp: Fix the sampler priority handling 2025-04-18 18:06:36 -07:00
oobabooga 5ad080ff25 Attempt at making the llama-server streaming more efficient. 2025-04-18 18:04:49 -07:00
oobabooga 4fabd729c9 Fix the API without streaming or without 'sampler_priority' (closes #6851) 2025-04-18 17:25:22 -07:00
oobabooga 5135523429 Fix the new llama.cpp loader failing to unload models 2025-04-18 17:10:26 -07:00
oobabooga caa6afc88b Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True 2025-04-18 09:57:57 -07:00
oobabooga d00d713ace Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader 2025-04-18 08:14:15 -07:00
oobabooga c1cc65e82e Lint 2025-04-18 08:06:51 -07:00
oobabooga d68f0fbdf7 Remove obsolete references to llamacpp_HF 2025-04-18 07:46:04 -07:00
oobabooga a0abf93425 Connect --rope-freq-base to the new llama.cpp loader 2025-04-18 06:53:51 -07:00
oobabooga ef9910c767 Fix a bug after c6901aba9f 2025-04-18 06:51:28 -07:00
oobabooga 1c4a2c9a71 Make exllamav3 safer as well 2025-04-18 06:17:58 -07:00