Commit graph

368 commits

Author SHA1 Message Date
oobabooga e015355e4a Update README 2025-07-09 20:03:53 -07:00
oobabooga bd4881c4dc Use eager attention by default instead of sdpa 2025-07-09 19:57:37 -07:00
oobabooga 6c2bdda0f0 Transformers loader: replace use_flash_attention_2/use_eager_attention with a unified attn_implementation
Closes #7107
2025-07-09 18:39:37 -07:00
oobabooga faae4dc1b0 Autosave generated text in the Notebook tab (#7079) 2025-06-16 17:36:05 -03:00
oobabooga de24b3bb31 Merge the Default and Notebook tabs into a single Notebook tab (#7078) 2025-06-16 13:19:29 -03:00
oobabooga 2dee3a66ff Add an option to include/exclude attachments from previous messages in the chat prompt 2025-06-12 21:37:18 -07:00
oobabooga 004fd8316c Minor changes 2025-06-11 07:49:51 -07:00
oobabooga 27140f3563 Revert "Don't save active extensions through the UI"
This reverts commit df98f4b331.
2025-06-11 07:25:27 -07:00
oobabooga 3f9eb3aad1 Fix the preset dropdown when the default preset file is not present 2025-06-10 14:22:37 -07:00
oobabooga df98f4b331 Don't save active extensions through the UI
Prevents command-line activated extensions from becoming permanently active due to autosave.
2025-06-09 20:28:16 -07:00
oobabooga 84f66484c5 Make it optional to turn long pasted content into an attachment 2025-06-08 09:31:38 -07:00
oobabooga 1bdf11b511 Use the Qwen3 - Thinking preset by default 2025-06-07 22:23:09 -07:00
oobabooga caf9fca5f3 Avoid some code repetition 2025-06-07 22:11:35 -07:00
oobabooga 6436bf1920 More UI persistence: presets and characters (#7051) 2025-06-08 01:58:02 -03:00
oobabooga 35ed55d18f UI persistence (#7050) 2025-06-07 22:46:52 -03:00
oobabooga bb409c926e Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
oobabooga 9ec46b8c44 Remove the HQQ loader (HQQ models can be loaded through Transformers) 2025-05-19 09:23:24 -07:00
oobabooga 126b3a768f Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now)
This reverts commit 8137eb8ef4.
2025-05-18 12:38:36 -07:00
oobabooga 47d4758509 Fix #6970 2025-05-10 17:46:00 -07:00
oobabooga b28fa86db6 Default --gpu-layers to 256 2025-05-06 17:51:55 -07:00
Downtown-Case 5ef564a22e Fix model config loading in shared.py for Python 3.13 (#6961) 2025-05-06 17:03:33 -03:00
mamei16 8137eb8ef4 Dynamic Chat Message UI Update Speed (#6952) 2025-05-05 18:05:23 -03:00
oobabooga df7bb0db1f Rename --n-gpu-layers to --gpu-layers 2025-05-04 20:03:55 -07:00
oobabooga 4cea720da8 UI: Remove the "Autoload the model" feature 2025-05-02 16:38:28 -07:00
oobabooga 905afced1c Add a --portable flag to hide things in portable mode 2025-05-02 16:34:29 -07:00
oobabooga b46ca01340 UI: Set max_updates_second to 12 by default
When the tokens/second is at ~50 and the model is a thinking model,
the markdown rendering for the streaming message becomes a CPU
bottleneck.
2025-04-30 14:53:15 -07:00
oobabooga d10bded7f8 UI: Add an enable_thinking option to enable/disable Qwen3 thinking 2025-04-28 22:37:01 -07:00
oobabooga 7b80acd524 Fix parsing --extra-flags 2025-04-26 18:40:03 -07:00
oobabooga 0fe3b033d0 Fix parsing of --n_ctx and --max_seq_len (2nd attempt) 2025-04-26 17:52:21 -07:00
oobabooga c4afc0421d Fix parsing of --n_ctx and --max_seq_len 2025-04-26 17:43:53 -07:00
oobabooga 4ff91b6588 Better default settings for Speculative Decoding 2025-04-26 17:24:40 -07:00
oobabooga 3a207e7a57 Improve the --help formatting a bit 2025-04-26 07:31:04 -07:00
oobabooga cbd4d967cc Update a --help message 2025-04-26 05:09:52 -07:00
oobabooga d9de14d1f7 Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
oobabooga d4017fbb6d ExLlamaV3: Add kv cache quantization (#6903) 2025-04-25 21:32:00 -03:00
oobabooga d4b1e31c49 Use --ctx-size to specify the context size for all loaders
Old flags are still recognized as alternatives.
2025-04-25 16:59:03 -07:00
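
A minimal usage sketch for the --ctx-size change above, assuming the project's usual python server.py entry point (the model name and context size are illustrative placeholders, not values taken from this log); per the commit body, older per-loader flags such as --n_ctx and --max_seq_len are still accepted as alternatives:

    python server.py --model MyModel.gguf --ctx-size 8192
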
oobabooga 877cf44c08 llama.cpp: Add StreamingLLM (--streaming-llm) 2025-04-25 16:21:41 -07:00
oobabooga d35818f4e1 UI: Add a collapsible thinking block to messages with <think> steps (#6902) 2025-04-25 18:02:02 -03:00
oobabooga 98f4c694b9 llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server 2025-04-25 07:32:51 -07:00
Matthew Jenkins 8f2493cc60 Prevent llamacpp defaults from locking up consumer hardware (#6870) 2025-04-24 23:38:57 -03:00
oobabooga 93fd4ad25d llama.cpp: Document the --device-draft syntax 2025-04-24 09:20:11 -07:00
oobabooga c71a2af5ab Handle CMD_FLAGS.txt in the main code (closes #6896) 2025-04-24 08:21:06 -07:00
oobabooga bfbde73409 Make 'instruct' the default chat mode 2025-04-24 07:08:49 -07:00
oobabooga e99c20bcb0 llama.cpp: Add speculative decoding (#6891) 2025-04-23 20:10:16 -03:00
oobabooga 8cfd7f976b Revert "Remove the old --model-menu flag"
This reverts commit 109de34e3b.
2025-04-20 13:35:42 -07:00
oobabooga ae02ffc605 Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
oobabooga d68f0fbdf7 Remove obsolete references to llamacpp_HF 2025-04-18 07:46:04 -07:00
oobabooga c6901aba9f Remove deprecation warning code 2025-04-18 06:05:47 -07:00
oobabooga 8144e1031e Remove deprecated command-line flags 2025-04-18 06:02:28 -07:00
oobabooga ae54d8faaa New llama.cpp loader (#6846) 2025-04-18 09:59:37 -03:00