Commit graph

4396 commits

Author SHA1 Message Date
oobabooga 85bf2e15b9 API: Remove obsolete multimodal extension handling (multimodal support will be added back once it's implemented in llama-server) 2025-05-05 14:14:48 -07:00
mamei16 8137eb8ef4 Dynamic Chat Message UI Update Speed (#6952) 2025-05-05 18:05:23 -03:00
oobabooga 53d8e46502 Ensure environment isolation in portable installs 2025-05-05 12:28:17 -07:00
oobabooga bf5290bc0f Fix the hover menu in light theme 2025-05-05 08:04:12 -07:00
oobabooga 967b70327e Light theme improvement 2025-05-05 07:59:02 -07:00
oobabooga 6001d279c6 Light theme improvement 2025-05-05 07:42:13 -07:00
oobabooga 475e012ee8 UI: Improve the light theme colors 2025-05-05 06:16:29 -07:00
oobabooga b817bb33fd Minor fix after df7bb0db1f 2025-05-05 05:00:20 -07:00
oobabooga f3da45f65d ExLlamaV3_HF: Change max_chunk_size to 256 2025-05-04 20:37:15 -07:00
oobabooga df7bb0db1f Rename --n-gpu-layers to --gpu-layers 2025-05-04 20:03:55 -07:00
oobabooga d0211afb3c Save the chat history right after sending a message 2025-05-04 18:52:01 -07:00
oobabooga 2da197bba4 Refinement after previous commit 2025-05-04 18:29:05 -07:00
oobabooga 690d693913 UI: Add padding to only show the last message/reply after sending a message (to avoid scrolling) 2025-05-04 18:13:29 -07:00
oobabooga d9da16edba UI: Remove the chat input textarea border 2025-05-04 16:53:52 -07:00
oobabooga 84ab1f95be UI: Increase the chat area a bit 2025-05-04 15:21:52 -07:00
oobabooga d186621926 UI: Fixes after previous commit 2025-05-04 15:19:46 -07:00
oobabooga 7853fb1c8d Optimize the Chat tab (#6948) 2025-05-04 18:58:37 -03:00
oobabooga b7a5c7db8d llama.cpp: Handle short arguments in --extra-flags 2025-05-04 07:14:42 -07:00
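The --extra-flags change above concerns passing extra CLI flags through to llama-server, where short flags (like "fa") need a single dash and long ones a double dash. A minimal sketch of that idea; the function name and the length-based short/long heuristic are illustrative assumptions, not the project's actual code:

```python
def parse_extra_flags(extra_flags: str) -> list[str]:
    """Turn a comma-separated flag string into llama-server CLI arguments.

    Short names (e.g. "fa") get a single dash, longer names a double dash
    (a guessed heuristic); "key=value" items become a flag followed by
    its value.
    """
    args = []
    for item in filter(None, extra_flags.split(",")):
        key, _, value = item.partition("=")
        args.append(("-" if len(key) <= 3 else "--") + key)
        if value:
            args.append(value)
    return args
```

For example, `parse_extra_flags("fa,ctx-size=8192")` yields `["-fa", "--ctx-size", "8192"]`.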
oobabooga 5f5569e9ac Update README 2025-05-04 06:20:36 -07:00
oobabooga 4c2e3b168b llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) 2025-05-03 06:51:20 -07:00
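The retry mechanism above guards against transient failures when requesting logits from llama.cpp. A generic sketch of such a retry loop, assuming a zero-argument callable standing in for the actual request; names and retry counts are illustrative, not the project's real implementation:

```python
import time

def get_logits_with_retry(fetch, retries: int = 3, delay: float = 0.5):
    """Call fetch() until it succeeds, retrying on transient failure.

    fetch is a placeholder for the actual llama.cpp logits request,
    which the commit notes sometimes fails on the first attempt.
    """
    last_exc = None
    for _ in range(retries):
        try:
            return fetch()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```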
oobabooga ea60f14674 UI: Show the list of files if the user tries to download a GGUF repository 2025-05-03 06:06:50 -07:00
oobabooga b71ef50e9d UI: Add a min-height to prevent constant scrolling during chat streaming 2025-05-02 23:45:58 -07:00
oobabooga b21bd8bb1e UI: Invert user/assistant message colors in instruct mode (to make assistant messages more readable) 2025-05-02 22:43:33 -07:00
oobabooga d08acb4af9 UI: Rename enable_thinking -> Enable thinking 2025-05-02 20:50:52 -07:00
oobabooga 3526b7923c Remove extensions with requirements from portable builds 2025-05-02 17:40:53 -07:00
oobabooga 4cea720da8 UI: Remove the "Autoload the model" feature 2025-05-02 16:38:28 -07:00
oobabooga 905afced1c Add a --portable flag to hide things in portable mode 2025-05-02 16:34:29 -07:00
oobabooga 3f26b0408b Fix after 9e3867dc83 2025-05-02 16:17:22 -07:00
oobabooga 9e3867dc83 llama.cpp: Fix manual random seeds 2025-05-02 09:36:15 -07:00
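The seed fix above concerns honoring a manually chosen sampling seed. The common convention is that -1 means "random" and any other value is used verbatim for reproducibility; a minimal sketch of that convention, with an illustrative function name rather than the project's actual code:

```python
import random

def resolve_seed(seed: int) -> int:
    """Translate the usual UI convention (-1 = random) into a concrete seed.

    A seed of -1 picks a fresh random seed; any other value is passed
    through unchanged so generations are reproducible.
    """
    if seed == -1:
        return random.randint(0, 2**31 - 1)
    return seed
```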
oobabooga d5c407cf35 Use Vulkan instead of ROCm for llama.cpp on AMD 2025-05-01 20:05:36 -07:00
oobabooga f8aaf3c23a Use ROCm 6.2.4 on AMD 2025-05-01 19:50:46 -07:00
oobabooga c12a53c998 Use turboderp's exllamav2 wheels 2025-05-01 19:46:56 -07:00
oobabooga 89090d9a61 Update README 2025-05-01 08:22:54 -07:00
oobabooga b950a0c6db Lint 2025-04-30 20:02:10 -07:00
oobabooga 307d13b540 UI: Minor label change 2025-04-30 18:58:14 -07:00
oobabooga 55283bb8f1 Fix CFG with ExLlamaV2_HF (closes #6937) 2025-04-30 18:43:45 -07:00
oobabooga ec2e641749 Update settings-template.yaml 2025-04-30 15:25:26 -07:00
oobabooga a6c3ec2299 llama.cpp: Explicitly send cache_prompt = True 2025-04-30 15:24:07 -07:00
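cache_prompt is a real llama-server request parameter: it asks the server to reuse the KV cache for the shared prefix between consecutive requests instead of re-evaluating the whole prompt each time. A sketch of what explicitly sending it might look like; the helper name is illustrative, not the project's actual code:

```python
def build_completion_payload(prompt: str, **params) -> dict:
    """Build a llama-server /completion request body.

    Explicitly setting cache_prompt = True enables prompt-prefix
    KV-cache reuse rather than relying on the server default.
    """
    payload = {"prompt": prompt, "cache_prompt": True}
    payload.update(params)
    return payload
```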
oobabooga 195a45c6e1 UI: Make thinking blocks closed by default 2025-04-30 15:12:46 -07:00
oobabooga cd5c32dc19 UI: Fix max_updates_second not working 2025-04-30 14:54:05 -07:00
oobabooga b46ca01340 UI: Set max_updates_second to 12 by default (when the tokens/second is at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck) 2025-04-30 14:53:15 -07:00
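Capping redraws at max_updates_second amounts to a simple rate limiter on the streaming render loop. A minimal sketch of that idea, with an illustrative class name rather than the project's actual implementation:

```python
import time

class UpdateThrottle:
    """Allow at most max_updates_second UI redraws per second."""

    def __init__(self, max_updates_second: float):
        self.min_interval = 1.0 / max_updates_second
        self.last_update = float("-inf")

    def should_update(self) -> bool:
        # Redraw only if enough time has passed since the last redraw;
        # intermediate tokens are skipped, not lost, because the full
        # message is re-rendered on the next allowed update.
        now = time.monotonic()
        if now - self.last_update >= self.min_interval:
            self.last_update = now
            return True
        return False
```

At the default of 12, markdown re-rendering runs at most 12 times per second regardless of how fast tokens arrive.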
oobabooga a4bf339724 Bump llama.cpp 2025-04-30 11:13:14 -07:00
oobabooga e9569c3984 Fixes after c5fe92d152 2025-04-30 06:57:23 -07:00
oobabooga 771d3d8ed6 Fix getting the llama.cpp logprobs for Qwen3-30B-A3B 2025-04-30 06:48:32 -07:00
oobabooga 7f49e3c3ce Bump ExLlamaV3 2025-04-30 05:25:09 -07:00
oobabooga c5fe92d152 Bump llama.cpp 2025-04-30 05:24:58 -07:00
oobabooga 1dd4aedbe1 Fix the streaming_llm UI checkbox not being interactive 2025-04-29 05:28:46 -07:00
oobabooga c5fb51e5d1 Update README 2025-04-28 22:40:26 -07:00
oobabooga d10bded7f8 UI: Add an enable_thinking option to enable/disable Qwen3 thinking 2025-04-28 22:37:01 -07:00
oobabooga 1ee0acc852 llama.cpp: Make --verbose print the llama-server command 2025-04-28 15:56:25 -07:00