Commit graph

4411 commits

Author SHA1 Message Date
oobabooga e4fb2475d2 UI: Multiple small style improvements (light/dark themes) 2025-05-06 14:02:15 -07:00
oobabooga c4f36db0d8 llama.cpp: remove tfs (it doesn't get used) 2025-05-06 08:41:13 -07:00
oobabooga 05115e42ee Set top_n_sigma before temperature by default 2025-05-06 08:27:21 -07:00
oobabooga 1927afe894 Fix top_n_sigma not showing for llama.cpp 2025-05-06 08:18:49 -07:00
oobabooga 605cc9ab14 Update exllamav3 2025-05-06 06:43:35 -07:00
oobabooga 89590adc14 Update llama.cpp 2025-05-06 06:41:17 -07:00
oobabooga d1c0154d66 llama.cpp: Add top_n_sigma, fix typical_p in sampler priority 2025-05-06 06:38:39 -07:00
oobabooga cbef35054c UI: CSS fix 2025-05-05 17:46:09 -07:00
Evgenii Novikov 4e8f628d3c docker: App uid typo in other docker composes (#6958) 2025-05-05 20:05:15 -03:00
oobabooga 530223bf0b UI: Fix the hover menu colors 2025-05-05 16:03:43 -07:00
oobabooga 76f947e3cf UI: Minor style change 2025-05-05 15:58:29 -07:00
Alireza Ghasemi 99bd66445f SuperboogaV2: minor update to avoid json serialization errors #6945 2025-05-05 19:04:06 -03:00
Evgenii Novikov 987505ead3 docker: Fix app uid typo in cpu docker compose (#6957) 2025-05-05 19:03:33 -03:00
oobabooga 941e0663da Update README 2025-05-05 14:18:16 -07:00
oobabooga f82667f0b4 Remove more multimodal extension references 2025-05-05 14:17:00 -07:00
oobabooga 85bf2e15b9 API: Remove obsolete multimodal extension handling 2025-05-05 14:14:48 -07:00
    Multimodal support will be added back once it's implemented in llama-server.
mamei16 8137eb8ef4 Dynamic Chat Message UI Update Speed (#6952) 2025-05-05 18:05:23 -03:00
oobabooga 53d8e46502 Ensure environment isolation in portable installs 2025-05-05 12:28:17 -07:00
oobabooga bf5290bc0f Fix the hover menu in light theme 2025-05-05 08:04:12 -07:00
oobabooga 967b70327e Light theme improvement 2025-05-05 07:59:02 -07:00
oobabooga 6001d279c6 Light theme improvement 2025-05-05 07:42:13 -07:00
oobabooga 475e012ee8 UI: Improve the light theme colors 2025-05-05 06:16:29 -07:00
oobabooga b817bb33fd Minor fix after df7bb0db1f 2025-05-05 05:00:20 -07:00
oobabooga f3da45f65d ExLlamaV3_HF: Change max_chunk_size to 256 2025-05-04 20:37:15 -07:00
oobabooga df7bb0db1f Rename --n-gpu-layers to --gpu-layers 2025-05-04 20:03:55 -07:00
oobabooga d0211afb3c Save the chat history right after sending a message 2025-05-04 18:52:01 -07:00
oobabooga 2da197bba4 Refinement after previous commit 2025-05-04 18:29:05 -07:00
oobabooga 690d693913 UI: Add padding to only show the last message/reply after sending a message 2025-05-04 18:13:29 -07:00
    To avoid scrolling
oobabooga d9da16edba UI: Remove the chat input textarea border 2025-05-04 16:53:52 -07:00
oobabooga 84ab1f95be UI: Increase the chat area a bit 2025-05-04 15:21:52 -07:00
oobabooga d186621926 UI: Fixes after previous commit 2025-05-04 15:19:46 -07:00
oobabooga 7853fb1c8d Optimize the Chat tab (#6948) 2025-05-04 18:58:37 -03:00
oobabooga b7a5c7db8d llama.cpp: Handle short arguments in --extra-flags 2025-05-04 07:14:42 -07:00
oobabooga 5f5569e9ac Update README 2025-05-04 06:20:36 -07:00
oobabooga 4c2e3b168b llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) 2025-05-03 06:51:20 -07:00
oobabooga ea60f14674 UI: Show the list of files if the user tries to download a GGUF repository 2025-05-03 06:06:50 -07:00
oobabooga b71ef50e9d UI: Add a min-height to prevent constant scrolling during chat streaming 2025-05-02 23:45:58 -07:00
oobabooga b21bd8bb1e UI: Invert user/assistant message colors in instruct mode 2025-05-02 22:43:33 -07:00
    The goal is to make assistant messages more readable.
oobabooga d08acb4af9 UI: Rename enable_thinking -> Enable thinking 2025-05-02 20:50:52 -07:00
oobabooga 3526b7923c Remove extensions with requirements from portable builds 2025-05-02 17:40:53 -07:00
oobabooga 4cea720da8 UI: Remove the "Autoload the model" feature 2025-05-02 16:38:28 -07:00
oobabooga 905afced1c Add a --portable flag to hide things in portable mode 2025-05-02 16:34:29 -07:00
oobabooga 3f26b0408b Fix after 9e3867dc83 2025-05-02 16:17:22 -07:00
oobabooga 9e3867dc83 llama.cpp: Fix manual random seeds 2025-05-02 09:36:15 -07:00
oobabooga d5c407cf35 Use Vulkan instead of ROCm for llama.cpp on AMD 2025-05-01 20:05:36 -07:00
oobabooga f8aaf3c23a Use ROCm 6.2.4 on AMD 2025-05-01 19:50:46 -07:00
oobabooga c12a53c998 Use turboderp's exllamav2 wheels 2025-05-01 19:46:56 -07:00
oobabooga 89090d9a61 Update README 2025-05-01 08:22:54 -07:00
oobabooga b950a0c6db Lint 2025-04-30 20:02:10 -07:00
oobabooga 307d13b540 UI: Minor label change 2025-04-30 18:58:14 -07:00