Commit graph

1988 commits

Author SHA1 Message Date
oobabooga e5b8d4d072 Fix a typo 2025-08-05 15:52:56 -07:00
oobabooga 701048cf33 Try to avoid breaking jinja2 parsing for older models 2025-08-05 15:51:24 -07:00
oobabooga 7d98ca6195 Make web search functional with thinking models 2025-08-05 15:44:33 -07:00
oobabooga 0e42575c57 Fix thinking block parsing for GPT-OSS under llama.cpp 2025-08-05 15:36:20 -07:00
oobabooga 498778b8ac Add a new 'Reasoning effort' UI element 2025-08-05 15:19:11 -07:00
oobabooga 6bb8212731 Fix thinking block rendering for GPT-OSS 2025-08-05 15:06:22 -07:00
oobabooga 5c5a4dfc14 Fix impersonate 2025-08-05 13:04:10 -07:00
oobabooga ecd16d6bf9 Automatically set skip_special_tokens to False for channel-based templates 2025-08-05 12:57:49 -07:00
oobabooga 178c3e75cc Handle templates with channels separately 2025-08-05 12:52:17 -07:00
oobabooga 9f28f53cfc Better parsing of the gpt-oss template 2025-08-05 11:56:00 -07:00
oobabooga 3b28dc1821 Don't pass torch_dtype to transformers loader, let it be autodetected 2025-08-05 11:35:53 -07:00
oobabooga 3039aeffeb Fix parsing the gpt-oss-20b template 2025-08-05 11:35:17 -07:00
oobabooga 5989043537 Transformers: Support standalone .jinja chat templates (for GPT-OSS) 2025-08-05 11:22:18 -07:00
oobabooga f08bb9a201 Handle edge case in chat history loading (closes #7155) 2025-07-24 10:34:59 -07:00
oobabooga d746484521 Handle both int and str types in grammar char processing 2025-07-23 11:52:51 -07:00
oobabooga 0c667de7a7 UI: Add a None option for the speculative decoding model (closes #7145) 2025-07-19 12:14:41 -07:00
oobabooga 845432b9b4 Remove the obsolete modules/relative_imports.py file 2025-07-14 21:03:18 -07:00
oobabooga 1d1b20bd77 Remove the --torch-compile option (it doesn't do anything currently) 2025-07-11 10:51:23 -07:00
oobabooga 273888f218 Revert "Use eager attention by default instead of sdpa" (reverts commit bd4881c4dc) 2025-07-10 18:56:46 -07:00
oobabooga 635e6efd18 Ignore add_bos_token in instruct prompts, let the jinja2 template decide 2025-07-10 07:14:01 -07:00
oobabooga e015355e4a Update README 2025-07-09 20:03:53 -07:00
oobabooga bd4881c4dc Use eager attention by default instead of sdpa 2025-07-09 19:57:37 -07:00
oobabooga b69f435311 Fix latest transformers being super slow 2025-07-09 19:56:50 -07:00
oobabooga 6c2bdda0f0 Transformers loader: replace use_flash_attention_2/use_eager_attention with a unified attn_implementation (closes #7107) 2025-07-09 18:39:37 -07:00
oobabooga 07e6f004c5 Rename a button in the Session tab for clarity 2025-07-07 11:28:47 -07:00
Alidr79 e5767d4fc5 Update ui_model_menu.py blocking the --multi-user access in backend (#7098) 2025-07-06 21:48:53 -03:00
oobabooga 60123a67ac Better log message when extension requirements are not found 2025-07-06 17:44:41 -07:00
oobabooga e6bc7742fb Support installing user extensions in user_data/extensions/ 2025-07-06 17:30:23 -07:00
Philipp Claßen 959d4ddb91 Fix for chat sidebars toggle buttons disappearing (#7106) 2025-07-06 20:51:42 -03:00
oobabooga de4ccffff8 Fix the duckduckgo search 2025-07-06 16:24:57 -07:00
oobabooga 92ec8dda03 Fix chat history getting lost if the UI is inactive for a long time (closes #7109) 2025-07-04 06:04:04 -07:00
zombiegreedo 877c651c04 Handle either missing <think> start or </think> end tags (#7102) 2025-07-03 23:05:46 -03:00
oobabooga c3faecfd27 Minor change 2025-06-22 17:51:09 -07:00
oobabooga 1b19dd77a4 Move 'Enable thinking' to the Chat tab 2025-06-22 17:29:17 -07:00
oobabooga 02f604479d Remove the pre-jinja2 custom stopping string handling (closes #7094) 2025-06-21 14:03:35 -07:00
oobabooga 58282f7107 Replace 'Generate' with 'Send' in the Chat tab 2025-06-20 06:59:48 -07:00
oobabooga acd57b6a85 Minor UI change 2025-06-19 15:39:43 -07:00
oobabooga f08db63fbc Change some comments 2025-06-19 15:26:45 -07:00
oobabooga a1b606a6ac Fix obtaining the maximum number of GPU layers for DeepSeek-R1-0528-GGUF 2025-06-19 12:30:57 -07:00
oobabooga 3344510553 Force dark theme on the Gradio login page 2025-06-19 12:11:34 -07:00
oobabooga 645463b9f0 Add fallback values for theme colors 2025-06-19 11:28:12 -07:00
oobabooga 9c6913ad61 Show file sizes on "Get file list" 2025-06-18 21:35:07 -07:00
oobabooga 0cb82483ef Lint 2025-06-18 18:26:59 -07:00
oobabooga 6cc7bbf009 Better autosave behavior for notebook tab when there are 2 columns 2025-06-18 15:54:32 -07:00
oobabooga 197b327374 Minor log message change 2025-06-18 13:36:54 -07:00
oobabooga 2f45d75309 Increase the area of the notebook textbox 2025-06-18 13:22:06 -07:00
oobabooga 7cb2b1bfdb Fix some events 2025-06-18 10:27:38 -07:00
oobabooga 22cc9e0115 Remove 'Send to Default' 2025-06-18 10:21:48 -07:00
oobabooga 678f40297b Clear the default tab output when switching prompts 2025-06-17 17:40:48 -07:00
oobabooga da148232eb Better filenames for new prompts in the Notebook tab 2025-06-17 15:10:44 -07:00
oobabooga fc23345c6d Send the default input to the notebook textbox when switching 2 columns to 1 (instead of the output) 2025-06-17 15:03:14 -07:00
oobabooga aa44e542cb Revert "Safer usage of mkdir across the project" (reverts commit 0d1597616f) 2025-06-17 07:11:59 -07:00
oobabooga 0d1597616f Safer usage of mkdir across the project 2025-06-17 07:09:33 -07:00
oobabooga 66e991841a Fix the character pfp not appearing when switching from instruct to chat modes 2025-06-16 18:45:44 -07:00
oobabooga be3d371290 Close the big profile picture when switching to instruct mode 2025-06-16 18:42:17 -07:00
oobabooga 26eda537f0 Add auto-save for notebook textbox while typing 2025-06-16 17:48:23 -07:00
oobabooga 88c0204357 Disable start_with when generating the websearch query 2025-06-16 14:53:05 -07:00
oobabooga faae4dc1b0 Autosave generated text in the Notebook tab (#7079) 2025-06-16 17:36:05 -03:00
oobabooga de24b3bb31 Merge the Default and Notebook tabs into a single Notebook tab (#7078) 2025-06-16 13:19:29 -03:00
oobabooga cac225b589 Small style improvements 2025-06-16 07:26:39 -07:00
oobabooga 7ba3d4425f Remove the 'Send to negative prompt' button 2025-06-16 07:23:09 -07:00
oobabooga 34bf93ef47 Move 'Custom system message' to the Parameters tab 2025-06-16 07:22:14 -07:00
oobabooga c9c3b716fb Move character settings to a new 'Character' main tab 2025-06-16 07:21:25 -07:00
oobabooga f77f1504f5 Improve the style of the Character and User tabs 2025-06-16 06:12:37 -07:00
oobabooga bc2b0f54e9 Only save extensions settings on manual save 2025-06-15 15:53:16 -07:00
oobabooga 609c3ac893 Optimize the end of generation with llama.cpp 2025-06-15 08:03:27 -07:00
oobabooga db7d717df7 Remove images and links from websearch results (this reduces noise a lot) 2025-06-14 20:00:25 -07:00
oobabooga e263dbf852 Improve user input truncation 2025-06-14 19:43:51 -07:00
oobabooga 09606a38d3 Truncate web search results to at most 8192 tokens 2025-06-14 19:37:32 -07:00
oobabooga 8e9c0287aa UI: Fix edge case where gpu-layers slider maximum is incorrectly limited 2025-06-14 10:12:11 -07:00
oobabooga d2da40b0e4 Remember the last selected chat for each mode/character 2025-06-14 08:25:00 -07:00
oobabooga 879fa3d8c4 Improve the wpp style & simplify the code 2025-06-14 07:14:22 -07:00
oobabooga 9a2353f97b Better log message when the user input gets truncated 2025-06-13 05:44:02 -07:00
Miriam f4f621b215 ensure estimated vram is updated when switching between different models (#7071) 2025-06-13 02:56:33 -03:00
oobabooga f337767f36 Add error handling for non-llama.cpp models in portable mode 2025-06-12 22:17:39 -07:00
oobabooga 2dee3a66ff Add an option to include/exclude attachments from previous messages in the chat prompt 2025-06-12 21:37:18 -07:00
oobabooga 004fd8316c Minor changes 2025-06-11 07:49:51 -07:00
oobabooga 570d5b8936 Only save extensions on manual save 2025-06-11 07:39:49 -07:00
oobabooga 27140f3563 Revert "Don't save active extensions through the UI" (reverts commit df98f4b331) 2025-06-11 07:25:27 -07:00
LawnMauer bc921c66e5 Load js and css sources in UTF-8 (#7059) 2025-06-10 22:16:50 -03:00
oobabooga 75da90190f Fix character dropdown sometimes disappearing in the Parameters tab 2025-06-10 17:34:54 -07:00
oobabooga 1c1fd3be46 Remove some log messages 2025-06-10 14:29:28 -07:00
oobabooga 3f9eb3aad1 Fix the preset dropdown when the default preset file is not present 2025-06-10 14:22:37 -07:00
oobabooga 18bd78f1f0 Make the llama.cpp prompt processing messages shorter 2025-06-10 14:03:25 -07:00
oobabooga 889153952f Lint 2025-06-10 09:02:52 -07:00
oobabooga c92eba0b0a Reorganize the Parameters tab (left: preset parameters, right: everything else) 2025-06-09 22:05:20 -07:00
oobabooga efd9c9707b Fix random seeds being saved to settings.yaml 2025-06-09 20:57:25 -07:00
oobabooga df98f4b331 Don't save active extensions through the UI (prevents command-line activated extensions from becoming permanently active due to autosave) 2025-06-09 20:28:16 -07:00
Mykeehu ec73121020 Fix continue/start reply with when using translation extensions (#6944) (co-authored-by: oobabooga) 2025-06-10 00:17:05 -03:00
Miriam 1443612e72 check .attention.head_count if .attention.head_count_kv doesn't exist (#7048) 2025-06-09 23:22:01 -03:00
oobabooga 263b5d5557 Use html2text to extract the text of web searches without losing formatting 2025-06-09 17:55:26 -07:00
oobabooga f5a5d0c0cb Add the URL of web attachments to the prompt 2025-06-09 17:32:25 -07:00
oobabooga eefbf96f6a Don't save truncation_length to user_data/settings.yaml 2025-06-08 22:14:56 -07:00
oobabooga f9a007c6a8 Properly filter out failed web search downloads from attachments 2025-06-08 19:25:23 -07:00
oobabooga f3388c2ab4 Fix selecting next chat when deleting with active search 2025-06-08 18:53:04 -07:00
oobabooga 4a369e070a Add buttons for easily deleting past chats 2025-06-08 18:47:48 -07:00
oobabooga 0b8d2d65a2 Minor style improvement 2025-06-08 18:11:27 -07:00
oobabooga f81b1540ca Small style improvements 2025-06-08 15:19:25 -07:00
oobabooga eb0ab9db1d Fix light/dark theme persistence across page reloads 2025-06-08 15:04:05 -07:00
oobabooga 1f1435997a Don't show the new 'Restore character' button in the Chat tab 2025-06-08 09:37:54 -07:00
oobabooga 84f66484c5 Make it optional to paste long pasted content to an attachment 2025-06-08 09:31:38 -07:00
oobabooga 42e7864d62 Reorganize the Session tab 2025-06-08 09:21:23 -07:00
oobabooga af6bb7513a Add back the "Save UI defaults" button (it's useful for saving extensions settings) 2025-06-08 09:09:36 -07:00
oobabooga 1bdf11b511 Use the Qwen3 - Thinking preset by default 2025-06-07 22:23:09 -07:00
oobabooga fe955cac1f Small UI changes 2025-06-07 22:15:19 -07:00
oobabooga caf9fca5f3 Avoid some code repetition 2025-06-07 22:11:35 -07:00
oobabooga 3650a6fd1f Small UI changes 2025-06-07 22:02:34 -07:00
oobabooga 6436bf1920 More UI persistence: presets and characters (#7051) 2025-06-08 01:58:02 -03:00
oobabooga 35ed55d18f UI persistence (#7050) 2025-06-07 22:46:52 -03:00
oobabooga 2d263f227d Fix the chat input reappearing when the page is reloaded 2025-06-06 22:38:20 -07:00
oobabooga 379dd01ca7 Filter out failed web search downloads from attachments 2025-06-06 22:32:07 -07:00
oobabooga f8f23b5489 Simplify the llama.cpp stderr filter code 2025-06-06 22:25:13 -07:00
oobabooga 45f823ddf6 Print \n after the llama.cpp progress bar reaches 1.0 2025-06-06 22:23:34 -07:00
oobabooga d47c8eb956 Remove quotes from LLM-generated websearch query (closes #7045; fix by @Quiet-Joker) 2025-06-05 06:57:59 -07:00
oobabooga 93b3752cdf Revert "Remove the "Is typing..." yield by default" (reverts commit b30a73016d) 2025-06-04 09:40:30 -07:00
oobabooga b30a73016d Remove the "Is typing..." yield by default 2025-06-02 07:49:22 -07:00
oobabooga bb409c926e Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
oobabooga 2db7745cbd Show llama.cpp prompt processing on one line instead of many lines 2025-06-01 22:12:24 -07:00
oobabooga ad6d0218ae Fix after 219f0a7731 2025-06-01 19:27:14 -07:00
oobabooga 92adceb7b5 UI: Fix the model downloader progress bar 2025-06-01 19:22:21 -07:00
oobabooga 9e80193008 Add the model name to each message's metadata 2025-05-31 22:41:35 -07:00
oobabooga 98a7508a99 UI: Move 'Show controls' inside the hover menu 2025-05-31 22:22:13 -07:00
oobabooga f8d220c1e6 Add a tooltip to the web search checkbox 2025-05-31 21:22:36 -07:00
oobabooga 1d88456659 Add support for .docx attachments 2025-05-31 20:15:07 -07:00
oobabooga 219f0a7731 Fix exllamav3_hf models failing to unload (closes #7031) 2025-05-30 12:05:49 -07:00
oobabooga 298d4719c6 Multiple small style improvements 2025-05-30 11:32:24 -07:00
oobabooga 7c29879e79 Fix 'Start reply with' (closes #7033) 2025-05-30 11:17:47 -07:00
oobabooga acbcc12e7b Clean up 2025-05-29 14:11:21 -07:00
oobabooga dce02732a4 Fix timestamp issues when editing/swiping messages 2025-05-29 14:08:48 -07:00
oobabooga f59998d268 Don't limit the number of prompt characters printed with --verbose 2025-05-29 13:08:48 -07:00
oobabooga 724147ffab Better detect when no model is available 2025-05-29 10:49:29 -07:00
oobabooga faa5c82c64 Fix message version count not updating during regeneration streaming 2025-05-29 09:16:26 -07:00
Underscore 63234b9b6f UI: Fix impersonate (#7025) 2025-05-29 08:22:03 -03:00
oobabooga 75d6cfd14d Download fetched web search results in parallel 2025-05-28 20:36:24 -07:00
oobabooga 7080a02252 Reduce the timeout for downloading web pages 2025-05-28 18:15:21 -07:00
oobabooga 3eb0b77427 Improve the web search query generation 2025-05-28 18:14:51 -07:00
oobabooga 27641ac182 UI: Make message editing work the same for user and assistant messages 2025-05-28 17:23:46 -07:00
oobabooga 6c3590ba9a Make web search attachments clickable 2025-05-28 05:28:15 -07:00
oobabooga 077bbc6b10 Add web search support (#7023) 2025-05-28 04:27:28 -03:00
oobabooga 1b0e2d8750 UI: Add a token counter to the chat tab (counts input + history) 2025-05-27 22:36:24 -07:00
oobabooga f6ca0ee072 Fix regenerate sometimes not creating a new message version 2025-05-27 21:20:51 -07:00
Underscore 5028480eba UI: Add footer buttons for editing messages (#7019) (co-authored-by: oobabooga) 2025-05-28 00:55:27 -03:00
Underscore 355b5f6c8b UI: Add message version navigation (#6947) (co-authored-by: oobabooga) 2025-05-27 22:54:18 -03:00
Underscore 8531100109 Fix textbox text usage in methods (#7009) 2025-05-26 22:40:09 -03:00
oobabooga bae1aa34aa Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) 2025-05-25 17:19:26 -07:00
oobabooga 8620d6ffe7 Make it possible to upload multiple text files/pdfs at once 2025-05-20 21:34:07 -07:00
oobabooga cc8a4fdcb1 Minor improvement to attachments prompt format 2025-05-20 21:31:18 -07:00
oobabooga 409a48d6bd Add attachments support (text files, PDF documents) (#7005) 2025-05-21 00:36:20 -03:00
oobabooga 5d00574a56 Minor UI fixes 2025-05-20 16:20:49 -07:00
oobabooga 616ea6966d Store previous reply versions on regenerate (#7004) 2025-05-20 12:51:28 -03:00
Daniel Dengler c25a381540 Add a "Branch here" footer button to chat messages (#6967) 2025-05-20 11:07:40 -03:00
oobabooga 8e10f9894a Add a metadata field to the chat history & add date/time to chat messages (#7003) 2025-05-20 10:48:46 -03:00
oobabooga 9ec46b8c44 Remove the HQQ loader (HQQ models can be loaded through Transformers) 2025-05-19 09:23:24 -07:00
oobabooga 126b3a768f Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now; reverts commit 8137eb8ef4) 2025-05-18 12:38:36 -07:00
oobabooga 2faaf18f1f Add back the "Common values" to the ctx-size slider 2025-05-18 09:06:20 -07:00
oobabooga f1ec6c8662 Minor label changes 2025-05-18 09:04:51 -07:00
oobabooga 61276f6a37 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-05-17 07:22:51 -07:00
oobabooga 4800d1d522 More robust VRAM calculation 2025-05-17 07:20:38 -07:00
mamei16 052c82b664 Fix KeyError: 'gpu_layers' when loading existing model settings (#6991) 2025-05-17 11:19:13 -03:00
oobabooga 0f77ff9670 UI: Use total VRAM (not free) for layers calculation when a model is loaded 2025-05-16 19:19:22 -07:00
oobabooga c0e295dd1d Remove the 'None' option from the model menu 2025-05-16 17:53:20 -07:00
oobabooga e3bba510d4 UI: Only add a blank space to streaming messages in instruct mode 2025-05-16 17:49:17 -07:00
oobabooga 71fa046c17 Minor changes after 1c549d176b 2025-05-16 17:38:08 -07:00
oobabooga d99fb0a22a Add backward compatibility with saved n_gpu_layers values 2025-05-16 17:29:18 -07:00
oobabooga 1c549d176b Fix GPU layers slider: honor saved settings and show true maximum 2025-05-16 17:26:13 -07:00
oobabooga e4d3f4449d API: Fix a regression 2025-05-16 13:02:27 -07:00
oobabooga adb975a380 Prevent fractional gpu-layers in the UI 2025-05-16 12:52:43 -07:00
oobabooga fc483650b5 Set the maximum gpu_layers value automatically when the model is loaded with --model 2025-05-16 11:58:17 -07:00
oobabooga 38c50087fe Prevent a crash on systems without an NVIDIA GPU 2025-05-16 11:55:30 -07:00
oobabooga 253e85a519 Only compute VRAM/GPU layers for llama.cpp models 2025-05-16 10:02:30 -07:00
oobabooga 9ec9b1bf83 Auto-adjust GPU layers after model unload to utilize freed VRAM 2025-05-16 09:56:23 -07:00
oobabooga ee7b3028ac Always cache GGUF metadata calls 2025-05-16 09:12:36 -07:00
oobabooga 4925c307cf Auto-adjust GPU layers on context size and cache type changes + many fixes 2025-05-16 09:07:38 -07:00
oobabooga 93e1850a2c Only show the VRAM info for llama.cpp 2025-05-15 21:42:15 -07:00
oobabooga cbf4daf1c8 Hide the LoRA menu in portable mode 2025-05-15 21:21:54 -07:00
oobabooga fd61297933 Lint 2025-05-15 21:19:19 -07:00
oobabooga 5534d01da0 Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) 2025-05-16 00:07:37 -03:00
oobabooga c4a715fd1e UI: Move the LoRA menu under "Other options" 2025-05-13 20:14:09 -07:00
oobabooga 035cd3e2a9 UI: Hide the extension install menu in portable builds 2025-05-13 20:09:22 -07:00
oobabooga 2826c60044 Use logger for "Output generated in ..." messages 2025-05-13 14:45:46 -07:00
oobabooga 3fa1a899ae UI: Fix gpu-layers being ignored (closes #6973) 2025-05-13 12:07:59 -07:00
oobabooga 62c774bf24 Revert "New attempt" (reverts commit e7ac06c169) 2025-05-13 06:42:25 -07:00
oobabooga e7ac06c169 New attempt 2025-05-10 19:20:04 -07:00
oobabooga 47d4758509 Fix #6970 2025-05-10 17:46:00 -07:00
oobabooga 4920981b14 UI: Remove the typing cursor 2025-05-09 20:35:38 -07:00
oobabooga 8984e95c67 UI: More friendly message when no model is loaded 2025-05-09 07:21:05 -07:00
oobabooga 512bc2d0e0 UI: Update some labels 2025-05-08 23:43:55 -07:00
oobabooga f8ef6e09af UI: Make ctx-size a slider 2025-05-08 18:19:04 -07:00
oobabooga 9ea2a69210 llama.cpp: Add --no-webui to the llama-server command 2025-05-08 10:41:25 -07:00
oobabooga 1c7209a725 Save the chat history periodically during streaming 2025-05-08 09:46:43 -07:00
Jonas fa960496d5 Tools support for OpenAI compatible API (#6827) 2025-05-08 12:30:27 -03:00
oobabooga a2ab42d390 UI: Remove the exllamav2 info message 2025-05-08 08:00:38 -07:00
oobabooga 348d4860c2 UI: Create a "Main options" section in the Model tab 2025-05-08 07:58:59 -07:00
oobabooga d2bae7694c UI: Change the ctx-size description 2025-05-08 07:26:23 -07:00
oobabooga b28fa86db6 Default --gpu-layers to 256 2025-05-06 17:51:55 -07:00
Downtown-Case 5ef564a22e Fix model config loading in shared.py for Python 3.13 (#6961) 2025-05-06 17:03:33 -03:00
oobabooga c4f36db0d8 llama.cpp: remove tfs (it doesn't get used) 2025-05-06 08:41:13 -07:00
oobabooga 05115e42ee Set top_n_sigma before temperature by default 2025-05-06 08:27:21 -07:00
oobabooga 1927afe894 Fix top_n_sigma not showing for llama.cpp 2025-05-06 08:18:49 -07:00
oobabooga d1c0154d66 llama.cpp: Add top_n_sigma, fix typical_p in sampler priority 2025-05-06 06:38:39 -07:00