Commit graph

1988 commits

Author SHA1 Message Date
mamei16 8137eb8ef4
Dynamic Chat Message UI Update Speed (#6952) 2025-05-05 18:05:23 -03:00
oobabooga 475e012ee8 UI: Improve the light theme colors 2025-05-05 06:16:29 -07:00
oobabooga b817bb33fd Minor fix after df7bb0db1f 2025-05-05 05:00:20 -07:00
oobabooga f3da45f65d ExLlamaV3_HF: Change max_chunk_size to 256 2025-05-04 20:37:15 -07:00
oobabooga df7bb0db1f Rename --n-gpu-layers to --gpu-layers 2025-05-04 20:03:55 -07:00
oobabooga d0211afb3c Save the chat history right after sending a message 2025-05-04 18:52:01 -07:00
oobabooga 690d693913 UI: Add padding to only show the last message/reply after sending a message
To avoid scrolling
2025-05-04 18:13:29 -07:00
oobabooga 7853fb1c8d
Optimize the Chat tab (#6948) 2025-05-04 18:58:37 -03:00
oobabooga b7a5c7db8d llama.cpp: Handle short arguments in --extra-flags 2025-05-04 07:14:42 -07:00
oobabooga 4c2e3b168b llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) 2025-05-03 06:51:20 -07:00
oobabooga ea60f14674 UI: Show the list of files if the user tries to download a GGUF repository 2025-05-03 06:06:50 -07:00
oobabooga b71ef50e9d UI: Add a min-height to prevent constant scrolling during chat streaming 2025-05-02 23:45:58 -07:00
oobabooga d08acb4af9 UI: Rename enable_thinking -> Enable thinking 2025-05-02 20:50:52 -07:00
oobabooga 4cea720da8 UI: Remove the "Autoload the model" feature 2025-05-02 16:38:28 -07:00
oobabooga 905afced1c Add a --portable flag to hide things in portable mode 2025-05-02 16:34:29 -07:00
oobabooga 3f26b0408b Fix after 9e3867dc83 2025-05-02 16:17:22 -07:00
oobabooga 9e3867dc83 llama.cpp: Fix manual random seeds 2025-05-02 09:36:15 -07:00
oobabooga b950a0c6db Lint 2025-04-30 20:02:10 -07:00
oobabooga 307d13b540 UI: Minor label change 2025-04-30 18:58:14 -07:00
oobabooga 55283bb8f1 Fix CFG with ExLlamaV2_HF (closes #6937) 2025-04-30 18:43:45 -07:00
oobabooga a6c3ec2299 llama.cpp: Explicitly send cache_prompt = True 2025-04-30 15:24:07 -07:00
oobabooga 195a45c6e1 UI: Make thinking blocks closed by default 2025-04-30 15:12:46 -07:00
oobabooga cd5c32dc19 UI: Fix max_updates_second not working 2025-04-30 14:54:05 -07:00
oobabooga b46ca01340 UI: Set max_updates_second to 12 by default
When the tokens/second are at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck.
2025-04-30 14:53:15 -07:00
oobabooga 771d3d8ed6 Fix getting the llama.cpp logprobs for Qwen3-30B-A3B 2025-04-30 06:48:32 -07:00
oobabooga 1dd4aedbe1 Fix the streaming_llm UI checkbox not being interactive 2025-04-29 05:28:46 -07:00
oobabooga d10bded7f8 UI: Add an enable_thinking option to enable/disable Qwen3 thinking 2025-04-28 22:37:01 -07:00
oobabooga 1ee0acc852 llama.cpp: Make --verbose print the llama-server command 2025-04-28 15:56:25 -07:00
oobabooga 15a29e99f8 Lint 2025-04-27 21:41:34 -07:00
oobabooga be13f5199b UI: Add an info message about how to use Speculative Decoding 2025-04-27 21:40:38 -07:00
oobabooga c6c2855c80 llama.cpp: Remove the timeout while loading models (closes #6907) 2025-04-27 21:22:21 -07:00
oobabooga ee0592473c Fix ExLlamaV3_HF leaking memory (attempt) 2025-04-27 21:04:02 -07:00
oobabooga 70952553c7 Lint 2025-04-26 19:29:08 -07:00
oobabooga 7b80acd524 Fix parsing --extra-flags 2025-04-26 18:40:03 -07:00
oobabooga 943451284f Fix the Notebook tab not loading its default prompt 2025-04-26 18:25:06 -07:00
oobabooga 511eb6aa94 Fix saving settings to settings.yaml 2025-04-26 18:20:00 -07:00
oobabooga 8b83e6f843 Prevent Gradio from saying 'Thank you for being a Gradio user!' 2025-04-26 18:14:57 -07:00
oobabooga 4a32e1f80c UI: show draft_max for ExLlamaV2 2025-04-26 18:01:44 -07:00
oobabooga 0fe3b033d0 Fix parsing of --n_ctx and --max_seq_len (2nd attempt) 2025-04-26 17:52:21 -07:00
oobabooga c4afc0421d Fix parsing of --n_ctx and --max_seq_len 2025-04-26 17:43:53 -07:00
oobabooga 234aba1c50 llama.cpp: Simplify the prompt processing progress indicator
The progress bar was unreliable
2025-04-26 17:33:47 -07:00
oobabooga 4ff91b6588 Better default settings for Speculative Decoding 2025-04-26 17:24:40 -07:00
oobabooga bc55feaf3e Improve host header validation in local mode 2025-04-26 15:42:17 -07:00
oobabooga 3a207e7a57 Improve the --help formatting a bit 2025-04-26 07:31:04 -07:00
oobabooga 6acb0e1bee Change a UI description 2025-04-26 05:13:08 -07:00
oobabooga cbd4d967cc Update a --help message 2025-04-26 05:09:52 -07:00
oobabooga 763a7011c0 Remove an ancient/obsolete migration check 2025-04-26 04:59:05 -07:00
oobabooga d9de14d1f7
Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
oobabooga d4017fbb6d
ExLlamaV3: Add kv cache quantization (#6903) 2025-04-25 21:32:00 -03:00
oobabooga d4b1e31c49 Use --ctx-size to specify the context size for all loaders
Old flags are still recognized as alternatives.
2025-04-25 16:59:03 -07:00
oobabooga faababc4ea llama.cpp: Add a prompt processing progress bar 2025-04-25 16:42:30 -07:00
oobabooga 877cf44c08 llama.cpp: Add StreamingLLM (--streaming-llm) 2025-04-25 16:21:41 -07:00
oobabooga d35818f4e1
UI: Add a collapsible thinking block to messages with <think> steps (#6902) 2025-04-25 18:02:02 -03:00
oobabooga 98f4c694b9 llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server 2025-04-25 07:32:51 -07:00
oobabooga 5861013e68 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-04-24 20:36:20 -07:00
oobabooga a90df27ff5 UI: Add a greeting when the chat history is empty 2025-04-24 20:33:40 -07:00
oobabooga ae1fe87365
ExLlamaV2: Add speculative decoding (#6899) 2025-04-25 00:11:04 -03:00
Matthew Jenkins 8f2493cc60
Prevent llamacpp defaults from locking up consumer hardware (#6870) 2025-04-24 23:38:57 -03:00
oobabooga 93fd4ad25d llama.cpp: Document the --device-draft syntax 2025-04-24 09:20:11 -07:00
oobabooga f1b64df8dd EXL2: add another torch.cuda.synchronize() call to prevent errors 2025-04-24 09:03:49 -07:00
oobabooga c71a2af5ab Handle CMD_FLAGS.txt in the main code (closes #6896) 2025-04-24 08:21:06 -07:00
oobabooga bfbde73409 Make 'instruct' the default chat mode 2025-04-24 07:08:49 -07:00
oobabooga e99c20bcb0
llama.cpp: Add speculative decoding (#6891) 2025-04-23 20:10:16 -03:00
oobabooga 9424ba17c8 UI: show only part 00001 of multipart GGUF models in the model menu 2025-04-22 19:56:42 -07:00
oobabooga 25cf3600aa Lint 2025-04-22 08:04:02 -07:00
oobabooga 39cbb5fee0 Lint 2025-04-22 08:03:25 -07:00
oobabooga 008c6dd682 Lint 2025-04-22 08:02:37 -07:00
oobabooga 78aeabca89 Fix the transformers loader 2025-04-21 18:33:14 -07:00
oobabooga 8320190184 Fix the exllamav2_HF and exllamav3_HF loaders 2025-04-21 18:32:23 -07:00
oobabooga 15989c2ed8 Make llama.cpp the default loader 2025-04-21 16:36:35 -07:00
oobabooga 86c3ed3218 Small change to the unload_model() function 2025-04-20 20:00:56 -07:00
oobabooga fe8e80e04a Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-04-20 19:09:27 -07:00
oobabooga ff1c00bdd9 llama.cpp: set the random seed manually 2025-04-20 19:08:44 -07:00
Matthew Jenkins d3e7c655e5
Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) 2025-04-20 23:06:24 -03:00
oobabooga e243424ba1 Fix an import 2025-04-20 17:51:28 -07:00
oobabooga 8cfd7f976b Revert "Remove the old --model-menu flag"
This reverts commit 109de34e3b.
2025-04-20 13:35:42 -07:00
oobabooga b3bf7a885d Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 2025-04-20 11:32:48 -07:00
oobabooga ae02ffc605
Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
oobabooga 6ba0164c70 Lint 2025-04-19 17:45:21 -07:00
oobabooga 5ab069786b llama.cpp: add back the two encode calls (they are harmless now) 2025-04-19 17:38:36 -07:00
oobabooga b9da5c7e3a Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows 2025-04-19 17:36:04 -07:00
oobabooga 9c9df2063f llama.cpp: fix unicode decoding (closes #6856) 2025-04-19 16:38:15 -07:00
oobabooga ba976d1390 llama.cpp: avoid two 'encode' calls 2025-04-19 16:35:01 -07:00
oobabooga ed42154c78 Revert "llama.cpp: close the connection immediately on 'Stop'"
This reverts commit 5fdebc554b.
2025-04-19 05:32:36 -07:00
oobabooga 5fdebc554b llama.cpp: close the connection immediately on 'Stop' 2025-04-19 04:59:24 -07:00
oobabooga 6589ebeca8 Revert "llama.cpp: new optimization attempt"
This reverts commit e2e73ed22f.
2025-04-18 21:16:21 -07:00
oobabooga e2e73ed22f llama.cpp: new optimization attempt 2025-04-18 21:05:08 -07:00
oobabooga e2e90af6cd llama.cpp: don't include --rope-freq-base in the launch command if null 2025-04-18 20:51:18 -07:00
oobabooga 9f07a1f5d7 llama.cpp: new attempt at optimizing the llama-server connection 2025-04-18 19:30:53 -07:00
oobabooga f727b4a2cc llama.cpp: close the connection properly when generation is cancelled 2025-04-18 19:01:39 -07:00
oobabooga b3342b8dd8 llama.cpp: optimize the llama-server connection 2025-04-18 18:46:36 -07:00
oobabooga 2002590536 Revert "Attempt at making the llama-server streaming more efficient."
This reverts commit 5ad080ff25.
2025-04-18 18:13:54 -07:00
oobabooga 71ae05e0a4 llama.cpp: Fix the sampler priority handling 2025-04-18 18:06:36 -07:00
oobabooga 5ad080ff25 Attempt at making the llama-server streaming more efficient. 2025-04-18 18:04:49 -07:00
oobabooga 4fabd729c9 Fix the API without streaming or without 'sampler_priority' (closes #6851) 2025-04-18 17:25:22 -07:00
oobabooga 5135523429 Fix the new llama.cpp loader failing to unload models 2025-04-18 17:10:26 -07:00
oobabooga caa6afc88b Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True 2025-04-18 09:57:57 -07:00
oobabooga d00d713ace Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader 2025-04-18 08:14:15 -07:00
oobabooga c1cc65e82e Lint 2025-04-18 08:06:51 -07:00
oobabooga d68f0fbdf7 Remove obsolete references to llamacpp_HF 2025-04-18 07:46:04 -07:00
oobabooga a0abf93425 Connect --rope-freq-base to the new llama.cpp loader 2025-04-18 06:53:51 -07:00
oobabooga ef9910c767 Fix a bug after c6901aba9f 2025-04-18 06:51:28 -07:00
oobabooga 1c4a2c9a71 Make exllamav3 safer as well 2025-04-18 06:17:58 -07:00
oobabooga c6901aba9f Remove deprecation warning code 2025-04-18 06:05:47 -07:00
oobabooga 8144e1031e Remove deprecated command-line flags 2025-04-18 06:02:28 -07:00
oobabooga ae54d8faaa
New llama.cpp loader (#6846) 2025-04-18 09:59:37 -03:00
oobabooga 5c2f8d828e Fix exllamav2 generating eos randomly after previous fix 2025-04-18 05:42:38 -07:00
oobabooga 2fc58ad935 Consider files with .pt extension in the new model menu function 2025-04-17 23:10:43 -07:00
Googolplexed d78abe480b
Allow for model subfolder organization for GGUF files (#6686)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2025-04-18 02:53:59 -03:00
oobabooga ce9e2d94b1 Revert "Attempt at solving the ExLlamaV2 issue"
This reverts commit c9b3c9dfbf.
2025-04-17 22:03:21 -07:00
oobabooga 5dfab7d363 New attempt at solving the exl2 issue 2025-04-17 22:03:11 -07:00
oobabooga c9b3c9dfbf Attempt at solving the ExLlamaV2 issue 2025-04-17 21:45:15 -07:00
oobabooga 2c2d453c8c Revert "Use ExLlamaV2 (instead of the HF one) for EXL2 models for now"
This reverts commit 0ef1b8f8b4.
2025-04-17 21:31:32 -07:00
oobabooga 0ef1b8f8b4 Use ExLlamaV2 (instead of the HF one) for EXL2 models for now
It doesn't seem to have the "OverflowError" bug
2025-04-17 05:47:40 -07:00
oobabooga 682c78ea42 Add back detection of GPTQ models (closes #6841) 2025-04-11 21:00:42 -07:00
oobabooga 4ed0da74a8 Remove the obsolete 'multimodal' extension 2025-04-09 20:09:48 -07:00
oobabooga 598568b1ed Revert "UI: remove the streaming cursor"
This reverts commit 6ea0206207.
2025-04-09 16:03:14 -07:00
oobabooga 297a406e05 UI: smoother chat streaming
This removes the throttling associated with gr.Textbox that made words appear in chunks rather than one at a time
2025-04-09 16:02:37 -07:00
oobabooga 6ea0206207 UI: remove the streaming cursor 2025-04-09 14:59:34 -07:00
oobabooga 8b8d39ec4e
Add ExLlamaV3 support (#6832) 2025-04-09 00:07:08 -03:00
oobabooga bf48ec8c44 Remove an unnecessary UI message 2025-04-07 17:43:41 -07:00
oobabooga a5855c345c
Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835) 2025-04-07 21:42:33 -03:00
oobabooga 109de34e3b Remove the old --model-menu flag 2025-03-31 09:24:03 -07:00
oobabooga 758c3f15a5 Lint 2025-03-14 20:04:43 -07:00
oobabooga 5bcd2d7ad0
Add the top N-sigma sampler (#6796) 2025-03-14 16:45:11 -03:00
oobabooga 26317a4c7e Fix jinja2 error while loading c4ai-command-a-03-2025 2025-03-14 10:59:05 -07:00
Kelvie Wong 16fa9215c4
Fix OpenAI API with new param (show_after), closes #6747 (#6749)
Co-authored-by: oobabooga <oobabooga4@gmail.com>
2025-02-18 12:01:30 -03:00
oobabooga dba17c40fc Make transformers 4.49 functional 2025-02-17 17:31:11 -08:00
SamAcctX f28f39792d
update deprecated deepspeed import for transformers 4.46+ (#6725) 2025-02-02 20:41:36 -03:00
oobabooga c6f2c2fd7e UI: style improvements 2025-02-02 15:34:03 -08:00
oobabooga 0360f54ae8 UI: add a "Show after" parameter (to use with DeepSeek </think>) 2025-02-02 15:30:09 -08:00
oobabooga f01cc079b9 Lint 2025-01-29 14:00:59 -08:00
oobabooga 75ff3f3815 UI: Mention common context length values 2025-01-25 08:22:23 -08:00
FP HAM 71a551a622
Add strftime_now to JINJA to satisfy LLAMA 3.1 and 3.2 (and granite) (#6692) 2025-01-24 11:37:20 -03:00
oobabooga 0485ff20e8 Workaround for convert_to_markdown bug 2025-01-23 06:21:40 -08:00
oobabooga 39799adc47 Add a helpful error message when llama.cpp fails to load the model 2025-01-21 12:49:12 -08:00
oobabooga 5e99dded4e UI: add "Continue" and "Remove" buttons below the last chat message 2025-01-21 09:05:44 -08:00
oobabooga 0258a6f877 Fix the Google Colab notebook 2025-01-16 05:21:18 -08:00
oobabooga 1ef748fb20 Lint 2025-01-14 16:44:15 -08:00
oobabooga f843cb475b UI: update a help message 2025-01-14 08:12:51 -08:00
oobabooga c832953ff7 UI: Activate auto_max_new_tokens by default 2025-01-14 05:59:55 -08:00
Underscore 53b838d6c5
HTML: Fix quote pair RegEx matching for all quote types (#6661) 2025-01-13 18:01:50 -03:00
oobabooga c85e5e58d0 UI: move the new morphdom code to a .js file 2025-01-13 06:20:42 -08:00
oobabooga facb4155d4 Fix morphdom leaving ghost elements behind 2025-01-11 20:57:28 -08:00
oobabooga a0492ce325
Optimize syntax highlighting during chat streaming (#6655) 2025-01-11 21:14:10 -03:00
mamei16 f1797f4323
Unescape backslashes in html_output (#6648) 2025-01-11 18:39:44 -03:00
oobabooga 1b9121e5b8 Add a "refresh" button below the last message, add a missing file 2025-01-11 12:42:25 -08:00
oobabooga a5d64b586d
Add a "copy" button below each message (#6654) 2025-01-11 16:59:21 -03:00
oobabooga 3a722a36c8
Use morphdom to make chat streaming 1902381098231% faster (#6653) 2025-01-11 12:55:19 -03:00
oobabooga d2f6c0f65f Update README 2025-01-10 13:25:40 -08:00
oobabooga c393f7650d Update settings-template.yaml, organize modules/shared.py 2025-01-10 13:22:18 -08:00
oobabooga 83c426e96b
Organize internals (#6646) 2025-01-10 18:04:32 -03:00
oobabooga 7fe46764fb Improve the --help message about --tensorcores as well 2025-01-10 07:07:41 -08:00
oobabooga da6d868f58 Remove old deprecated flags (~6 months or more) 2025-01-09 16:11:46 -08:00
oobabooga f3c0f964a2 Lint 2025-01-09 13:18:23 -08:00
oobabooga 3020f2e5ec UI: improve the info message about --tensorcores 2025-01-09 12:44:03 -08:00
oobabooga c08d87b78d Make the huggingface loader more readable 2025-01-09 12:23:38 -08:00
BPplays 619265b32c
add ipv6 support to the API (#6559) 2025-01-09 10:23:44 -03:00
oobabooga 5c89068168 UI: add an info message for the new Static KV cache option 2025-01-08 17:36:30 -08:00
nclok1405 b9e2ded6d4
Added UnicodeDecodeError workaround for modules/llamacpp_model.py (#6040)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2025-01-08 21:17:31 -03:00
oobabooga 91a8a87887 Remove obsolete code 2025-01-08 15:07:21 -08:00
oobabooga 7157257c3f
Remove the AutoGPTQ loader (#6641) 2025-01-08 19:28:56 -03:00
oobabooga c0f600c887 Add a --torch-compile flag for transformers 2025-01-05 05:47:00 -08:00
oobabooga 11af199aff Add a "Static KV cache" option for transformers 2025-01-04 17:52:57 -08:00
oobabooga 3967520e71 Connect XTC, DRY, smoothing_factor, and dynatemp to ExLlamaV2 loader (non-HF) 2025-01-04 16:25:06 -08:00
oobabooga 049297fa66 UI: reduce the size of CSS sent to the UI during streaming 2025-01-04 14:09:36 -08:00
oobabooga 0e673a7a42 UI: reduce the size of HTML sent to the UI during streaming 2025-01-04 11:40:24 -08:00
mamei16 9f24885bd2
Sane handling of markdown lists (#6626) 2025-01-04 15:41:31 -03:00
oobabooga 4b3e1b3757 UI: add a "Search chats" input field 2025-01-02 18:46:40 -08:00
oobabooga b8fc9010fa UI: fix orjson.JSONDecodeError error on page reload 2025-01-02 16:57:04 -08:00
oobabooga 75f1b5ccde UI: add a "Branch chat" button 2025-01-02 16:24:18 -08:00
Petr Korolev 13c033c745
Fix CUDA error on MPS backend during API request (#6572)
Co-authored-by: oobabooga <oobabooga4@gmail.com>
2025-01-02 00:06:11 -03:00
oobabooga 725639118a UI: Use a tab length of 2 for lists (rather than 4) 2025-01-01 13:53:50 -08:00
oobabooga 7b88724711
Make responses start faster by removing unnecessary cleanup calls (#6625) 2025-01-01 18:33:38 -03:00
oobabooga 64853f8509 Reapply a necessary change that I removed from #6599 (thanks @mamei16!) 2024-12-31 14:43:22 -08:00
mamei16 e953af85cd
Fix newlines in the markdown renderer (#6599)
Co-authored-by: oobabooga <oobabooga4@gmail.com>
2024-12-31 01:04:02 -03:00
oobabooga 39a5c9a49c
UI organization (#6618) 2024-12-29 11:16:17 -03:00
oobabooga 0490ee620a UI: increase the threshold for a <li> to be considered long (some more) 2024-12-19 16:51:34 -08:00
oobabooga 89888bef56 UI: increase the threshold for a <li> to be considered long 2024-12-19 14:38:36 -08:00
oobabooga 2acec386fc UI: improve the streaming cursor 2024-12-19 14:08:56 -08:00
oobabooga e2fb86e5df UI: further improve the style of lists and headings 2024-12-19 13:59:24 -08:00
oobabooga c48e4622e8 UI: update a link 2024-12-18 06:28:14 -08:00
oobabooga b27f6f8915 Lint 2024-12-17 20:13:32 -08:00
oobabooga b051e2c161 UI: improve a margin for readability 2024-12-17 19:58:21 -08:00
oobabooga 60c93e0c66 UI: Set cache_type to fp16 by default 2024-12-17 19:44:20 -08:00
oobabooga ddccc0d657 UI: minor change to log messages 2024-12-17 19:39:00 -08:00
oobabooga 3030c79e8c UI: show progress while loading a model 2024-12-17 19:37:43 -08:00
Diner Burger addad3c63e
Allow more granular KV cache settings (#6561) 2024-12-17 17:43:48 -03:00
oobabooga c43ee5db11 UI: very minor color change 2024-12-17 07:59:55 -08:00
oobabooga d769618591
Improved UI (#6575) 2024-12-17 00:47:41 -03:00
oobabooga 350758f81c UI: Fix the history upload event 2024-11-19 20:34:53 -08:00
oobabooga d01293861b Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-11-18 10:15:36 -08:00
oobabooga 3d19746a5d UI: improve HTML rendering for lists with sub-lists 2024-11-18 10:14:09 -08:00
mefich 1c937dad72
Filter whitespaces in downloader fields in model tab (#6518) 2024-11-18 12:01:40 -03:00
PIRI e1061ba7e3
Make token bans work again on HF loaders (#6488) 2024-10-24 15:24:02 -03:00
oobabooga 2468cfd8bb Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-10-14 13:25:27 -07:00
oobabooga bb62e796eb Fix locally compiled llama-cpp-python failing to import 2024-10-14 13:24:13 -07:00
oobabooga c9a9f63d1b Fix llama.cpp loader not being random (thanks @reydeljuego12345) 2024-10-14 13:07:07 -07:00
PIRI 03a2e70054
Fix temperature_last when temperature not in sampler priority (#6439) 2024-10-09 11:25:14 -03:00
oobabooga 49dfa0adaf Fix the "save preset" event 2024-10-01 11:20:48 -07:00
oobabooga 93c250b9b6 Add a UI element for enable_tp 2024-10-01 11:16:15 -07:00
oobabooga cca9d6e22d Lint 2024-10-01 10:21:06 -07:00
oobabooga 4d9ce586d3 Update llama_cpp_python_hijack.py, fix llamacpp_hf 2024-09-30 14:49:21 -07:00
oobabooga bbdeed3cf4 Make sampler priority high if unspecified 2024-09-29 20:45:27 -07:00
Manuel Schmid 0f90a1b50f
Do not set value for histories in chat when --multi-user is used (#6317) 2024-09-29 01:08:55 -03:00
oobabooga c61b29b9ce Simplify the warning when flash-attn fails to import 2024-09-28 20:33:17 -07:00
oobabooga b92d7fd43e Add warnings for when AutoGPTQ, TensorRT-LLM, or HQQ are missing 2024-09-28 20:30:24 -07:00
oobabooga 7276dca933 Fix a typo 2024-09-27 20:28:17 -07:00
RandoInternetPreson 46996f6519
ExllamaV2 tensor parallelism to increase multi gpu inference speeds (#6356) 2024-09-28 00:26:03 -03:00
Philipp Emanuel Weidmann 301375834e
Exclude Top Choices (XTC): A sampler that boosts creativity, breaks writing clichés, and inhibits non-verbatim repetition (#6335) 2024-09-27 22:50:12 -03:00
oobabooga 5c918c5b2d Make it possible to sort DRY 2024-09-27 15:40:48 -07:00
oobabooga 7424f789bf
Fix the sampling monkey patch (and add more options to sampler_priority) (#6411) 2024-09-27 19:03:25 -03:00
oobabooga bba5b36d33 Don't import PEFT unless necessary 2024-09-03 19:40:53 -07:00
oobabooga c5b40eb555 llama.cpp: prevent prompt evaluation progress bar with just 1 step 2024-09-03 17:37:06 -07:00
GralchemOz 4c74c7a116
Fix UnicodeDecodeError for BPE-based Models (especially GLM-4) (#6357) 2024-09-02 23:00:59 -03:00
oobabooga fd9cb26619 UI: update the DRY parameters descriptions/order 2024-08-19 19:40:17 -07:00
oobabooga e926c03b3d Add a --tokenizer-dir command-line flag for llamacpp_HF 2024-08-06 19:41:18 -07:00
oobabooga 30b4d8c8b2 Fix Llama 3.1 template including lengthy "tools" headers 2024-07-29 11:52:17 -07:00
oobabooga 9dcff21da9 Remove unnecessary shared.previous_model_name variable 2024-07-28 18:35:11 -07:00
oobabooga 514fb2e451 Fix UI error caused by --idle-timeout 2024-07-28 18:30:06 -07:00
oobabooga 5223c009fe Minor change after previous commit 2024-07-27 23:13:34 -07:00
oobabooga 7050bb880e UI: make n_ctx/max_seq_len/truncation_length numbers rather than sliders 2024-07-27 23:11:53 -07:00
Harry 078e8c8969
Make compress_pos_emb float (#6276) 2024-07-28 03:03:19 -03:00
oobabooga ffc713f72b UI: fix multiline LaTeX equations 2024-07-27 15:36:10 -07:00
oobabooga 493f8c3242 UI: remove animation after clicking on "Stop" in the Chat tab 2024-07-27 15:22:34 -07:00
oobabooga e4d411b841 UI: fix rendering LaTeX enclosed between \[ and \] 2024-07-27 15:21:44 -07:00
oobabooga f32d26240d UI: Fix the chat "stop" event 2024-07-26 23:03:05 -07:00
oobabooga b80d5906c2 UI: fix saving characters 2024-07-25 15:09:31 -07:00
oobabooga 42e80108f5 UI: clear the markdown LRU cache when using the default/notebook tabs 2024-07-25 08:01:42 -07:00
oobabooga 7e2851e505 UI: fix "Command for chat-instruct mode" not appearing by default 2024-07-24 15:04:12 -07:00
oobabooga 947016d010 UI: make the markdown LRU cache infinite (for really long conversations) 2024-07-24 11:54:26 -07:00
oobabooga e637b702ff UI: make text between quotes colored in chat mode 2024-07-23 21:30:32 -07:00
oobabooga 1815877061 UI: fix the default character not loading correctly on startup 2024-07-23 18:48:10 -07:00
oobabooga e6181e834a Remove AutoAWQ as a standalone loader
(it works better through transformers)
2024-07-23 15:31:17 -07:00
oobabooga f18c947a86 Update the tensorcores description 2024-07-22 18:06:41 -07:00
oobabooga aa809e420e Bump llama-cpp-python to 0.2.83, add back tensorcore wheels
Also add back the progress bar patch
2024-07-22 18:05:11 -07:00
oobabooga 11bbf71aa5
Bump back llama-cpp-python (#6257) 2024-07-22 16:19:41 -03:00
oobabooga 0f53a736c1 Revert the llama-cpp-python update 2024-07-22 12:02:25 -07:00
oobabooga a687f950ba Remove the tensorcores llama.cpp wheels
They are not faster than the default wheels anymore and they use a lot of space.
2024-07-22 11:54:35 -07:00
oobabooga 017d2332ea Remove no longer necessary llama-cpp-python patch 2024-07-22 11:50:36 -07:00
oobabooga f2d802e707 UI: make Default/Notebook contents persist on page reload 2024-07-22 11:07:10 -07:00
oobabooga 8768b69a2d Lint 2024-07-21 22:08:14 -07:00
oobabooga 79e8dbe45f UI: minor optimization 2024-07-21 22:06:49 -07:00
oobabooga 7ef2414357 UI: Make the file saving dialogs more robust 2024-07-21 15:38:20 -07:00
oobabooga 423372d6e7 Organize ui_file_saving.py 2024-07-21 13:23:18 -07:00
oobabooga 17df2d7bdf UI: don't export the instruction template on "Save UI defaults to settings.yaml" 2024-07-21 10:45:01 -07:00
oobabooga d05846eae5 UI: refresh the pfp cache on handle_your_picture_change 2024-07-21 10:17:22 -07:00
oobabooga e9d4bff7d0 Update the --tensor_split description 2024-07-20 22:04:48 -07:00
oobabooga 916d1d8283 UI: improve the style of code blocks in light theme 2024-07-20 20:32:57 -07:00
oobabooga 564d8c8c0d Make alpha_value a float number 2024-07-20 20:02:54 -07:00
oobabooga 79c4d3da3d
Optimize the UI (#6251) 2024-07-21 00:01:42 -03:00
Alberto Cano a14c510afb
Customize the subpath for gradio, use with reverse proxy (#5106) 2024-07-20 19:10:39 -03:00
Vhallo a9a6d72d8c
Use gr.Number for RoPE scaling parameters (#6233)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-07-20 18:57:09 -03:00
oobabooga aa7c14a463 Use chat-instruct mode by default 2024-07-19 21:43:52 -07:00
InvectorGator 4148a9201f
Fix for MacOS users encountering model load errors (#6227)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: Invectorgator <Kudzu12gaming@outlook.com>
2024-07-13 00:04:19 -03:00
oobabooga e436d69e2b Add --no_xformers and --no_sdpa flags for ExllamaV2 2024-07-11 15:47:37 -07:00
oobabooga 512b311137 Improve the llama-cpp-python exception messages 2024-07-11 13:00:29 -07:00
oobabooga f957b17d18 UI: update an obsolete message 2024-07-10 06:01:36 -07:00
oobabooga c176244327 UI: Move cache_8bit/cache_4bit further up 2024-07-05 12:16:21 -07:00
oobabooga aa653e3b5a Prevent llama.cpp from being monkey patched more than once (closes #6201) 2024-07-05 03:34:15 -07:00
oobabooga a210e61df1 UI: Fix broken chat histories not showing (closes #6196) 2024-07-04 20:31:25 -07:00
oobabooga e79e7b90dc UI: Move the cache_8bit and cache_4bit elements up 2024-07-04 20:21:28 -07:00
oobabooga 8b44d7b12a Lint 2024-07-04 20:16:44 -07:00
oobabooga a47de06088 Force only 1 llama-cpp-python version at a time for now 2024-07-04 19:43:34 -07:00
oobabooga f243b4ca9c Make llama-cpp-python not crash immediately 2024-07-04 19:16:00 -07:00
oobabooga 907137a13d Automatically set bf16 & use_eager_attention for Gemma-2 2024-07-01 21:46:35 -07:00
GralchemOz 8a39f579d8
transformers: Add eager attention option to make Gemma-2 work properly (#6188) 2024-07-01 12:08:08 -03:00
oobabooga ed01322763 Obtain the EOT token from the jinja template (attempt)
To use as a stopping string.
2024-06-30 15:09:22 -07:00
oobabooga 4ea260098f llama.cpp: add 4-bit/8-bit kv cache options 2024-06-29 09:10:33 -07:00
oobabooga 220c1797fc UI: do not show the "save character" button in the Chat tab 2024-06-28 22:11:31 -07:00
oobabooga 8803ae1845 UI: decrease the number of lines for "Command for chat-instruct mode" 2024-06-28 21:41:30 -07:00
oobabooga 5c6b9c610d
UI: allow the character dropdown to coexist in the Chat tab and the Parameters tab (#6177) 2024-06-29 01:20:27 -03:00
oobabooga de69a62004 Revert "UI: move "Character" dropdown to the main Chat tab"
This reverts commit 83534798b2.
2024-06-28 15:38:11 -07:00
oobabooga 38d58764db UI: remove unused gr.State variable from the Default tab 2024-06-28 15:17:44 -07:00
oobabooga da196707cf UI: improve the light theme a bit 2024-06-27 21:05:38 -07:00
oobabooga 9dbcb1aeea Small fix to make transformers 4.42 functional 2024-06-27 17:05:29 -07:00
oobabooga 8ec8bc0b85 UI: handle another edge case while streaming lists 2024-06-26 18:40:43 -07:00
oobabooga 0e138e4be1 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-06-26 18:30:08 -07:00
mefich a85749dcbe
Update models_settings.py: add default alpha_value, add proper compress_pos_emb for newer GGUFs (#6111) 2024-06-26 22:17:56 -03:00
oobabooga 5fe532a5ce UI: remove DRY info text
It was visible for loaders without DRY.
2024-06-26 15:33:11 -07:00
oobabooga b1187fc9a5 UI: prevent flickering while streaming lists / bullet points 2024-06-25 19:19:45 -07:00
oobabooga 3691451d00
Add back the "Rename chat" feature (#6161) 2024-06-25 22:28:58 -03:00
oobabooga ac3f92d36a UI: store chat history in the browser 2024-06-25 14:18:07 -07:00
oobabooga 46ca15cb79 Minor bug fixes after e7e1f5901e 2024-06-25 11:49:33 -07:00
oobabooga 83534798b2 UI: move "Character" dropdown to the main Chat tab 2024-06-25 11:25:57 -07:00
oobabooga 279cba607f UI: don't show an animation when updating the "past chats" menu 2024-06-25 11:10:17 -07:00
oobabooga 3290edfad9 Bug fix: force chat history to be loaded on launch 2024-06-25 11:06:05 -07:00
oobabooga e7e1f5901e
Prompts in the "past chats" menu (#6160) 2024-06-25 15:01:43 -03:00
oobabooga a43c210617
Improved past chats menu (#6158) 2024-06-25 00:07:22 -03:00
oobabooga 96ba53d916 Handle another fix after 57119c1b30 2024-06-24 15:51:12 -07:00
oobabooga 577a8cd3ee
Add TensorRT-LLM support (#5715) 2024-06-24 02:30:03 -03:00
oobabooga 536f8d58d4 Do not expose alpha_value to llama.cpp & rope_freq_base to transformers
To avoid confusion
2024-06-23 22:09:24 -07:00
oobabooga b48ab482f8 Remove obsolete "gptq_for_llama_info" message 2024-06-23 22:05:19 -07:00
oobabooga 5e8dc56f8a Fix after previous commit 2024-06-23 21:58:28 -07:00
Louis Del Valle 57119c1b30
Update block_requests.py to resolve unexpected type error (500 error) (#5976) 2024-06-24 01:56:51 -03:00
CharlesCNorton 5993904acf
Fix several typos in the codebase (#6151) 2024-06-22 21:40:25 -03:00
GodEmperor785 2c5a9eb597
Change limits of RoPE scaling sliders in UI (#6142) 2024-06-19 21:42:17 -03:00
Guanghua Lu 229d89ccfb
Make logs more readable, no more \u7f16\u7801 (#6127) 2024-06-15 23:00:13 -03:00
Forkoz 1576227f16
Fix GGUFs with no BOS token present, mainly qwen2 models. (#6119)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-06-14 13:51:01 -03:00
oobabooga 10601850d9 Fix after previous commit 2024-06-13 19:54:12 -07:00
oobabooga 0f3a423de1 Alternative solution to "get next logits" deadlock (#6106) 2024-06-13 19:34:16 -07:00
oobabooga 386500aa37 Avoid unnecessary UI -> backend calls, to make it faster 2024-06-12 20:52:42 -07:00
Forkoz 1d79aa67cf
Fix flash-attn UI parameter to actually store true. (#6076) 2024-06-13 00:34:54 -03:00
Belladore 3abafee696
DRY sampler improvements (#6053) 2024-06-12 23:39:11 -03:00
oobabooga a36fa73071 Lint 2024-06-12 19:00:21 -07:00
oobabooga 2d196ed2fe Remove obsolete pre_layer parameter 2024-06-12 18:56:44 -07:00
Belladore 46174a2d33
Fix error when bos_token_id is None. (#6061) 2024-06-12 22:52:27 -03:00
Belladore a363cdfca1
Fix missing bos token for some models (including Llama-3) (#6050) 2024-05-27 09:21:30 -03:00
oobabooga 8df68b05e9 Remove MinPLogitsWarper (it's now a transformers built-in) 2024-05-27 05:03:30 -07:00
oobabooga 4f1e96b9e3 Downloader: Add --model-dir argument, respect --model-dir in the UI 2024-05-23 20:42:46 -07:00
oobabooga ad54d524f7 Revert "Fix stopping strings for llama-3 and phi (#6043)"
This reverts commit 5499bc9bc8.
2024-05-22 17:18:08 -07:00
oobabooga 5499bc9bc8
Fix stopping strings for llama-3 and phi (#6043) 2024-05-22 13:53:59 -03:00
oobabooga 9e189947d1 Minor fix after bd7cc4234d (thanks @belladoreai) 2024-05-21 10:37:30 -07:00
oobabooga ae86292159 Fix getting Phi-3-small-128k-instruct logits 2024-05-21 10:35:00 -07:00
oobabooga bd7cc4234d
Backend cleanup (#6025) 2024-05-21 13:32:02 -03:00
Philipp Emanuel Weidmann 852c943769
DRY: A modern repetition penalty that reliably prevents looping (#5677) 2024-05-19 23:53:47 -03:00
oobabooga 9f77ed1b98
--idle-timeout flag to unload the model if unused for N minutes (#6026) 2024-05-19 23:29:39 -03:00
altoiddealer 818b4e0354
Let grammar escape backslashes (#5865) 2024-05-19 20:26:09 -03:00
Tisjwlf 907702c204
Fix gguf multipart file loading (#5857) 2024-05-19 20:22:09 -03:00
A0nameless0man 5cb59707f3
fix: grammar not supporting utf-8 (#5900) 2024-05-19 20:10:39 -03:00
Samuel Wein b63dc4e325
UI: Warn user if they are trying to load a model from no path (#6006) 2024-05-19 20:05:17 -03:00
chr 6b546a2c8b
llama.cpp: increase the max threads from 32 to 256 (#5889) 2024-05-19 20:02:19 -03:00
oobabooga a38a37b3b3 llama.cpp: default n_gpu_layers to the maximum value for the model automatically 2024-05-19 10:57:42 -07:00
oobabooga a4611232b7 Make --verbose output less spammy 2024-05-18 09:57:00 -07:00
oobabooga e9c9483171 Improve the logging messages while loading models 2024-05-03 08:10:44 -07:00
oobabooga e61055253c Bump llama-cpp-python to 0.2.69, add --flash-attn option 2024-05-03 04:31:22 -07:00
oobabooga 51fb766bea
Add back my llama-cpp-python wheels, bump to 0.2.65 (#5964) 2024-04-30 09:11:31 -03:00
oobabooga dfdb6fee22 Set llm_int8_enable_fp32_cpu_offload=True for --load-in-4bit
To allow for 32-bit CPU offloading (it's very slow).
2024-04-26 09:39:27 -07:00
oobabooga 70845c76fb
Add back the max_updates_second parameter (#5937) 2024-04-26 10:14:51 -03:00
oobabooga 6761b5e7c6
Improved instruct style (with syntax highlighting & LaTeX rendering) (#5936) 2024-04-26 10:13:11 -03:00
oobabooga 4094813f8d Lint 2024-04-24 09:53:41 -07:00
oobabooga 64e2a9a0a7 Fix the Phi-3 template when used in the UI 2024-04-24 01:34:11 -07:00
oobabooga f0538efb99 Remove obsolete --tensorcores references 2024-04-24 00:31:28 -07:00
Colin f3c9103e04
Revert walrus operator for params['max_memory'] (#5878) 2024-04-24 01:09:14 -03:00
oobabooga 9b623b8a78
Bump llama-cpp-python to 0.2.64, use official wheels (#5921) 2024-04-23 23:17:05 -03:00
oobabooga f27e1ba302
Add a /v1/internal/chat-prompt endpoint (#5879) 2024-04-19 00:24:46 -03:00
oobabooga e158299fb4 Fix loading sharded GGUF models through llamacpp_HF 2024-04-11 14:50:05 -07:00
wangshuai09 fd4e46bce2
Add Ascend NPU support (basic) (#5541) 2024-04-11 18:42:20 -03:00
Ashley Kleynhans 70c637bf90
Fix saving of UI defaults to settings.yaml - Fixes #5592 (#5794) 2024-04-11 18:19:16 -03:00
oobabooga 3e3a7c4250 Bump llama-cpp-python to 0.2.61 & fix the crash 2024-04-11 14:15:34 -07:00
Victorivus c423d51a83
Fix issue #5783 for character images with transparency (#5827) 2024-04-11 02:23:43 -03:00
Alex O'Connell b94cd6754e
UI: Respect model and lora directory settings when downloading files (#5842) 2024-04-11 01:55:02 -03:00
oobabooga 17c4319e2d Fix loading command-r context length metadata 2024-04-10 21:39:59 -07:00
oobabooga cbd65ba767
Add a simple min_p preset, make it the default (#5836) 2024-04-09 12:50:16 -03:00
oobabooga d02744282b Minor logging change 2024-04-06 18:56:58 -07:00
oobabooga dd6e4ac55f Prevent double <BOS_TOKEN> with Command R+ 2024-04-06 13:14:32 -07:00
oobabooga 1bdceea2d4 UI: Focus on the chat input after starting a new chat 2024-04-06 12:57:57 -07:00
oobabooga 168a0f4f67 UI: do not load the "gallery" extension by default 2024-04-06 12:43:21 -07:00
oobabooga 64a76856bd Metadata: Fix loading Command R+ template with multiple options 2024-04-06 07:32:17 -07:00
oobabooga 1b87844928 Minor fix 2024-04-05 18:43:43 -07:00
oobabooga 6b7f7555fc Logging message to make transformers loader a bit more transparent 2024-04-05 18:40:02 -07:00
oobabooga 0f536dd97d UI: Fix the "Show controls" action 2024-04-05 12:18:33 -07:00
oobabooga 308452b783 Bitsandbytes: load preconverted 4bit models without additional flags 2024-04-04 18:10:24 -07:00
oobabooga d423021a48
Remove CTransformers support (#5807) 2024-04-04 20:23:58 -03:00
oobabooga 13fe38eb27 Remove specialized code for gpt-4chan 2024-04-04 16:11:47 -07:00
oobabooga 9ab7365b56 Read rope_theta for DBRX model (thanks turboderp) 2024-04-01 20:25:31 -07:00
oobabooga db5f6cd1d8 Fix ExLlamaV2 loaders using unnecessary "bits" metadata 2024-03-30 21:51:39 -07:00
oobabooga 624faa1438 Fix ExLlamaV2 context length setting (closes #5750) 2024-03-30 21:33:16 -07:00
oobabooga 9653a9176c Minor improvements to Parameters tab 2024-03-29 10:41:24 -07:00
oobabooga 35da6b989d
Organize the parameters tab (#5767) 2024-03-28 16:45:03 -03:00
Yiximail 8c9aca239a
Fix prompt incorrectly set to empty when suffix is empty string (#5757) 2024-03-26 16:33:09 -03:00
oobabooga 2a92a842ce
Bump gradio to 4.23 (#5758) 2024-03-26 16:32:20 -03:00
oobabooga 49b111e2dd Lint 2024-03-17 08:33:23 -07:00
oobabooga d890c99b53 Fix StreamingLLM when content is removed from the beginning of the prompt 2024-03-14 09:18:54 -07:00
oobabooga d828844a6f Small fix: don't save truncation_length to settings.yaml
It should derive from model metadata or from a command-line flag.
2024-03-14 08:56:28 -07:00
oobabooga 2ef5490a36 UI: make light theme less blinding 2024-03-13 08:23:16 -07:00
oobabooga 40a60e0297 Convert attention_sink_size to int (closes #5696) 2024-03-13 08:15:49 -07:00
oobabooga edec3bf3b0 UI: avoid caching convert_to_markdown calls during streaming 2024-03-13 08:14:34 -07:00
oobabooga 8152152dd6 Small fix after 28076928ac 2024-03-11 19:56:35 -07:00
oobabooga 28076928ac
UI: Add a new "User description" field for user personality/biography (#5691) 2024-03-11 23:41:57 -03:00
oobabooga 63701f59cf UI: mention that n_gpu_layers > 0 is necessary for the GPU to be used 2024-03-11 18:54:15 -07:00
oobabooga 46031407b5 Increase the cache size of convert_to_markdown to 4096 2024-03-11 18:43:04 -07:00
oobabooga 9eca197409 Minor logging change 2024-03-11 16:31:13 -07:00
oobabooga afadc787d7 Optimize the UI by caching convert_to_markdown calls 2024-03-10 20:10:07 -07:00
oobabooga 056717923f Document StreamingLLM 2024-03-10 19:15:23 -07:00
oobabooga 15d90d9bd5 Minor logging change 2024-03-10 18:20:50 -07:00
oobabooga cf0697936a Optimize StreamingLLM by over 10x 2024-03-08 21:48:28 -08:00
oobabooga afb51bd5d6
Add StreamingLLM for llamacpp & llamacpp_HF (2nd attempt) (#5669) 2024-03-09 00:25:33 -03:00
oobabooga 549bb88975 Increase height of "Custom stopping strings" UI field 2024-03-08 12:54:30 -08:00
oobabooga 238f69accc Move "Command for chat-instruct mode" to the main chat tab (closes #5634) 2024-03-08 12:52:52 -08:00
oobabooga bae14c8f13 Right-truncate long chat completion prompts instead of left-truncating
Instructions are usually at the beginning of the prompt.
2024-03-07 08:50:24 -08:00
Bartowski 104573f7d4
Update cache_4bit documentation (#5649)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-03-07 13:08:21 -03:00
oobabooga 2ec1d96c91
Add cache_4bit option for ExLlamaV2 (#5645) 2024-03-06 23:02:25 -03:00
oobabooga 2174958362
Revert gradio to 3.50.2 (#5640) 2024-03-06 11:52:46 -03:00
oobabooga d61e31e182
Save the extensions after Gradio 4 (#5632) 2024-03-05 07:54:34 -03:00
oobabooga 63a1d4afc8
Bump gradio to 4.19 (#5522) 2024-03-05 07:32:28 -03:00
oobabooga f697cb4609 Move update_wizard_windows.sh to update_wizard_windows.bat (oops) 2024-03-04 19:26:24 -08:00
kalomaze cfb25c9b3f
Cubic sampling w/ curve param (#5551)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-03-03 13:22:21 -03:00
oobabooga 09b13acfb2 Perplexity evaluation: print to terminal after calculation is finished 2024-02-28 19:58:21 -08:00
oobabooga 4164e29416 Block the "To create a public link, set share=True" gradio message 2024-02-25 15:06:08 -08:00
oobabooga d34126255d Fix loading extensions with "-" in the name (closes #5557) 2024-02-25 09:24:52 -08:00
oobabooga 10aedc329f Logging: more readable messages when renaming chat histories 2024-02-22 07:57:06 -08:00
oobabooga faf3bf2503 Perplexity evaluation: make UI events more robust (attempt) 2024-02-22 07:13:22 -08:00
oobabooga ac5a7a26ea Perplexity evaluation: add some informative error messages 2024-02-21 20:20:52 -08:00
oobabooga 59032140b5 Fix CFG with llamacpp_HF (2nd attempt) 2024-02-19 18:35:42 -08:00
oobabooga c203c57c18 Fix CFG with llamacpp_HF 2024-02-19 18:09:49 -08:00
oobabooga ae05d9830f Replace {{char}}, {{user}} in the chat template itself 2024-02-18 19:57:54 -08:00
oobabooga 1f27bef71b
Move chat UI elements to the right on desktop (#5538) 2024-02-18 14:32:05 -03:00
oobabooga d6bd71db7f ExLlamaV2: fix loading when autosplit is not set 2024-02-17 12:54:37 -08:00
oobabooga af0bbf5b13 Lint 2024-02-17 09:01:04 -08:00