Commit graph

1516 commits

Author SHA1 Message Date
oobabooga 725639118a UI: Use a tab length of 2 for lists (rather than 4) 2025-01-01 13:53:50 -08:00
oobabooga 7b88724711 Make responses start faster by removing unnecessary cleanup calls (#6625) 2025-01-01 18:33:38 -03:00
oobabooga 64853f8509 Reapply a necessary change that I removed from #6599 (thanks @mamei16!) 2024-12-31 14:43:22 -08:00
mamei16 e953af85cd Fix newlines in the markdown renderer (#6599)
Co-authored-by: oobabooga <oobabooga4@gmail.com>
2024-12-31 01:04:02 -03:00
oobabooga 39a5c9a49c UI organization (#6618) 2024-12-29 11:16:17 -03:00
oobabooga 0490ee620a UI: increase the threshold for a <li> to be considered long (some more) 2024-12-19 16:51:34 -08:00
oobabooga 89888bef56 UI: increase the threshold for a <li> to be considered long 2024-12-19 14:38:36 -08:00
oobabooga 2acec386fc UI: improve the streaming cursor 2024-12-19 14:08:56 -08:00
oobabooga e2fb86e5df UI: further improve the style of lists and headings 2024-12-19 13:59:24 -08:00
oobabooga c48e4622e8 UI: update a link 2024-12-18 06:28:14 -08:00
oobabooga b27f6f8915 Lint 2024-12-17 20:13:32 -08:00
oobabooga b051e2c161 UI: improve a margin for readability 2024-12-17 19:58:21 -08:00
oobabooga 60c93e0c66 UI: Set cache_type to fp16 by default 2024-12-17 19:44:20 -08:00
oobabooga ddccc0d657 UI: minor change to log messages 2024-12-17 19:39:00 -08:00
oobabooga 3030c79e8c UI: show progress while loading a model 2024-12-17 19:37:43 -08:00
Diner Burger addad3c63e Allow more granular KV cache settings (#6561) 2024-12-17 17:43:48 -03:00
oobabooga c43ee5db11 UI: very minor color change 2024-12-17 07:59:55 -08:00
oobabooga d769618591 Improved UI (#6575) 2024-12-17 00:47:41 -03:00
oobabooga 350758f81c UI: Fix the history upload event 2024-11-19 20:34:53 -08:00
oobabooga d01293861b Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-11-18 10:15:36 -08:00
oobabooga 3d19746a5d UI: improve HTML rendering for lists with sub-lists 2024-11-18 10:14:09 -08:00
mefich 1c937dad72 Filter whitespaces in downloader fields in model tab (#6518) 2024-11-18 12:01:40 -03:00
PIRI e1061ba7e3 Make token bans work again on HF loaders (#6488) 2024-10-24 15:24:02 -03:00
oobabooga 2468cfd8bb Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-10-14 13:25:27 -07:00
oobabooga bb62e796eb Fix locally compiled llama-cpp-python failing to import 2024-10-14 13:24:13 -07:00
oobabooga c9a9f63d1b Fix llama.cpp loader not being random (thanks @reydeljuego12345) 2024-10-14 13:07:07 -07:00
PIRI 03a2e70054 Fix temperature_last when temperature not in sampler priority (#6439) 2024-10-09 11:25:14 -03:00
oobabooga 49dfa0adaf Fix the "save preset" event 2024-10-01 11:20:48 -07:00
oobabooga 93c250b9b6 Add a UI element for enable_tp 2024-10-01 11:16:15 -07:00
oobabooga cca9d6e22d Lint 2024-10-01 10:21:06 -07:00
oobabooga 4d9ce586d3 Update llama_cpp_python_hijack.py, fix llamacpp_hf 2024-09-30 14:49:21 -07:00
oobabooga bbdeed3cf4 Make sampler priority high if unspecified 2024-09-29 20:45:27 -07:00
Manuel Schmid 0f90a1b50f Do not set value for histories in chat when --multi-user is used (#6317) 2024-09-29 01:08:55 -03:00
oobabooga c61b29b9ce Simplify the warning when flash-attn fails to import 2024-09-28 20:33:17 -07:00
oobabooga b92d7fd43e Add warnings for when AutoGPTQ, TensorRT-LLM, or HQQ are missing 2024-09-28 20:30:24 -07:00
oobabooga 7276dca933 Fix a typo 2024-09-27 20:28:17 -07:00
RandoInternetPreson 46996f6519 ExllamaV2 tensor parallelism to increase multi gpu inference speeds (#6356) 2024-09-28 00:26:03 -03:00
Philipp Emanuel Weidmann 301375834e Exclude Top Choices (XTC): A sampler that boosts creativity, breaks writing clichés, and inhibits non-verbatim repetition (#6335) 2024-09-27 22:50:12 -03:00
oobabooga 5c918c5b2d Make it possible to sort DRY 2024-09-27 15:40:48 -07:00
oobabooga 7424f789bf Fix the sampling monkey patch (and add more options to sampler_priority) (#6411) 2024-09-27 19:03:25 -03:00
oobabooga bba5b36d33 Don't import PEFT unless necessary 2024-09-03 19:40:53 -07:00
oobabooga c5b40eb555 llama.cpp: prevent prompt evaluation progress bar with just 1 step 2024-09-03 17:37:06 -07:00
GralchemOz 4c74c7a116 Fix UnicodeDecodeError for BPE-based Models (especially GLM-4) (#6357) 2024-09-02 23:00:59 -03:00
oobabooga fd9cb26619 UI: update the DRY parameters descriptions/order 2024-08-19 19:40:17 -07:00
oobabooga e926c03b3d Add a --tokenizer-dir command-line flag for llamacpp_HF 2024-08-06 19:41:18 -07:00
oobabooga 30b4d8c8b2 Fix Llama 3.1 template including lengthy "tools" headers 2024-07-29 11:52:17 -07:00
oobabooga 9dcff21da9 Remove unnecessary shared.previous_model_name variable 2024-07-28 18:35:11 -07:00
oobabooga 514fb2e451 Fix UI error caused by --idle-timeout 2024-07-28 18:30:06 -07:00
oobabooga 5223c009fe Minor change after previous commit 2024-07-27 23:13:34 -07:00
oobabooga 7050bb880e UI: make n_ctx/max_seq_len/truncation_length numbers rather than sliders 2024-07-27 23:11:53 -07:00
Harry 078e8c8969 Make compress_pos_emb float (#6276) 2024-07-28 03:03:19 -03:00
oobabooga ffc713f72b UI: fix multiline LaTeX equations 2024-07-27 15:36:10 -07:00
oobabooga 493f8c3242 UI: remove animation after clicking on "Stop" in the Chat tab 2024-07-27 15:22:34 -07:00
oobabooga e4d411b841 UI: fix rendering LaTeX enclosed between \[ and \] 2024-07-27 15:21:44 -07:00
oobabooga f32d26240d UI: Fix the chat "stop" event 2024-07-26 23:03:05 -07:00
oobabooga b80d5906c2 UI: fix saving characters 2024-07-25 15:09:31 -07:00
oobabooga 42e80108f5 UI: clear the markdown LRU cache when using the default/notebook tabs 2024-07-25 08:01:42 -07:00
oobabooga 7e2851e505 UI: fix "Command for chat-instruct mode" not appearing by default 2024-07-24 15:04:12 -07:00
oobabooga 947016d010 UI: make the markdown LRU cache infinite (for really long conversations) 2024-07-24 11:54:26 -07:00
oobabooga e637b702ff UI: make text between quotes colored in chat mode 2024-07-23 21:30:32 -07:00
oobabooga 1815877061 UI: fix the default character not loading correctly on startup 2024-07-23 18:48:10 -07:00
oobabooga e6181e834a Remove AutoAWQ as a standalone loader
(it works better through transformers)
2024-07-23 15:31:17 -07:00
oobabooga f18c947a86 Update the tensorcores description 2024-07-22 18:06:41 -07:00
oobabooga aa809e420e Bump llama-cpp-python to 0.2.83, add back tensorcore wheels
Also add back the progress bar patch
2024-07-22 18:05:11 -07:00
oobabooga 11bbf71aa5 Bump back llama-cpp-python (#6257) 2024-07-22 16:19:41 -03:00
oobabooga 0f53a736c1 Revert the llama-cpp-python update 2024-07-22 12:02:25 -07:00
oobabooga a687f950ba Remove the tensorcores llama.cpp wheels
They are not faster than the default wheels anymore and they use a lot of space.
2024-07-22 11:54:35 -07:00
oobabooga 017d2332ea Remove no longer necessary llama-cpp-python patch 2024-07-22 11:50:36 -07:00
oobabooga f2d802e707 UI: make Default/Notebook contents persist on page reload 2024-07-22 11:07:10 -07:00
oobabooga 8768b69a2d Lint 2024-07-21 22:08:14 -07:00
oobabooga 79e8dbe45f UI: minor optimization 2024-07-21 22:06:49 -07:00
oobabooga 7ef2414357 UI: Make the file saving dialogs more robust 2024-07-21 15:38:20 -07:00
oobabooga 423372d6e7 Organize ui_file_saving.py 2024-07-21 13:23:18 -07:00
oobabooga 17df2d7bdf UI: don't export the instruction template on "Save UI defaults to settings.yaml" 2024-07-21 10:45:01 -07:00
oobabooga d05846eae5 UI: refresh the pfp cache on handle_your_picture_change 2024-07-21 10:17:22 -07:00
oobabooga e9d4bff7d0 Update the --tensor_split description 2024-07-20 22:04:48 -07:00
oobabooga 916d1d8283 UI: improve the style of code blocks in light theme 2024-07-20 20:32:57 -07:00
oobabooga 564d8c8c0d Make alpha_value a float number 2024-07-20 20:02:54 -07:00
oobabooga 79c4d3da3d Optimize the UI (#6251) 2024-07-21 00:01:42 -03:00
Alberto Cano a14c510afb Customize the subpath for gradio, use with reverse proxy (#5106) 2024-07-20 19:10:39 -03:00
Vhallo a9a6d72d8c Use gr.Number for RoPE scaling parameters (#6233)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-07-20 18:57:09 -03:00
oobabooga aa7c14a463 Use chat-instruct mode by default 2024-07-19 21:43:52 -07:00
InvectorGator 4148a9201f Fix for MacOS users encountering model load errors (#6227)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: Invectorgator <Kudzu12gaming@outlook.com>
2024-07-13 00:04:19 -03:00
oobabooga e436d69e2b Add --no_xformers and --no_sdpa flags for ExllamaV2 2024-07-11 15:47:37 -07:00
oobabooga 512b311137 Improve the llama-cpp-python exception messages 2024-07-11 13:00:29 -07:00
oobabooga f957b17d18 UI: update an obsolete message 2024-07-10 06:01:36 -07:00
oobabooga c176244327 UI: Move cache_8bit/cache_4bit further up 2024-07-05 12:16:21 -07:00
oobabooga aa653e3b5a Prevent llama.cpp from being monkey patched more than once (closes #6201) 2024-07-05 03:34:15 -07:00
oobabooga a210e61df1 UI: Fix broken chat histories not showing (closes #6196) 2024-07-04 20:31:25 -07:00
oobabooga e79e7b90dc UI: Move the cache_8bit and cache_4bit elements up 2024-07-04 20:21:28 -07:00
oobabooga 8b44d7b12a Lint 2024-07-04 20:16:44 -07:00
oobabooga a47de06088 Force only 1 llama-cpp-python version at a time for now 2024-07-04 19:43:34 -07:00
oobabooga f243b4ca9c Make llama-cpp-python not crash immediately 2024-07-04 19:16:00 -07:00
oobabooga 907137a13d Automatically set bf16 & use_eager_attention for Gemma-2 2024-07-01 21:46:35 -07:00
GralchemOz 8a39f579d8 transformers: Add eager attention option to make Gemma-2 work properly (#6188) 2024-07-01 12:08:08 -03:00
oobabooga ed01322763 Obtain the EOT token from the jinja template (attempt)
To use as a stopping string.
2024-06-30 15:09:22 -07:00
oobabooga 4ea260098f llama.cpp: add 4-bit/8-bit kv cache options 2024-06-29 09:10:33 -07:00
oobabooga 220c1797fc UI: do not show the "save character" button in the Chat tab 2024-06-28 22:11:31 -07:00
oobabooga 8803ae1845 UI: decrease the number of lines for "Command for chat-instruct mode" 2024-06-28 21:41:30 -07:00
oobabooga 5c6b9c610d UI: allow the character dropdown to coexist in the Chat tab and the Parameters tab (#6177) 2024-06-29 01:20:27 -03:00