Commit graph

695 commits

Author SHA1 Message Date
oobabooga 08c622df2e Autodetect rms_norm_eps and n_gqa for llama-2-70b 2023-07-24 15:27:34 -07:00
oobabooga a07d070b6c Add llama-2-70b GGML support (#3285) 2023-07-24 16:37:03 -03:00
jllllll d7a14174a2 Remove auto-loading when only one model is available (#3187) 2023-07-18 11:39:08 -03:00
oobabooga f83fdb9270 Don't reset LoRA menu when loading a model 2023-07-17 12:50:25 -07:00
oobabooga 2de0cedce3 Fix reload screen color 2023-07-15 22:39:39 -07:00
oobabooga 27a84b4e04 Make AutoGPTQ the default again
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga 5e3f7e00a9 Create llamacpp_HF loader (#3062) 2023-07-16 02:21:13 -03:00
Panchovix 7c4d4fc7d3 Increase alpha value limit for NTK RoPE scaling for exllama/exllama_HF (#3149) 2023-07-16 01:56:04 -03:00
oobabooga b284f2407d Make ExLlama_HF the new default for GPTQ 2023-07-14 14:03:56 -07:00
oobabooga 22341e948d Merge branch 'main' into dev 2023-07-12 14:19:49 -07:00
oobabooga 0e6295886d Fix lora download folder 2023-07-12 14:19:33 -07:00
oobabooga eb823fce96 Fix typo 2023-07-12 13:55:19 -07:00
oobabooga d0a626f32f Change reload screen color 2023-07-12 13:54:43 -07:00
oobabooga c592a9b740 Fix #3117 2023-07-12 13:33:44 -07:00
Gabriel Pena eedb3bf023 Add low vram mode on llama cpp (#3076) 2023-07-12 11:05:13 -03:00
Axiom Wolf d986c17c52 Chat history download creates more detailed file names (#3051) 2023-07-12 00:10:36 -03:00
Salvador E. Tropea 324e45b848 [Fixed] wbits and groupsize values from model not shown (#2977) 2023-07-11 23:27:38 -03:00
oobabooga bfafd07f44 Change a message 2023-07-11 18:29:20 -07:00
micsthepick 3708de2b1f respect model dir for downloads (#3077) (#3079) 2023-07-11 18:55:46 -03:00
oobabooga 9aee1064a3 Block a cloudfare request 2023-07-06 22:24:52 -07:00
oobabooga 40c5722499 Fix #2998 2023-07-04 11:35:25 -03:00
oobabooga 55457549cd Add information about presets to the UI 2023-07-03 22:39:01 -07:00
Panchovix 10c8c197bf Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) 2023-07-04 01:13:16 -03:00
FartyPants eb6112d5a2 Update server.py - clear LORA after reload (#2952) 2023-07-04 00:13:38 -03:00
oobabooga 4b1804a438 Implement sessions + add basic multi-user support (#2991) 2023-07-04 00:03:30 -03:00
missionfloyd ac0f96e785 Some more character import tweaks. (#2921) 2023-06-29 14:56:25 -03:00
oobabooga 5d2a8b31be Improve Parameters tab UI 2023-06-29 14:33:47 -03:00
oobabooga 3443219cbc Add repetition penalty range parameter to transformers (#2916) 2023-06-29 13:40:13 -03:00
oobabooga 22d455b072 Add LoRA support to ExLlama_HF 2023-06-26 00:10:33 -03:00
oobabooga b7c627f9a0 Set UI defaults 2023-06-25 22:55:43 -03:00
oobabooga c52290de50 ExLlama with long context (#2875) 2023-06-25 22:49:26 -03:00
oobabooga f0fcd1f697 Sort some imports 2023-06-25 01:44:36 -03:00
oobabooga e6e5f546b8 Reorganize Chat settings tab 2023-06-25 01:10:20 -03:00
jllllll bef67af23c Use pre-compiled python module for ExLlama (#2770) 2023-06-24 20:24:17 -03:00
missionfloyd 51a388fa34 Organize chat history/character import menu (#2845)
* Organize character import menu

* Move Chat history upload/download labels
2023-06-24 09:55:02 -03:00
oobabooga 3ae9af01aa Add --no_use_cuda_fp16 param for AutoGPTQ 2023-06-23 12:22:56 -03:00
LarryVRH 580c1ee748 Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777) 2023-06-21 15:31:42 -03:00
Morgan Schweers 447569e31a Add a download progress bar to the web UI. (#2472)
* Show download progress on the model screen.

* In case of error, mark as done to clear progress bar.

* Increase the iteration block size to reduce overhead.
2023-06-20 22:59:14 -03:00
oobabooga 09c781b16f Add modules/block_requests.py
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
oobabooga 44f28830d1 Chat CSS: fix ul, li, pre styles + remove redefinitions 2023-06-18 15:20:51 -03:00
oobabooga 239b11c94b Minor bug fixes 2023-06-17 17:57:56 -03:00
oobabooga 1e400218e9 Fix a typo 2023-06-16 21:01:57 -03:00
oobabooga 5f392122fd Add gpu_split param to ExLlama
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga 83be8eacf0 Minor fix 2023-06-16 20:38:32 -03:00
oobabooga 9f40032d32 Add ExLlama support (#2444) 2023-06-16 20:35:38 -03:00
oobabooga dea43685b0 Add some clarifications 2023-06-16 19:10:53 -03:00
oobabooga 7ef6a50e84 Reorganize model loading UI completely (#2720) 2023-06-16 19:00:37 -03:00
Tom Jobbins 646b0c889f AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) 2023-06-15 23:59:54 -03:00
oobabooga 474dc7355a Allow API requests to use parameter presets 2023-06-14 11:32:20 -03:00
FartyPants 9f150aedc3 A small UI change in Models menu (#2640) 2023-06-12 01:24:44 -03:00
oobabooga da5d9a28d8 Fix tabbed extensions showing up at the bottom of the UI 2023-06-11 21:20:51 -03:00
oobabooga ae5e2b3470 Reorganize a bit 2023-06-11 19:50:20 -03:00
oobabooga f4defde752 Add a menu for installing extensions 2023-06-11 17:11:06 -03:00
oobabooga 8e73806b20 Improve "Interface mode" appearance 2023-06-11 15:29:45 -03:00
oobabooga ac122832f7 Make dropdown menus more similar to automatic1111 2023-06-11 14:20:16 -03:00
oobabooga 6133675e0f Add menus for saving presets/characters/instruction templates/prompts (#2621) 2023-06-11 12:19:18 -03:00
brandonj60 b04e18d10c Add Mirostat v2 sampling to transformer models (#2571) 2023-06-09 21:26:31 -03:00
oobabooga eb2601a8c3 Reorganize Parameters tab 2023-06-06 14:51:02 -03:00
oobabooga f06a1387f0 Reorganize Models tab 2023-06-06 07:58:07 -03:00
oobabooga d49d299b67 Change a message 2023-06-06 07:54:56 -03:00
oobabooga 7ed1e35fbf Reorganize Parameters tab in chat mode 2023-06-06 07:46:25 -03:00
oobabooga 00b94847da Remove softprompt support 2023-06-06 07:42:23 -03:00
oobabooga f276d88546 Use AutoGPTQ by default for GPTQ models 2023-06-05 15:41:48 -03:00
oobabooga 6a75bda419 Assign some 4096 seq lengths 2023-06-05 12:07:52 -03:00
oobabooga 19f78684e6 Add "Start reply with" feature to chat mode 2023-06-02 13:58:08 -03:00
oobabooga 28198bc15c Change some headers 2023-06-02 11:28:43 -03:00
oobabooga 5177cdf634 Change AutoGPTQ info 2023-06-02 11:19:44 -03:00
oobabooga 8e98633efd Add a description for chat_prompt_size 2023-06-02 11:13:22 -03:00
oobabooga 5a8162a46d Reorganize models tab 2023-06-02 02:24:15 -03:00
oobabooga 2f6631195a Add desc_act checkbox to the UI 2023-06-02 01:45:46 -03:00
Morgan Schweers 1aed2b9e52 Make it possible to download protected HF models from the command line. (#2408) 2023-06-01 00:11:21 -03:00
oobabooga 486ddd62df Add tfs and top_a to the API examples 2023-05-31 23:44:38 -03:00
oobabooga 3209440b7c Rearrange chat buttons 2023-05-30 00:17:31 -03:00
Luis Lopez 9e7204bef4 Add tail-free and top-a sampling (#2357) 2023-05-29 21:40:01 -03:00
oobabooga 1394f44e14 Add triton checkbox for AutoGPTQ 2023-05-29 15:32:45 -03:00
Honkware 204731952a Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
oobabooga f27135bdd3 Add Eta Sampling preset
Also remove some presets that I do not consider relevant
2023-05-28 22:44:35 -03:00
oobabooga 00ebea0b2a Use YAML for presets and settings 2023-05-28 22:34:12 -03:00
oobabooga fc33216477 Small fix for n_ctx in llama.cpp 2023-05-25 13:55:51 -03:00
oobabooga 37d4ad012b Add a button for rendering markdown for any model 2023-05-25 11:59:27 -03:00
DGdev91 cf088566f8 Make llama.cpp read prompt size and seed from settings (#2299) 2023-05-25 10:29:31 -03:00
oobabooga 361451ba60 Add --load-in-4bit parameter (#2320) 2023-05-25 01:14:13 -03:00
Gabriel Terrien fc116711b0 FIX save_model_settings function to also update shared.model_config (#2282) 2023-05-24 10:01:07 -03:00
flurb18 d37a28730d Beginning of multi-user support (#2262)
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
Gabriel Terrien 7aed53559a Support of the --gradio-auth flag (#2283) 2023-05-23 20:39:26 -03:00
oobabooga 8b9ba3d7b4 Fix a typo 2023-05-22 20:13:03 -03:00
Gabriel Terrien 0f51b64bb3 Add a "dark_theme" option to settings.json (#2288) 2023-05-22 19:45:11 -03:00
oobabooga c5446ae0e2 Fix a link 2023-05-22 19:38:34 -03:00
oobabooga c0fd7f3257 Add mirostat parameters for llama.cpp (#2287) 2023-05-22 19:37:24 -03:00
oobabooga ec7437f00a Better way to toggle light/dark mode 2023-05-22 03:19:01 -03:00
oobabooga d46f5a58a3 Add a button for toggling dark/light mode 2023-05-22 03:11:44 -03:00
oobabooga 753f6c5250 Attempt at making interface restart more robust 2023-05-22 00:26:07 -03:00
oobabooga 30225b9dd0 Fix --no-stream queue bug 2023-05-22 00:02:59 -03:00
oobabooga 288912baf1 Add a description for the extensions checkbox group 2023-05-21 23:33:37 -03:00
oobabooga 6e77844733 Add a description for penalty_alpha 2023-05-21 23:09:30 -03:00
oobabooga e3d578502a Improve "Chat settings" tab appearance a bit 2023-05-21 22:58:14 -03:00
oobabooga e116d31180 Prevent unwanted log messages from modules 2023-05-21 22:42:34 -03:00
oobabooga d7fabe693d Reorganize parameters tab 2023-05-21 16:24:47 -03:00
oobabooga 8ac3636966 Add epsilon_cutoff/eta_cutoff parameters (#2258) 2023-05-21 15:11:57 -03:00
Matthew McAllister ab6acddcc5 Add Save/Delete character buttons (#1870)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-20 21:48:45 -03:00
HappyWorldGames a3e9769e31 Added an audible notification after text generation in web. (#1277)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-19 23:16:06 -03:00
oobabooga f052ab9c8f Fix setting pre_layer from within the ui 2023-05-17 23:17:44 -03:00
oobabooga fd743a0207 Small change 2023-05-17 02:34:29 -03:00
LoopLooter aeb1b7a9c5 feature to save prompts with custom names (#1583)
---------

Co-authored-by: LoopLooter <looplooter>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-17 02:30:45 -03:00
oobabooga 85f74961f9 Update "Interface mode" tab 2023-05-17 01:57:51 -03:00
oobabooga ce21804ec7 Allow extensions to define a new tab 2023-05-17 01:31:56 -03:00
oobabooga a84f499718 Allow extensions to define custom CSS and JS 2023-05-17 00:30:54 -03:00
oobabooga 824fa8fc0e Attempt at making interface restart more robust 2023-05-16 22:27:43 -03:00
oobabooga 7584d46c29 Refactor models.py (#2113) 2023-05-16 19:52:22 -03:00
oobabooga 5cd6dd4287 Fix no-mmap bug 2023-05-16 17:35:49 -03:00
oobabooga 89e37626ab Reorganize chat settings tab 2023-05-16 17:22:59 -03:00
Jakub Strnad 0227e738ed Add settings UI for llama.cpp and fixed reloading of llama.cpp models (#2087) 2023-05-15 19:51:23 -03:00
oobabooga 3b886f9c9f Add chat-instruct mode (#2049) 2023-05-14 10:43:55 -03:00
oobabooga 437d1c7ead Fix bug in save_model_settings 2023-05-12 14:33:00 -03:00
oobabooga 146a9cb393 Allow superbooga to download URLs in parallel 2023-05-12 14:19:55 -03:00
oobabooga e283ddc559 Change how spaces are handled in continue/generation attempts 2023-05-12 12:50:29 -03:00
oobabooga 5eaa914e1b Fix settings.json being ignored because of config.yaml 2023-05-12 06:09:45 -03:00
oobabooga a77965e801 Make the regex for "Save settings for this model" exact 2023-05-12 00:43:13 -03:00
oobabooga f7dbddfff5 Add a variable for tts extensions to use 2023-05-11 16:12:46 -03:00
oobabooga 638c6a65a2 Refactor chat functions (#2003) 2023-05-11 15:37:04 -03:00
oobabooga e5b1547849 Fix reload model button 2023-05-10 14:44:25 -03:00
oobabooga 3316e33d14 Remove unused code 2023-05-10 11:59:59 -03:00
oobabooga cd36b8f739 Remove space 2023-05-10 01:41:33 -03:00
oobabooga bdf1274b5d Remove duplicate code 2023-05-10 01:34:04 -03:00
oobabooga 3913155c1f Style improvements (#1957) 2023-05-09 22:49:39 -03:00
Wojtab e9e75a9ec7 Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) 2023-05-09 20:18:02 -03:00
oobabooga 13e7ebfc77 Change a comment 2023-05-09 15:56:32 -03:00
LaaZa 218bd64bd1 Add the option to not automatically load the selected model (#1762)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
Kamil Szurant 641500dcb9 Use current input for Impersonate (continue impersonate feature) (#1147) 2023-05-09 02:37:42 -03:00
oobabooga b5260b24f1 Add support for custom chat styles (#1917) 2023-05-08 12:35:03 -03:00
Matthew McAllister 0c048252b5 Fix character menu when default chat mode is 'instruct' (#1873) 2023-05-07 23:50:38 -03:00
oobabooga 56a5969658 Improve the separation between instruct/chat modes (#1896) 2023-05-07 23:47:02 -03:00
oobabooga 56f6b7052a Sort dropdowns numerically 2023-05-05 23:14:56 -03:00
oobabooga 8aafb1f796 Refactor text_generation.py, add support for custom generation functions (#1817) 2023-05-05 18:53:03 -03:00
Tom Jobbins 876fbb97c0 Allow downloading model from HF branch via UI (#1662)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-05 13:59:01 -03:00
oobabooga 95d04d6a8d Better warning messages 2023-05-03 21:43:17 -03:00
Tom Jobbins 3c67fc0362 Allow groupsize 1024, needed for larger models eg 30B to lower VRAM usage (#1660) 2023-05-02 00:46:26 -03:00
oobabooga a777c058af Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
oobabooga b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
oobabooga caaa556159 Move extensions block definition to the bottom 2023-04-24 03:30:35 -03:00
oobabooga b1ee674d75 Make interface state (mostly) persistent on page reload 2023-04-24 03:05:47 -03:00
oobabooga 47809e28aa Minor changes 2023-04-24 01:04:48 -03:00
Andy Salerno 654933c634 New universal API with streaming/blocking endpoints (#990)
Previous title: Add api_streaming extension and update api-example-stream to use it

* Merge with latest main

* Add parameter capturing encoder_repetition_penalty

* Change some defaults, minor fixes

* Add --api, --public-api flags

* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.

* Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'

* Update the API examples

* Change a comment

* Update README

* Remove the gradio API

* Remove unused import

* Minor change

* Remove unused import

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
oobabooga 2dca8bb25e Sort imports 2023-04-21 17:20:59 -03:00
oobabooga c238ba9532 Add a 'Count tokens' button 2023-04-21 17:18:34 -03:00
oobabooga 2d766d2e19 Improve notebook mode button sizes 2023-04-21 02:37:58 -03:00
oobabooga b4af319fa2 Add a workaround for GALACTICA on some systems 2023-04-19 01:43:10 -03:00
oobabooga 61126f4674 Change the button styles 2023-04-19 00:56:24 -03:00
oobabooga 649e4017a5 Style improvements 2023-04-19 00:36:28 -03:00