Commit graph

389 commits

oobabooga 3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
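
A rough sketch of the idea behind this change, assuming a loader that dispatches on file extension (the helper name and paths are illustrative, not the repo's exact code):

```python
import torch
from safetensors.torch import load_file

def load_quantized_state_dict(checkpoint_path: str) -> dict:
    # .safetensors files load into a plain state dict, so they can feed
    # the same quantized-model setup as a .pt checkpoint.
    if checkpoint_path.endswith(".safetensors"):
        return load_file(checkpoint_path, device="cpu")
    return torch.load(checkpoint_path, map_location="cpu")
```
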
catalpaaa b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
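
A hedged sketch of parsing such a credentials file for Gradio's `auth` parameter (the file name and helper are illustrative):

```python
def read_auth_credentials(path: str) -> list:
    with open(path, encoding="utf-8") as f:
        raw = f.read().strip()
    # Each comma-separated entry looks like "user:password".
    return [tuple(entry.split(":", 1)) for entry in raw.split(",") if entry]

# Gradio accepts a list of (username, password) tuples:
# demo.launch(auth=read_auth_credentials("login.txt"))  # file name illustrative
```
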
oobabooga 9fa47c0eed
Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga a6bf54739c
Revert models.py (accident) 2023-03-24 19:56:45 -03:00
oobabooga 0a16224451
Update GPTQ_loader.py 2023-03-24 19:54:36 -03:00
oobabooga a80aa65986
Update models.py 2023-03-24 19:53:20 -03:00
oobabooga 507db0929d
Do not use empty user messages in chat mode
This allows the bot to send a message on its own when you click Generate with an empty input.
2023-03-24 17:22:22 -03:00
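
A minimal sketch of the behavior described above, assuming a (user, bot) message history; all names are illustrative:

```python
def build_prompt(history, user_name="You", bot_name="Assistant"):
    lines = []
    for user_msg, bot_msg in history:
        if user_msg.strip():  # leave empty user turns out of the prompt
            lines.append(f"{user_name}: {user_msg}")
        lines.append(f"{bot_name}: {bot_msg}")
    return "\n".join(lines)
```
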
oobabooga 6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00
oobabooga bfe960731f
Merge branch 'main' into fix/api-reload 2023-03-24 16:54:41 -03:00
oobabooga 8fad84abc2
Update extensions.py 2023-03-24 16:51:27 -03:00
Forkoz b740c5b284
Add display of context when input was generated
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga 4f5c2ce785
Fix chat_generation_attempts 2023-03-24 02:03:30 -03:00
EyeDeck dcfd866402 Allow loading of .safetensors through GPTQ-for-LLaMa 2023-03-23 21:31:34 -04:00
oobabooga 8747c74339
Another missing import 2023-03-23 22:19:01 -03:00
oobabooga 7078d168c3
Missing import 2023-03-23 22:16:08 -03:00
oobabooga d1327f99f9
Fix broken callbacks.py 2023-03-23 22:12:24 -03:00
oobabooga b0abb327d8
Update LoRA.py 2023-03-23 22:02:09 -03:00
oobabooga bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
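
The cache-clearing step presumably amounts to the standard PyTorch pattern below (a sketch, not necessarily the repo's exact code):

```python
import gc
import torch

def clear_torch_cache():
    # Drop dangling references to the old adapter, then release
    # cached GPU memory before attaching the next LoRA.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```
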
oobabooga 4578e88ffd
Stop the bot from talking for you in chat mode 2023-03-23 21:38:20 -03:00
oobabooga 9bf6ecf9e2
Fix LoRA device map (attempt) 2023-03-23 16:49:41 -03:00
oobabooga c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga 29bd41d453
Fix LoRA in CPU mode 2023-03-23 01:05:13 -03:00
oobabooga eac27f4f55
Make LoRAs work in 16-bit mode 2023-03-23 00:55:33 -03:00
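
For context, attaching a LoRA to a 16-bit base model with PEFT generally looks like this (paths are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in float16, then wrap it with the adapter.
base = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "loras/alpaca-lora-7b")
```
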
oobabooga bfa81e105e
Fix FlexGen streaming 2023-03-23 00:22:14 -03:00
oobabooga de6a09dc7f
Properly separate the original prompt from the reply 2023-03-23 00:12:40 -03:00
wywywywy 61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga 45b7e53565
Only catch proper Exceptions in the text generation function 2023-03-20 20:36:02 -03:00
oobabooga db4219a340
Update comments 2023-03-20 16:40:08 -03:00
oobabooga 7618f3fe8c
Add --gptq-pre-layer for 4-bit offloading (#460)
This works on a 4GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy e96687b1d6 Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga 9a3bed50c3
Attempt at fixing 4-bit with CPU offload 2023-03-20 15:11:56 -03:00
Vladimir Belitskiy ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga 75a7a84ef2
Exception handling (#454)
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
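
A sketch of how MiB-suffixed values might be parsed into an accelerate-style memory map; the actual parsing in the repo may differ:

```python
import re

def parse_gpu_memory(value: str) -> str:
    # Accept "3500MiB" as-is; treat a bare number as GiB.
    if re.fullmatch(r"\d+MiB", value):
        return value
    return f"{int(value)}GiB"

# e.g. a per-device memory map in the accelerate convention:
max_memory = {0: parse_gpu_memory("3500MiB"), "cpu": "64GiB"}
```
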
oobabooga a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
Maya acdbd6b708 Check if app should display extensions ui 2023-03-19 13:31:21 +00:00
Maya 81c9d130f2 Fix global 2023-03-19 13:25:49 +00:00
Maya 099d7a844b Add setup method to extensions 2023-03-19 13:22:24 +00:00
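
Taken together, these three commits suggest a dispatch along these lines, assuming each extension is a module with optional `setup()` and `ui()` hooks (a sketch; function names are illustrative):

```python
def setup_extensions(modules):
    for module in modules:
        if hasattr(module, "setup"):
            module.setup()  # one-time initialization

def maybe_create_extensions_ui(modules, display_ui):
    if not display_ui:
        return
    for module in modules:
        if hasattr(module, "ui"):
            module.ui()  # let the extension add its Gradio widgets
```
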
oobabooga c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga 7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk 7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
Wojtek Kowaluk 30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
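
MPS support boils down to the standard PyTorch device check (the fallback order here is illustrative):

```python
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")   # Apple Silicon GPU
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
```
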
oobabooga 9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga 9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga 3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga 614dad0075 Remove unused import 2023-03-17 11:43:11 -03:00
oobabooga a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga 29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga 214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
askmyteapot 53b6a66beb
Update GPTQ_Loader.py
Correcting decoder layer for renamed class.
2023-03-17 18:34:13 +10:00
oobabooga 0cecfc684c Add files 2023-03-16 21:35:53 -03:00
oobabooga 104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga ee164d1821 Don't split the layers in 8-bit mode by default 2023-03-16 18:22:16 -03:00
oobabooga e085cb4333 Small changes 2023-03-16 13:34:23 -03:00
awoo 83cb20aad8 Add support for --gpu-memory with --load-in-8bit 2023-03-16 18:42:53 +03:00
oobabooga 1c378965e1 Remove unused imports 2023-03-16 10:18:34 -03:00
oobabooga a577fb1077 Keep GALACTICA special tokens (#300) 2023-03-16 00:46:59 -03:00
oobabooga 4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga 66256ac1dd Make the "no GPU has been detected" message more descriptive 2023-03-15 19:31:27 -03:00
oobabooga c1959c26ee Show/hide the extensions block using javascript 2023-03-15 16:35:28 -03:00
oobabooga 348596f634 Fix broken extensions 2023-03-15 15:11:16 -03:00
oobabooga c5f14fb9b8 Optimize the HTML generation speed 2023-03-15 14:19:28 -03:00
oobabooga bf812c4893 Minor fix 2023-03-15 14:05:35 -03:00
oobabooga 05ee323ce5 Rename a file 2023-03-15 13:26:32 -03:00
oobabooga d30a14087f Further reorganize the UI 2023-03-15 13:24:54 -03:00
oobabooga cf2da86352 Prevent *Is typing* from disappearing instantly while streaming 2023-03-15 12:51:13 -03:00
oobabooga ec972b85d1 Move all css/js into separate files 2023-03-15 12:35:11 -03:00
oobabooga 693b53d957 Merge branch 'main' into HideLord-main 2023-03-15 12:08:56 -03:00
oobabooga 1413931705 Add a header bar and redesign the interface (#293) 2023-03-15 12:01:32 -03:00
oobabooga 9d6a625bd6 Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
oobabooga afc5339510
Remove "eval" statements from text generation functions 2023-03-14 16:04:17 -03:00
oobabooga 265ba384b7 Rename a file, add deprecation warning for --load-in-4bit 2023-03-14 07:56:31 -03:00
oobabooga 3da73e409f Merge branch 'main' into Zerogoki00-opt4-bit 2023-03-14 07:50:36 -03:00
oobabooga 3fb8196e16 Implement "*Is recording a voice message...*" for TTS #303 2023-03-13 22:28:00 -03:00
oobabooga 518e5c4244 Some minor fixes to the GPTQ loader 2023-03-13 16:45:08 -03:00
Ayanami Rei 8778b756e6 use updated load_quantized 2023-03-13 22:11:40 +03:00
Ayanami Rei a6a6522b6a determine model type from model name 2023-03-13 22:11:32 +03:00
Ayanami Rei b6c5c57f2e remove default value from argument 2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov 63c5a139a2
Merge branch 'main' into main 2023-03-13 19:50:08 +02:00
Ayanami Rei e1c952c41c make argument non case-sensitive 2023-03-13 20:22:38 +03:00
Ayanami Rei 3c9afd5ca3 rename method 2023-03-13 20:14:40 +03:00
Ayanami Rei 1b99ed61bc add argument --gptq-model-type and remove duplicate arguments 2023-03-13 20:01:34 +03:00
Ayanami Rei edbc61139f use new quant loader 2023-03-13 20:00:38 +03:00
Ayanami Rei 345b6dee8c refactor quant models loader and add support of OPT 2023-03-13 19:59:57 +03:00
oobabooga 66b6971b61 Update README 2023-03-13 12:44:18 -03:00
oobabooga ddea518e0f Document --auto-launch 2023-03-13 12:43:33 -03:00
oobabooga 372363bc3d Fix GPTQ load_quant call on Windows 2023-03-13 12:07:02 -03:00
oobabooga 0c224cf4f4 Fix GALACTICA (#285) 2023-03-13 10:32:28 -03:00
oobabooga 2c4699a7e9 Change a comment 2023-03-13 00:20:02 -03:00
oobabooga 0a7acb3bd9 Remove redundant comments 2023-03-13 00:12:21 -03:00
oobabooga 77294b27dd Use str(Path) instead of os.path.abspath(Path) 2023-03-13 00:08:01 -03:00
oobabooga b9e0712b92 Fix Open Assistant 2023-03-12 23:58:25 -03:00
oobabooga 1ddcd4d0ba Clean up silero_tts
This should only be used with --no-stream.

The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
HideLord 683556f411 Adding markdown support and slight refactoring. 2023-03-12 21:34:09 +02:00
oobabooga cebe8b390d Remove useless "substring_found" variable 2023-03-12 15:50:38 -03:00
oobabooga 4bcd675ccd Add *Is typing...* to regenerate as well 2023-03-12 15:23:33 -03:00
oobabooga c7aa51faa6 Use a list of eos_tokens instead of just a number
This might be the cause of LLaMA ramblings that some people have experienced.
2023-03-12 14:54:58 -03:00
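
One way to stop on any of several EOS token ids is a custom `StoppingCriteria`; this is a sketch of the technique, not the repo's exact class:

```python
import torch
from transformers import StoppingCriteria

class EosListCriteria(StoppingCriteria):
    def __init__(self, eos_token_ids):
        self.eos_token_ids = set(eos_token_ids)

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor, **kwargs) -> bool:
        # Stop as soon as the last generated token is any EOS candidate.
        return input_ids[0, -1].item() in self.eos_token_ids
```
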
oobabooga d8bea766d7
Merge pull request #192 from xanthousm/main
Add text generation stream status to shared module, use for better TTS with auto-play
2023-03-12 13:40:16 -03:00
oobabooga fda376d9c3 Use os.path.abspath() instead of str() 2023-03-12 12:41:04 -03:00
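
For context, the two calls differ only for relative paths:

```python
import os
from pathlib import Path

p = Path("models/llama-7b-hf")
print(str(p))              # 'models/llama-7b-hf' (stays relative)
print(os.path.abspath(p))  # absolute path, anchored at the current directory
```
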
HideLord 8403152257 Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects a string and breaks on Path objects. 2023-03-12 17:28:15 +02:00
oobabooga f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga 65dda28c9d Rename --llama-bits to --gptq-bits 2023-03-12 11:19:07 -03:00
oobabooga fed3617f07 Move LLaMA 4-bit into a separate file 2023-03-12 11:12:34 -03:00
oobabooga 0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 2023-03-12 10:46:16 -03:00
oobabooga 78901d522b Remove unused imports 2023-03-12 08:59:05 -03:00
Xan b3e10e47c0 Fix merge conflict in text_generation
- Needed to update `shared.still_streaming = False` before the final `yield formatted_outputs`, so the position of some yields was shifted.
2023-03-12 18:56:35 +11:00
oobabooga ad14f0e499 Fix regenerate (provisory way) 2023-03-12 03:42:29 -03:00
oobabooga 6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga e2da6b9685 Fix You You You appearing in chat mode 2023-03-12 03:25:56 -03:00
oobabooga bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk 3f7c3d6559
No need to set encoding on binary read 2023-03-11 22:10:57 -08:00
oobabooga 341e135036 Various fixes in chat mode 2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk 3baf5fc700
Load and save chat history in utf-8 2023-03-11 21:40:01 -08:00
oobabooga b0e8cb8c88 Various fixes in chat mode 2023-03-12 02:31:45 -03:00
unknown 433f6350bc Load and save character files in UTF-8 2023-03-11 21:23:05 -08:00
oobabooga 0bd5430988 Use 'with' statement to better handle streaming memory 2023-03-12 02:04:28 -03:00
oobabooga 37f0166b2d Fix memory leak in new streaming (second attempt) 2023-03-11 23:14:49 -03:00
oobabooga 92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga 2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan 96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open the web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
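
The "*Is typing...*" behavior fits the usual Gradio streaming pattern of yielding intermediate history states (a sketch; `generate_reply` is a hypothetical stand-in for the real generation call):

```python
def chat_generate(user_input, history):
    # Show the user's message right away, with a temporary reply.
    history.append([user_input, "*Is typing...*"])
    yield history
    reply = generate_reply(user_input)  # hypothetical blocking call
    history[-1][1] = reply
    yield history
```
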
Xan 33df4bd91f Merge remote-tracking branch 'upstream/main' 2023-03-11 22:40:47 +11:00
draff 28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff 001e638b47 Make it actually work 2023-03-10 23:28:19 +00:00
draff 804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
ItsLogic 9ba8156a70
remove unnecessary Path() 2023-03-10 22:33:58 +00:00
draff e6c631aea4 Replace --load-in-4bit with --llama-bits
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2- and 3-bit models as well. This commit also fixes a loading issue with .pt files that are not in the root of the models folder.
2023-03-10 21:36:45 +00:00
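
The argument itself is presumably declared along these lines (default and help text are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--llama-bits", type=int, default=0,
                    help="Load a pre-quantized LLaMA model with this "
                         "many bits (2, 3, or 4).")
args = parser.parse_args()
```
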
oobabooga 026d60bd34 Remove default preset that didn't do anything 2023-03-10 14:01:02 -03:00
oobabooga e9dbdafb14
Merge branch 'main' into pt-path-changes 2023-03-10 11:03:42 -03:00
oobabooga 706a03b2cb Minor changes 2023-03-10 11:02:25 -03:00
oobabooga de7dd8b6aa Add comments 2023-03-10 10:54:08 -03:00
oobabooga e461c0b7a0 Move the import to the top 2023-03-10 10:51:12 -03:00
deepdiffuser 9fbd60bf22 add no_split_module_classes to prevent tensor split error 2023-03-10 05:30:47 -08:00
deepdiffuser ab47044459 add multi-gpu support for 4bit gptq LLaMA 2023-03-10 04:52:45 -08:00
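
With Accelerate, keeping each decoder layer on a single device is exactly what `no_split_module_classes` is for; a sketch under the assumption of two GPUs plus CPU overflow (memory limits are illustrative):

```python
from accelerate import dispatch_model, infer_auto_device_map

def spread_across_gpus(model):
    # Keep each decoder layer whole on one device so its tensors are
    # never split mid-layer across GPUs.
    device_map = infer_auto_device_map(
        model,  # an already-quantized LLaMA instance
        max_memory={0: "4GiB", 1: "8GiB", "cpu": "32GiB"},
        no_split_module_classes=["LlamaDecoderLayer"],
    )
    return dispatch_model(model, device_map=device_map)
```
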
rohvani 2ac2913747 fix reference issue 2023-03-09 20:13:23 -08:00
rohvani 826e297b0e add llama-65b-4bit support & multiple pt paths 2023-03-09 18:31:32 -08:00
oobabooga 9849aac0f1 Don't show .pt models in the list 2023-03-09 21:54:50 -03:00
oobabooga 74102d5ee4 Insert to the path instead of appending 2023-03-09 20:51:22 -03:00
oobabooga 2965aa1625 Check if the .pt file exists 2023-03-09 20:48:51 -03:00
oobabooga 828a524f9a Add LLaMA 4-bit support 2023-03-09 15:50:26 -03:00
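
The .pt lookup across multiple paths plausibly looks like this (candidate locations are hypothetical; the repo's actual search order may differ):

```python
from pathlib import Path
from typing import Optional

def find_quantized_checkpoint(model_name: str) -> Optional[Path]:
    # Hypothetical candidate locations for the pre-quantized weights.
    candidates = [
        Path(f"models/{model_name}-4bit.pt"),
        Path(f"models/{model_name}/{model_name}-4bit.pt"),
    ]
    for path in candidates:
        if path.exists():
            return path
    return None
```
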
oobabooga 59b5f7a4b7 Improve usage of stopping_criteria 2023-03-08 12:13:40 -03:00
oobabooga add9330e5e Bug fixes 2023-03-08 11:26:29 -03:00
Xan 5648a41a27 Merge branch 'main' of https://github.com/xanthousm/text-generation-webui 2023-03-08 22:08:54 +11:00
Xan ad6b699503 Better TTS with autoplay
- Adds "still_streaming" to the shared module so extensions can know whether generation is complete
- Changed the TTS extension with new options:
   - Show text under the audio widget
   - Automatically play the audio once text generation finishes
   - Manage the generated wav files (only keep files for finished generations, optional max file limit)
   - [wip] Ability to change voice pitch and speed
- Added 'tensorboard' to requirements, since Python raised "tensorboard not found" errors after a fresh installation.
2023-03-08 22:02:17 +11:00
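
A hedged sketch of how a TTS extension can use such a flag through the `output_modifier` hook (`synthesize` is a hypothetical stand-in for the actual silero call):

```python
import modules.shared as shared  # the shared module the commit mentions

def output_modifier(text):
    if shared.still_streaming:
        return text  # skip synthesis until generation is complete
    audio_path = synthesize(text)  # hypothetical TTS call
    return f'<audio src="{audio_path}" controls autoplay></audio>\n\n{text}'
```
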
oobabooga 33fb6aed74 Minor bug fix 2023-03-08 03:08:16 -03:00
oobabooga ad2970374a Readability improvements 2023-03-08 03:00:06 -03:00
oobabooga 72d539dbff Better separate the FlexGen case 2023-03-08 02:54:47 -03:00
oobabooga 0e16c0bacb Remove redeclaration of a function 2023-03-08 02:50:49 -03:00