Commit graph

281 commits

Author SHA1 Message Date
oobabooga 507db0929d
Do not use empty user messages in chat mode
This allows the user to make the bot send a message by clicking on Generate with an empty input.
2023-03-24 17:22:22 -03:00
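The commit above skips empty user turns when assembling the chat prompt, so an empty Generate just asks the bot to continue. A minimal sketch of that filtering; the `build_prompt` helper and the message layout are illustrative, not the webui's actual code:

```python
def build_prompt(history, user_name="You", bot_name="Bot"):
    """Assemble a chat prompt, skipping empty user messages so that
    clicking Generate with an empty input box cues the bot to continue
    instead of inserting a blank "You:" turn."""
    lines = []
    for user_msg, bot_msg in history:
        if user_msg.strip():  # only include non-empty user turns
            lines.append(f"{user_name}: {user_msg}")
        if bot_msg.strip():
            lines.append(f"{bot_name}: {bot_msg}")
    lines.append(f"{bot_name}:")  # cue the model to reply
    return "\n".join(lines)
```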
oobabooga 6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00
oobabooga bfe960731f
Merge branch 'main' into fix/api-reload 2023-03-24 16:54:41 -03:00
oobabooga 8fad84abc2
Update extensions.py 2023-03-24 16:51:27 -03:00
oobabooga 4f5c2ce785
Fix chat_generation_attempts 2023-03-24 02:03:30 -03:00
oobabooga 8747c74339
Another missing import 2023-03-23 22:19:01 -03:00
oobabooga 7078d168c3
Missing import 2023-03-23 22:16:08 -03:00
oobabooga d1327f99f9
Fix broken callbacks.py 2023-03-23 22:12:24 -03:00
oobabooga b0abb327d8
Update LoRA.py 2023-03-23 22:02:09 -03:00
oobabooga bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
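Swapping LoRAs leaves the previous adapter's tensors allocated until they are garbage-collected, so switching typically involves an explicit cache clear. A hedged sketch of that step, guarded so it also runs on CPU-only machines or without torch installed:

```python
import gc

def clear_torch_cache():
    """Drop Python references first, then ask the CUDA allocator to
    release its cached blocks. Safe to call when no GPU is present."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing GPU-side to clear
```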
oobabooga 4578e88ffd
Stop the bot from talking for you in chat mode 2023-03-23 21:38:20 -03:00
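"Talking for you" means the model runs past its own turn and starts generating the user's next line. A minimal sketch of truncating the reply at the first impersonated user turn; the marker format is illustrative:

```python
def trim_reply(reply, user_name="You"):
    """Cut generated text at the first point where the model starts
    writing the user's next turn, e.g. a line beginning "You:"."""
    marker = f"\n{user_name}:"
    idx = reply.find(marker)
    return reply[:idx] if idx != -1 else reply
```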
oobabooga 9bf6ecf9e2
Fix LoRA device map (attempt) 2023-03-23 16:49:41 -03:00
oobabooga c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga 29bd41d453
Fix LoRA in CPU mode 2023-03-23 01:05:13 -03:00
oobabooga eac27f4f55
Make LoRAs work in 16-bit mode 2023-03-23 00:55:33 -03:00
oobabooga bfa81e105e
Fix FlexGen streaming 2023-03-23 00:22:14 -03:00
oobabooga de6a09dc7f
Properly separate the original prompt from the reply 2023-03-23 00:12:40 -03:00
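Decoder models return the prompt and the continuation as a single string, so separating them cleanly usually means slicing off the prompt by length rather than searching for reply text. A sketch under that assumption:

```python
def extract_reply(full_output, prompt):
    """Return only the newly generated text. Slicing by the prompt's
    length avoids bugs when the prompt's words also appear inside the
    reply itself."""
    if full_output.startswith(prompt):
        return full_output[len(prompt):]
    return full_output  # tokenizer round-trip altered the prompt text
```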
wywywywy 61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
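A "seed" parameter in such UIs conventionally uses -1 to mean "pick a fresh random seed each run". A sketch of that convention using only the standard library; the actual webui seeds torch's RNGs as well:

```python
import random

def apply_seed(seed):
    """Seed the RNG; -1 means choose a random seed and return it so
    the UI can display which seed was actually used."""
    if seed == -1:
        seed = random.randint(1, 2**31 - 1)
    random.seed(seed)
    return seed
```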
oobabooga 45b7e53565
Only catch proper Exceptions in the text generation function 2023-03-20 20:36:02 -03:00
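"Proper Exceptions" here means catching `Exception` instead of using a bare `except:`, so that `KeyboardInterrupt` and `SystemExit` (which subclass `BaseException`, not `Exception`) can still stop generation. A minimal sketch:

```python
def safe_generate(generate_fn, fallback=""):
    """Run a generation callable, swallowing ordinary errors but letting
    KeyboardInterrupt/SystemExit propagate: they subclass BaseException,
    so `except Exception` does not catch them."""
    try:
        return generate_fn()
    except Exception:
        return fallback
```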
oobabooga db4219a340
Update comments 2023-03-20 16:40:08 -03:00
oobabooga 7618f3fe8c
Add -gptq-preload for 4-bit offloading (#460)
This works on a 4GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy e96687b1d6
Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
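The behavior this commit describes: run extension input hooks first, and drop the user turn only if the text is still empty afterwards. A sketch with a hypothetical `modifiers` list standing in for the extension mechanism:

```python
def prepare_user_input(text, modifiers=()):
    """Apply extension-style input modifiers, then return None if the
    result is still empty so callers can omit the turn from the prompt."""
    for modify in modifiers:
        text = modify(text)
    return text if text.strip() else None
```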
oobabooga 9a3bed50c3
Attempt at fixing 4-bit with CPU offload 2023-03-20 15:11:56 -03:00
Vladimir Belitskiy ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value in them: they just take up space and make it look like some sort of pseudo-dialogue is still going on, rather than a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga 75a7a84ef2
Exception handling (#454)
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
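Accepting `--gpu-memory` in MiB gives finer VRAM control than whole-GiB steps. A hedged sketch of parsing such values into the `max_memory`-style mapping that Hugging Face loaders accept; the webui's actual CLI handling may differ:

```python
def parse_memory(value):
    """Turn "3500MiB", "8GiB", or a bare number (treated as GiB) into
    a string usable in an HF-style max_memory mapping."""
    value = str(value).strip()
    if value.lower().endswith("mib"):
        return f"{int(value[:-3])}MiB"
    if value.lower().endswith("gib"):
        return f"{int(value[:-3])}GiB"
    return f"{int(value)}GiB"  # bare numbers default to GiB

def build_max_memory(gpu_limits, cpu_limit):
    """gpu_limits: one entry per visible GPU, in device order."""
    max_memory = {i: parse_memory(v) for i, v in enumerate(gpu_limits)}
    max_memory["cpu"] = parse_memory(cpu_limit)
    return max_memory
```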
oobabooga a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
Maya acdbd6b708 Check if app should display extensions ui 2023-03-19 13:31:21 +00:00
Maya 81c9d130f2 Fix global 2023-03-19 13:25:49 +00:00
Maya 099d7a844b Add setup method to extensions 2023-03-19 13:22:24 +00:00
oobabooga c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga 7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk 7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
Wojtek Kowaluk 30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
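MPS support on Apple silicon usually reduces to a device-selection order of CUDA, then MPS, then CPU. A guarded sketch that also runs where torch is absent:

```python
def pick_device():
    """Prefer CUDA, then Apple's Metal backend (MPS), then CPU."""
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"
```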
oobabooga 9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga 9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga 3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga 614dad0075 Remove unused import 2023-03-17 11:43:11 -03:00
oobabooga a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga 29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga 214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
askmyteapot 53b6a66beb
Update GPTQ_Loader.py
Correcting decoder layer for renamed class.
2023-03-17 18:34:13 +10:00
oobabooga 0cecfc684c Add files 2023-03-16 21:35:53 -03:00
oobabooga 104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga ee164d1821 Don't split the layers in 8-bit mode by default 2023-03-16 18:22:16 -03:00
oobabooga e085cb4333 Small changes 2023-03-16 13:34:23 -03:00
awoo 83cb20aad8 Add support for --gpu-memory with --load-in-8bit 2023-03-16 18:42:53 +03:00