Commit graph

1924 commits

Author SHA1 Message Date
oobabooga bc58dc40bd Fix a minor bug 2023-06-06 12:57:13 -03:00
oobabooga f06a1387f0 Reorganize Models tab 2023-06-06 07:58:07 -03:00
oobabooga d49d299b67 Change a message 2023-06-06 07:54:56 -03:00
oobabooga f9b8bed953 Remove folder 2023-06-06 07:49:12 -03:00
oobabooga 90fdb8edc6 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-06-06 07:46:51 -03:00
oobabooga 7ed1e35fbf Reorganize Parameters tab in chat mode 2023-06-06 07:46:25 -03:00
oobabooga 00b94847da Remove softprompt support 2023-06-06 07:42:23 -03:00
bobzilla 643c44e975 Add ngrok shared URL ingress support (#1944) 2023-06-06 07:34:20 -03:00
oobabooga ccb4c9f178 Add some padding to chat box 2023-06-06 07:21:16 -03:00
oobabooga 0aebc838a0 Don't save the history for 'None' character 2023-06-06 07:21:07 -03:00
oobabooga 9f215523e2 Remove some unused imports 2023-06-06 07:05:46 -03:00
oobabooga b9bc9665d9 Remove some extra space 2023-06-06 07:01:37 -03:00
oobabooga 177ab7912a Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-06-06 07:01:00 -03:00
oobabooga 0f0108ce34 Never load the history for default character 2023-06-06 07:00:11 -03:00
oobabooga ae25b21d61 Improve instruct style in dark mode 2023-06-06 07:00:00 -03:00
matatonic 4a17a5db67 [extensions/openai] various fixes (#2533) 2023-06-06 01:43:04 -03:00
dependabot[bot] 97f3fa843f Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) 2023-06-05 23:45:58 -03:00
oobabooga 11f38b5c2b Add AutoGPTQ LoRA support 2023-06-05 23:32:57 -03:00
oobabooga 3a5cfe96f0 Increase chat_prompt_size_max 2023-06-05 17:37:37 -03:00
oobabooga 4e9937aa99 Bump gradio 2023-06-05 17:29:21 -03:00
pandego 0377e385e0 Update .gitignore (#2504) 2023-06-05 17:11:03 -03:00
    add .idea to git ignore
oobabooga eda224c92d Update README 2023-06-05 17:04:09 -03:00
oobabooga bef94b9ebb Update README 2023-06-05 17:01:13 -03:00
oobabooga 99d701994a Update GPTQ-models-(4-bit-mode).md 2023-06-05 15:55:00 -03:00
oobabooga f276d88546 Use AutoGPTQ by default for GPTQ models 2023-06-05 15:41:48 -03:00
oobabooga 632571a009 Update README 2023-06-05 15:16:06 -03:00
oobabooga 6a75bda419 Assign some 4096 seq lengths 2023-06-05 12:07:52 -03:00
oobabooga 9b0e95abeb Fix "regenerate" when "Start reply with" is set 2023-06-05 11:56:03 -03:00
oobabooga e61316ce0b Detect airoboros and Nous-Hermes 2023-06-05 11:52:13 -03:00
oobabooga 19f78684e6 Add "Start reply with" feature to chat mode 2023-06-02 13:58:08 -03:00
GralchemOz f7b07c4705 Fix the missing Chinese character bug (#2497) 2023-06-02 13:45:41 -03:00
oobabooga 28198bc15c Change some headers 2023-06-02 11:28:43 -03:00
oobabooga 5177cdf634 Change AutoGPTQ info 2023-06-02 11:19:44 -03:00
oobabooga 8e98633efd Add a description for chat_prompt_size 2023-06-02 11:13:22 -03:00
oobabooga 5a8162a46d Reorganize models tab 2023-06-02 02:24:15 -03:00
oobabooga d183c7d29e Fix streaming japanese/chinese characters 2023-06-02 02:09:52 -03:00
    Credits to matasonic for the idea
jllllll 5216117a63 Fix MacOS incompatibility in requirements.txt (#2485) 2023-06-02 01:46:16 -03:00
oobabooga 2f6631195a Add desc_act checkbox to the UI 2023-06-02 01:45:46 -03:00
LaaZa 9c066601f5 Extend AutoGPTQ support for any GPTQ model (#1668) 2023-06-02 01:33:55 -03:00
oobabooga b4ad060c1f Use cuda 11.7 instead of 11.8 2023-06-02 01:04:44 -03:00
oobabooga d0aca83b53 Add AutoGPTQ wheels to requirements.txt 2023-06-02 00:47:11 -03:00
oobabooga f344ccdddb Add a template for bluemoon 2023-06-01 14:42:12 -03:00
oobabooga aa83fc21d4 Update Low-VRAM-guide.md 2023-06-01 12:14:27 -03:00
oobabooga ee99a87330 Update README.md 2023-06-01 12:08:44 -03:00
oobabooga a83f9aa65b Update shared.py 2023-06-01 12:08:39 -03:00
oobabooga 146505a16b Update README.md 2023-06-01 12:04:58 -03:00
oobabooga 756e3afbcc Update llama.cpp-models.md 2023-06-01 12:04:31 -03:00
oobabooga 3347395944 Update README.md 2023-06-01 12:01:20 -03:00
oobabooga 74bf2f05b1 Update llama.cpp-models.md 2023-06-01 11:58:33 -03:00
oobabooga 90dc8a91ae Update llama.cpp-models.md 2023-06-01 11:57:57 -03:00