Commit graph

1590 commits

Author SHA1 Message Date
oobabooga 60be76f0fc Revert gradio bump (gallery is broken) 2023-05-03 11:53:30 -03:00
Thireus ☠ 4883e20fa7 Fix openai extension script.py - TypeError: '_Environ' object is not callable (#1753) 2023-05-03 09:51:49 -03:00
oobabooga f54256e348 Rename no_mmap to no-mmap 2023-05-03 09:50:31 -03:00
oobabooga 875da16b7b Minor CSS improvements in chat mode 2023-05-02 23:38:51 -03:00
practicaldreamer e3968f7dd0 Fix Training Pad Token (#1678) 2023-05-02 23:16:08 -03:00
    Previously padded with the character "0" instead of token id 0 (<unk> in the case of llama)
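A toy illustration of the pad-token bug fixed above (all names here are hypothetical, not the repository's actual code): padding a list of token ids with the *character* "0" injects strings into an int list, while padding with the *token id* 0 (<unk> in LLaMA's vocabulary) is what training code expects.

```python
PAD_TOKEN_ID = 0  # <unk> for LLaMA-style tokenizers (assumption)

def pad_buggy(token_ids, target_len):
    # Buggy version: appends the string "0", mixing str into an int list.
    return token_ids + ["0"] * (target_len - len(token_ids))

def pad_fixed(token_ids, target_len):
    # Fixed version: appends the integer pad token id.
    return token_ids + [PAD_TOKEN_ID] * (target_len - len(token_ids))

ids = [5, 17, 42]
print(pad_buggy(ids, 5))  # [5, 17, 42, '0', '0'] -- strings, not token ids
print(pad_fixed(ids, 5))  # [5, 17, 42, 0, 0]
```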
Wojtab 80c2f25131 LLaVA: small fixes (#1664) 2023-05-02 23:12:22 -03:00
    * change multimodal projector to the correct one
    * remove reference to custom stopping strings from readme
    * fix stopping strings if tokenizer extension adds/removes tokens
    * add API example
    * LLaVA 7B just dropped, add to readme that there is no support for it currently
oobabooga c31b0f15a7 Remove some spaces 2023-05-02 23:07:07 -03:00
oobabooga 320fcfde4e Style/pep8 improvements 2023-05-02 23:05:38 -03:00
oobabooga ecd79caa68 Update Extensions.md 2023-05-02 22:52:32 -03:00
matatonic 7ac41b87df add openai compatible api (#1475) 2023-05-02 22:49:53 -03:00
oobabooga 4e09df4034 Only show extension in UI if it has a ui() function 2023-05-02 19:20:02 -03:00
oobabooga d016c38640 Bump gradio version 2023-05-02 19:19:33 -03:00
oobabooga 88cdf6ed3d Prevent websocket from disconnecting 2023-05-02 19:03:19 -03:00
Ahmed Said fbcd32988e added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649) 2023-05-02 18:25:28 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Carl Kenner 2f1a2846d1 Verbose should always print special tokens in input (#1707) 2023-05-02 01:24:56 -03:00
Alex "mcmonkey" Goodwin 0df0b2d0f9 optimize stopping strings processing (#1625) 2023-05-02 01:21:54 -03:00
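A hedged sketch of the kind of stopping-strings check optimized above (illustrative only; not the repository's actual implementation): generation stops once the decoded text ends with any stop string, and a streaming UI holds back any trailing characters that could still grow into one.

```python
def should_stop(text: str, stop_strings: list[str]) -> bool:
    # Stop generating once the output ends with any stop string.
    return any(text.endswith(s) for s in stop_strings)

def longest_partial_match(text: str, stop_strings: list[str]) -> int:
    # Number of trailing characters that could still grow into a stop
    # string; a streaming UI would hold back that many characters.
    best = 0
    for s in stop_strings:
        for k in range(1, len(s)):
            if text.endswith(s[:k]):
                best = max(best, k)
    return best

print(should_stop("Hello\nUser:", ["\nUser:"]))       # True
print(longest_partial_match("Hello\nUs", ["\nUser:"]))  # 3
```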
oobabooga e6a78c00f2 Update Docker.md 2023-05-02 00:51:10 -03:00
Tom Jobbins 3c67fc0362 Allow groupsize 1024, needed for larger models e.g. 30B to lower VRAM usage (#1660) 2023-05-02 00:46:26 -03:00
Lawrence M Stewart 78bd4d3a5c Update LLaMA-model.md (#1700) 2023-05-02 00:44:09 -03:00
    protobuf needs to be 3.20.x or lower
Dhaladom f659415170 fixed variable name "context" to "prompt" (#1716) 2023-05-02 00:43:40 -03:00
dependabot[bot] 280c2f285f Bump safetensors from 0.3.0 to 0.3.1 (#1720) 2023-05-02 00:42:39 -03:00
oobabooga 56b13d5d48 Bump llama-cpp-python version 2023-05-02 00:41:54 -03:00
Lőrinc Pap ee68ec9079 Update folder produced by download-model (#1601) 2023-04-27 12:03:02 -03:00
oobabooga 91745f63c3 Use Vicuna-v0 by default for Vicuna models 2023-04-26 17:45:38 -03:00
oobabooga 93e5c066ae Update RWKV Raven template 2023-04-26 17:31:03 -03:00
oobabooga c83210c460 Move the rstrips 2023-04-26 17:17:22 -03:00
oobabooga 1d8b8222e9 Revert #1579, apply the proper fix 2023-04-26 16:47:50 -03:00
    Apparently models dislike trailing spaces.
TiagoGF a941c19337 Fixing Vicuna text generation (#1579) 2023-04-26 16:20:27 -03:00
oobabooga d87ca8f2af LLaVA fixes 2023-04-26 03:47:34 -03:00
oobabooga 9c2e7c0fab Fix path on models.py 2023-04-26 03:29:09 -03:00
oobabooga a777c058af Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga a8409426d7 Fix bug in models.py 2023-04-26 01:55:40 -03:00
oobabooga 4c491aa142 Add Alpaca prompt with Input field 2023-04-25 23:50:32 -03:00
oobabooga 68ed73dd89 Make API extension print its exceptions 2023-04-25 23:23:47 -03:00
oobabooga f642135517 Make universal tokenizer, xformers, sdp-attention apply to monkey patch 2023-04-25 23:18:11 -03:00
oobabooga f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
oobabooga 15940e762e Fix missing initial space for LlamaTokenizer 2023-04-25 22:47:23 -03:00
Vincent Brouwers 92cdb4f22b Seq2Seq support (including FLAN-T5) (#1535) 2023-04-25 22:39:04 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
USBhost 95aa43b9c2 Update LLaMA download docs 2023-04-25 21:28:15 -03:00
Alex "mcmonkey" Goodwin 312cb7dda6 LoRA trainer improvements part 5 (#1546) 2023-04-25 21:27:30 -03:00
    * full dynamic model type support on modern peft
    * remove shuffle option
Wojtab 65beb51b0b fix returned dtypes for LLaVA (#1547) 2023-04-25 21:25:34 -03:00
oobabooga 9b272bc8e5 Monkey patch fixes 2023-04-25 21:20:26 -03:00
oobabooga da812600f4 Apply settings regardless of setup() function 2023-04-25 01:16:23 -03:00
da3dsoul ebca3f86d5 Apply the settings for extensions after import, but before setup() (#1484) 2023-04-25 00:23:11 -03:00
oobabooga b0ce750d4e Add spaces 2023-04-25 00:10:21 -03:00
oobabooga 1a0c12c6f2 Refactor text-generation.py a bit 2023-04-24 19:24:12 -03:00
oobabooga 2f4f124132 Remove obsolete function 2023-04-24 13:27:24 -03:00
oobabooga b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
oobabooga 0c32ae27cc Only load the default history if it's empty 2023-04-24 11:50:51 -03:00
MajdajkD c86e9a3372 fix websocket batching (#1511) 2023-04-24 03:51:32 -03:00