Commit graph

39 commits

Author SHA1 Message Date
oobabooga c07215cc08 Improve the default Assistant character 2023-05-15 19:39:08 -03:00
oobabooga 3b886f9c9f Add chat-instruct mode (#2049) 2023-05-14 10:43:55 -03:00
oobabooga e283ddc559 Change how spaces are handled in continue/generation attempts 2023-05-12 12:50:29 -03:00
oobabooga bdf1274b5d Remove duplicate code 2023-05-10 01:34:04 -03:00
minipasila 334486f527 Added instruct-following template for Metharme (#1679) 2023-05-09 22:29:22 -03:00
Carl Kenner 814f754451 Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) 2023-05-09 20:37:31 -03:00
LaaZa 218bd64bd1 Add the option to not automatically load the selected model (#1762) 2023-05-09 15:52:35 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga b5260b24f1 Add support for custom chat styles (#1917) 2023-05-08 12:35:03 -03:00
oobabooga a777c058af Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
Wojtab 12212cf6be LLaVA support (#1487) 2023-04-23 20:32:22 -03:00
oobabooga 702fe92d42 Increase truncation_length_max value 2023-04-19 17:35:38 -03:00
oobabooga b937c9d8c2 Add skip_special_tokens checkbox for Dolly model (#1218) 2023-04-16 14:24:49 -03:00
oobabooga 8e31f2bad4 Automatically set wbits/groupsize/instruct based on model name (#1167) 2023-04-14 11:07:28 -03:00
oobabooga 388038fb8e Update settings-template.json 2023-04-12 18:30:43 -03:00
oobabooga 1566d8e344 Add model settings to the Models tab 2023-04-12 17:20:18 -03:00
oobabooga cacbcda208 Two new options: truncation length and ban eos token 2023-04-11 18:46:06 -03:00
catalpaaa 78bbc66fc4 allow custom stopping strings in all modes (#903) 2023-04-11 12:30:06 -03:00
oobabooga 85a7954823 Update settings-template.json 2023-04-10 16:53:07 -03:00
oobabooga 0f1627eff1 Don't treat Instruct mode histories as regular histories 2023-04-10 15:48:07 -03:00
    * They must now be saved/loaded manually
    * Also improved browser caching of pfps
    * Also changed the global default preset
oobabooga 4c9ed09270 Update settings template 2023-04-03 14:59:26 -03:00
oobabooga 1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
oobabooga 5a6f939f05 Change the preset here too 2023-03-31 10:43:05 -03:00
oobabooga 55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga 1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga c5ebcc5f7e Change the default names (#518) 2023-03-23 13:36:00 -03:00
    * Update shared.py
    * Update settings-template.json
oobabooga c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga 7d97287e69 Update settings-template.json 2023-03-17 11:41:12 -03:00
oobabooga 214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
oobabooga 0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 2023-03-12 10:46:16 -03:00
oobabooga 169209805d Model-aware prompts and presets 2023-03-02 11:25:04 -03:00
oobabooga 7c2babfe39 Rename greed to "generation attempts" 2023-02-25 01:42:19 -03:00
oobabooga 7be372829d Set chat prompt size in tokens 2023-02-15 10:18:50 -03:00
oobabooga d0ea6d5f86 Make the maximum history size in prompt unlimited by default 2023-01-22 17:17:35 -03:00
oobabooga deacb96c34 Change the pygmalion default context 2023-01-22 00:49:59 -03:00
oobabooga 185587a33e Add a history size parameter to the chat 2023-01-20 17:03:09 -03:00
    If too many messages are used in the prompt, the model gets really slow. It is useful to have the ability to limit this.
oobabooga e61138bdad Minor fixes 2023-01-19 19:04:54 -03:00
oobabooga c6083f3dca Fix the template 2023-01-15 15:57:00 -03:00
oobabooga 88d67427e1 Implement default settings customization using a json file 2023-01-15 15:23:41 -03:00