Commit graph

18 commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| oobabooga | 1d1d9e40cd | Add seed to settings | 2023-03-31 12:22:07 -03:00 |
| oobabooga | 5a6f939f05 | Change the preset here too | 2023-03-31 10:43:05 -03:00 |
| oobabooga | 55755e27b9 | Don't hardcode prompts in the settings dict/json | 2023-03-29 22:47:01 -03:00 |
| oobabooga | 1cb9246160 | Adapt to the new model names | 2023-03-29 21:47:36 -03:00 |
| oobabooga | c5ebcc5f7e | Change the default names (#518). Body: Update shared.py; Update settings-template.json | 2023-03-23 13:36:00 -03:00 |
| oobabooga | c753261338 | Disable stop_at_newline by default | 2023-03-18 10:55:57 -03:00 |
| oobabooga | 7d97287e69 | Update settings-template.json | 2023-03-17 11:41:12 -03:00 |
| oobabooga | 214dc6868e | Several QoL changes related to LoRA | 2023-03-17 11:24:52 -03:00 |
| oobabooga | 0ac562bdba | Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 | 2023-03-12 10:46:16 -03:00 |
| oobabooga | 169209805d | Model-aware prompts and presets | 2023-03-02 11:25:04 -03:00 |
| oobabooga | 7c2babfe39 | Rename greed to "generation attempts" | 2023-02-25 01:42:19 -03:00 |
| oobabooga | 7be372829d | Set chat prompt size in tokens | 2023-02-15 10:18:50 -03:00 |
| oobabooga | d0ea6d5f86 | Make the maximum history size in prompt unlimited by default | 2023-01-22 17:17:35 -03:00 |
| oobabooga | deacb96c34 | Change the pygmalion default context | 2023-01-22 00:49:59 -03:00 |
| oobabooga | 185587a33e | Add a history size parameter to the chat. Body: If too many messages are used in the prompt, the model gets really slow. It is useful to have the ability to limit this. | 2023-01-20 17:03:09 -03:00 |
| oobabooga | e61138bdad | Minor fixes | 2023-01-19 19:04:54 -03:00 |
| oobabooga | c6083f3dca | Fix the template | 2023-01-15 15:57:00 -03:00 |
| oobabooga | 88d67427e1 | Implement default settings customization using a json file | 2023-01-15 15:23:41 -03:00 |