Commit graph

344 commits

Author SHA1 Message Date
Thomas Antony 7fa5d96c22 Update to use new llamacpp API 2023-03-30 11:23:05 +01:00
Thomas Antony 79fa2b6d7e Add support for alpaca 2023-03-30 11:23:04 +01:00
Thomas Antony a5f5736e74 Add to text_generation.py 2023-03-30 11:22:38 +01:00
Thomas Antony 7745faa7bb Add llamacpp to models.py 2023-03-30 11:22:37 +01:00
Thomas Antony 7a562481fa Initial version of llamacpp_model.py 2023-03-30 11:22:07 +01:00
oobabooga a21e580782 Move an import 2023-03-29 22:50:58 -03:00
oobabooga 55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga 1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga 58349f44a0 Handle training exception for unsupported models 2023-03-29 11:55:34 -03:00
oobabooga a6d0373063 Fix training dataset loading #636 2023-03-29 11:48:17 -03:00
oobabooga 1edfb96778 Fix loading extensions from within the interface 2023-03-28 23:27:02 -03:00
oobabooga 304f812c63 Gracefully handle CUDA out of memory errors with streaming 2023-03-28 19:20:50 -03:00
oobabooga 010b259dde Update documentation 2023-03-28 17:46:00 -03:00
oobabooga 0bec15ebcd Reorder imports 2023-03-28 17:34:15 -03:00
Maya Eary 41ec682834 Disable kernel threshold for gpt-j 2023-03-28 22:45:38 +03:00
Maya 1ac003d41c Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2 2023-03-28 22:30:39 +03:00
Maya Eary 1c075d8d21 Fix typo 2023-03-28 20:43:50 +03:00
Maya Eary c8207d474f Generalized load_quantized 2023-03-28 20:38:55 +03:00
oobabooga 8579fe51dd Fix new lines in the HTML tab 2023-03-28 12:59:34 -03:00
Alex "mcmonkey" Goodwin e817fac542 better defaults 2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin 2e08af4edf implement initial Raw Text File Input
also bump default Rank & Alpha for values that will make sense in testing if you don't know what you're doing and leave the defaults.
2023-03-27 22:15:32 -07:00
Alex "mcmonkey" Goodwin b749952fe3 change number minimums to 0
gradio calculates 'step' relative to the minimum, so at '1' the step values were all offset awkwardly. 0 isn't valid, but, uh, just don't slam the slider to the left.
2023-03-27 21:22:43 -07:00
Alex "mcmonkey" Goodwin ec6224f556 use new shared.args.lora_dir 2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin 31f04dc615 Merge branch 'main' into add-train-lora-tab 2023-03-27 20:03:30 -07:00
oobabooga 53da672315 Fix FlexGen 2023-03-27 23:44:21 -03:00
oobabooga ee95e55df6 Fix RWKV tokenizer 2023-03-27 23:42:29 -03:00
oobabooga 036163a751 Change description 2023-03-27 23:39:26 -03:00
oobabooga 005f552ea3 Some simplifications 2023-03-27 23:29:52 -03:00
oobabooga fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir 2023-03-27 23:16:44 -03:00
Alex "mcmonkey" Goodwin 8a97f6ba29 corrections per the PR comments 2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin 7fab7ea1b6 couple missed camelCases 2023-03-27 18:19:06 -07:00
Alex "mcmonkey" Goodwin 6368dad7db Fix camelCase to snake_case to match repo format standard 2023-03-27 18:17:42 -07:00
oobabooga 2f0571bfa4 Small style changes 2023-03-27 21:24:39 -03:00
oobabooga c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab 2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin 9ced75746d add total time estimate 2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin 16ea4fc36d interrupt button 2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin 8fc723fc95 initial progress tracker in UI 2023-03-27 10:25:08 -07:00
oobabooga 48a6c9513e Merge pull request #572 from clusterfudge/issues/571 2023-03-27 14:06:38 -03:00
Potential fix for issues/571
Alex "mcmonkey" Goodwin c07bcd0850 add some outputs to indicate progress updates (sorta)
Actual progressbar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga af65c12900 Change Stop button behavior 2023-03-27 13:23:59 -03:00
Alex "mcmonkey" Goodwin d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean 2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga 3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga 1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga 49c10c5570 Add support for the latest GPTQ models with group-size (#530) 2023-03-26 00:11:33 -03:00
**Warning: old 4-bit weights will not work anymore!**
See here for how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
Sean Fitzgerald 0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin 8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin 5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin 7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00