Commit graph (1516 commits)

Each entry lists the author, the short commit SHA-1, the commit message, and the date.
oobabooga de69a62004 Revert "UI: move "Character" dropdown to the main Chat tab"
This reverts commit 83534798b2.
2024-06-28 15:38:11 -07:00
oobabooga 38d58764db UI: remove unused gr.State variable from the Default tab 2024-06-28 15:17:44 -07:00
oobabooga da196707cf UI: improve the light theme a bit 2024-06-27 21:05:38 -07:00
oobabooga 9dbcb1aeea Small fix to make transformers 4.42 functional 2024-06-27 17:05:29 -07:00
oobabooga 8ec8bc0b85 UI: handle another edge case while streaming lists 2024-06-26 18:40:43 -07:00
oobabooga 0e138e4be1 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-06-26 18:30:08 -07:00
mefich a85749dcbe
Update models_settings.py: add default alpha_value, add proper compress_pos_emb for newer GGUFs (#6111) 2024-06-26 22:17:56 -03:00
oobabooga 5fe532a5ce UI: remove DRY info text
It was visible for loaders without DRY.
2024-06-26 15:33:11 -07:00
oobabooga b1187fc9a5 UI: prevent flickering while streaming lists / bullet points 2024-06-25 19:19:45 -07:00
oobabooga 3691451d00
Add back the "Rename chat" feature (#6161) 2024-06-25 22:28:58 -03:00
oobabooga ac3f92d36a UI: store chat history in the browser 2024-06-25 14:18:07 -07:00
oobabooga 46ca15cb79 Minor bug fixes after e7e1f5901e 2024-06-25 11:49:33 -07:00
oobabooga 83534798b2 UI: move "Character" dropdown to the main Chat tab 2024-06-25 11:25:57 -07:00
oobabooga 279cba607f UI: don't show an animation when updating the "past chats" menu 2024-06-25 11:10:17 -07:00
oobabooga 3290edfad9 Bug fix: force chat history to be loaded on launch 2024-06-25 11:06:05 -07:00
oobabooga e7e1f5901e
Prompts in the "past chats" menu (#6160) 2024-06-25 15:01:43 -03:00
oobabooga a43c210617
Improved past chats menu (#6158) 2024-06-25 00:07:22 -03:00
oobabooga 96ba53d916 Another fix after 57119c1b30 2024-06-24 15:51:12 -07:00
oobabooga 577a8cd3ee
Add TensorRT-LLM support (#5715) 2024-06-24 02:30:03 -03:00
oobabooga 536f8d58d4 Do not expose alpha_value to llama.cpp & rope_freq_base to transformers
To avoid confusion
2024-06-23 22:09:24 -07:00
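The two settings hidden by the commit above express the same NTK RoPE scaling in different units, which is presumably why each loader now sees only its native one. A minimal sketch of the conversion, assuming the common 10000 * alpha ** (64 / 63) relationship (the exact exponent depends on the model's head dimension):

```python
def alpha_to_rope_freq_base(alpha_value: float, original_base: float = 10000.0) -> float:
    # Assumed NTK-style mapping; 64/63 = d/(d-2) for a head dimension d of 128.
    return original_base * alpha_value ** (64 / 63)

# e.g. alpha_value = 2.5 -> rope_freq_base ~= 25366
```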
oobabooga b48ab482f8 Remove obsolete "gptq_for_llama_info" message 2024-06-23 22:05:19 -07:00
oobabooga 5e8dc56f8a Fix after previous commit 2024-06-23 21:58:28 -07:00
Louis Del Valle 57119c1b30
Update block_requests.py to resolve unexpected type error (500 error) (#5976) 2024-06-24 01:56:51 -03:00
CharlesCNorton 5993904acf
Fix several typos in the codebase (#6151) 2024-06-22 21:40:25 -03:00
GodEmperor785 2c5a9eb597
Change limits of RoPE scaling sliders in UI (#6142) 2024-06-19 21:42:17 -03:00
Guanghua Lu 229d89ccfb
Make logs more readable, no more \u7f16\u7801 (#6127) 2024-06-15 23:00:13 -03:00
Forkoz 1576227f16
Fix GGUFs with no BOS token present, mainly qwen2 models. (#6119)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-06-14 13:51:01 -03:00
oobabooga 10601850d9 Fix after previous commit 2024-06-13 19:54:12 -07:00
oobabooga 0f3a423de1 Alternative solution to "get next logits" deadlock (#6106) 2024-06-13 19:34:16 -07:00
oobabooga 386500aa37 Avoid unnecessary UI -> backend calls, to make it faster 2024-06-12 20:52:42 -07:00
Forkoz 1d79aa67cf
Fix flash-attn UI parameter to actually store true. (#6076) 2024-06-13 00:34:54 -03:00
Belladore 3abafee696
DRY sampler improvements (#6053) 2024-06-12 23:39:11 -03:00
oobabooga a36fa73071 Lint 2024-06-12 19:00:21 -07:00
oobabooga 2d196ed2fe Remove obsolete pre_layer parameter 2024-06-12 18:56:44 -07:00
Belladore 46174a2d33
Fix error when bos_token_id is None. (#6061) 2024-06-12 22:52:27 -03:00
Belladore a363cdfca1
Fix missing bos token for some models (including Llama-3) (#6050) 2024-05-27 09:21:30 -03:00
oobabooga 8df68b05e9 Remove MinPLogitsWarper (it's now a transformers built-in) 2024-05-27 05:03:30 -07:00
oobabooga 4f1e96b9e3 Downloader: Add --model-dir argument, respect --model-dir in the UI 2024-05-23 20:42:46 -07:00
oobabooga ad54d524f7 Revert "Fix stopping strings for llama-3 and phi (#6043)"
This reverts commit 5499bc9bc8.
2024-05-22 17:18:08 -07:00
oobabooga 5499bc9bc8
Fix stopping strings for llama-3 and phi (#6043) 2024-05-22 13:53:59 -03:00
oobabooga 9e189947d1 Minor fix after bd7cc4234d (thanks @belladoreai) 2024-05-21 10:37:30 -07:00
oobabooga ae86292159 Fix getting Phi-3-small-128k-instruct logits 2024-05-21 10:35:00 -07:00
oobabooga bd7cc4234d
Backend cleanup (#6025) 2024-05-21 13:32:02 -03:00
Philipp Emanuel Weidmann 852c943769
DRY: A modern repetition penalty that reliably prevents looping (#5677) 2024-05-19 23:53:47 -03:00
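For context on the DRY sampler merged above: the idea is to penalize any token that would extend a sequence the context has already produced once, with a penalty that grows exponentially in the length of the repeated match. A toy sketch under that reading of the PR (parameter names mirror its options; the real version is tensorized and honors sequence breakers):

```python
def dry_penalties(context: list[int], multiplier: float = 0.8,
                  base: float = 1.75, allowed_length: int = 2) -> dict[int, float]:
    """For each token, the penalty to subtract from its logit if sampling it
    would extend a repetition of something already in `context` (toy version)."""
    penalties: dict[int, float] = {}
    n = len(context)
    for end in range(n - 1):
        # Longest common suffix of context[:end + 1] and the full context.
        length = 0
        while length <= end and context[end - length] == context[n - 1 - length]:
            length += 1
        if length >= allowed_length:
            token = context[end + 1]  # the token that continued this pattern before
            penalty = multiplier * base ** (length - allowed_length)
            penalties[token] = max(penalties.get(token, 0.0), penalty)
    return penalties
```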
oobabooga 9f77ed1b98
--idle-timeout flag to unload the model if unused for N minutes (#6026) 2024-05-19 23:29:39 -03:00
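The --idle-timeout flag above presumably amounts to a watchdog that records the time of the last request and unloads the model once it goes stale; a self-contained sketch (all names hypothetical, not the repo's actual code):

```python
import threading
import time

last_use = time.monotonic()
_lock = threading.Lock()

def touch() -> None:
    """Call at the start of every generation request."""
    global last_use
    with _lock:
        last_use = time.monotonic()

def _watchdog(timeout_minutes: float, unload_model) -> None:
    while True:
        time.sleep(60)
        with _lock:
            idle = time.monotonic() - last_use
        if idle > timeout_minutes * 60:
            unload_model()  # free the weights and VRAM; reload on the next request

threading.Thread(target=_watchdog, args=(10, lambda: print("unloading")),
                 daemon=True).start()
```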
altoiddealer 818b4e0354
Let grammar escape backslashes (#5865) 2024-05-19 20:26:09 -03:00
Tisjwlf 907702c204
Fix gguf multipart file loading (#5857) 2024-05-19 20:22:09 -03:00
A0nameless0man 5cb59707f3
fix: grammar not supporting UTF-8 (#5900) 2024-05-19 20:10:39 -03:00
Samuel Wein b63dc4e325
UI: Warn user if they are trying to load a model from no path (#6006) 2024-05-19 20:05:17 -03:00
chr 6b546a2c8b
llama.cpp: increase the max threads from 32 to 256 (#5889) 2024-05-19 20:02:19 -03:00
oobabooga a38a37b3b3 llama.cpp: default n_gpu_layers to the maximum value for the model automatically 2024-05-19 10:57:42 -07:00
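Defaulting n_gpu_layers to the model's maximum most likely means reading the layer count out of the GGUF metadata; a sketch assuming `metadata` is the parsed key/value table of the file:

```python
def default_n_gpu_layers(metadata: dict) -> int:
    # GGUF stores the transformer depth under '<arch>.block_count',
    # e.g. 'llama.block_count'. Offloading block_count + 1 layers puts
    # the whole model (including the output layer) on the GPU.
    for key, value in metadata.items():
        if key.endswith('.block_count'):
            return int(value) + 1
    return 0  # unknown architecture: fall back to CPU-only
```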
oobabooga a4611232b7 Make --verbose output less spammy 2024-05-18 09:57:00 -07:00
oobabooga e9c9483171 Improve the logging messages while loading models 2024-05-03 08:10:44 -07:00
oobabooga e61055253c Bump llama-cpp-python to 0.2.69, add --flash-attn option 2024-05-03 04:31:22 -07:00
oobabooga 51fb766bea
Add back my llama-cpp-python wheels, bump to 0.2.65 (#5964) 2024-04-30 09:11:31 -03:00
oobabooga dfdb6fee22 Set llm_int8_enable_fp32_cpu_offload=True for --load-in-4bit
To allow for 32-bit CPU offloading (it's very slow).
2024-04-26 09:39:27 -07:00
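The option named above is a real transformers setting; roughly, the --load-in-4bit path would build its quantization config like this (model name is a placeholder):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_enable_fp32_cpu_offload=True,  # let overflow layers sit on the CPU in fp32
)
model = AutoModelForCausalLM.from_pretrained(
    "org/some-model",            # placeholder
    quantization_config=quant_config,
    device_map="auto",           # layers that don't fit in VRAM get offloaded (slowly)
)
```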
oobabooga 70845c76fb
Add back the max_updates_second parameter (#5937) 2024-04-26 10:14:51 -03:00
oobabooga 6761b5e7c6
Improved instruct style (with syntax highlighting & LaTeX rendering) (#5936) 2024-04-26 10:13:11 -03:00
oobabooga 4094813f8d Lint 2024-04-24 09:53:41 -07:00
oobabooga 64e2a9a0a7 Fix the Phi-3 template when used in the UI 2024-04-24 01:34:11 -07:00
oobabooga f0538efb99 Remove obsolete --tensorcores references 2024-04-24 00:31:28 -07:00
Colin f3c9103e04
Revert walrus operator for params['max_memory'] (#5878) 2024-04-24 01:09:14 -03:00
oobabooga 9b623b8a78
Bump llama-cpp-python to 0.2.64, use official wheels (#5921) 2024-04-23 23:17:05 -03:00
oobabooga f27e1ba302
Add a /v1/internal/chat-prompt endpoint (#5879) 2024-04-19 00:24:46 -03:00
oobabooga e158299fb4 Fix loading sharded GGUF models through llamacpp_HF 2024-04-11 14:50:05 -07:00
wangshuai09 fd4e46bce2
Add Ascend NPU support (basic) (#5541) 2024-04-11 18:42:20 -03:00
Ashley Kleynhans 70c637bf90
Fix saving of UI defaults to settings.yaml - Fixes #5592 (#5794) 2024-04-11 18:19:16 -03:00
oobabooga 3e3a7c4250 Bump llama-cpp-python to 0.2.61 & fix the crash 2024-04-11 14:15:34 -07:00
Victorivus c423d51a83
Fix issue #5783 for character images with transparency (#5827) 2024-04-11 02:23:43 -03:00
Alex O'Connell b94cd6754e
UI: Respect model and lora directory settings when downloading files (#5842) 2024-04-11 01:55:02 -03:00
oobabooga 17c4319e2d Fix loading command-r context length metadata 2024-04-10 21:39:59 -07:00
oobabooga cbd65ba767
Add a simple min_p preset, make it the default (#5836) 2024-04-09 12:50:16 -03:00
oobabooga d02744282b Minor logging change 2024-04-06 18:56:58 -07:00
oobabooga dd6e4ac55f Prevent double <BOS_TOKEN> with Command R+ 2024-04-06 13:14:32 -07:00
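Preventing a doubled <BOS_TOKEN> is usually a one-line guard after tokenization; an illustrative version (hypothetical helper, not the repo's actual code):

```python
def encode_prompt(tokenizer, prompt: str) -> list[int]:
    ids = tokenizer.encode(prompt)
    bos = getattr(tokenizer, "bos_token_id", None)
    # Command R+'s template bakes <BOS_TOKEN> into the prompt text, and the
    # tokenizer prepends one as well; keep only a single copy.
    if bos is not None and len(ids) >= 2 and ids[0] == bos and ids[1] == bos:
        ids = ids[1:]
    return ids
```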
oobabooga 1bdceea2d4 UI: Focus on the chat input after starting a new chat 2024-04-06 12:57:57 -07:00
oobabooga 168a0f4f67 UI: do not load the "gallery" extension by default 2024-04-06 12:43:21 -07:00
oobabooga 64a76856bd Metadata: Fix loading Command R+ template with multiple options 2024-04-06 07:32:17 -07:00
oobabooga 1b87844928 Minor fix 2024-04-05 18:43:43 -07:00
oobabooga 6b7f7555fc Logging message to make transformers loader a bit more transparent 2024-04-05 18:40:02 -07:00
oobabooga 0f536dd97d UI: Fix the "Show controls" action 2024-04-05 12:18:33 -07:00
oobabooga 308452b783 Bitsandbytes: load preconverted 4bit models without additional flags 2024-04-04 18:10:24 -07:00
oobabooga d423021a48
Remove CTransformers support (#5807) 2024-04-04 20:23:58 -03:00
oobabooga 13fe38eb27 Remove specialized code for gpt-4chan 2024-04-04 16:11:47 -07:00
oobabooga 9ab7365b56 Read rope_theta for DBRX model (thanks turboderp) 2024-04-01 20:25:31 -07:00
oobabooga db5f6cd1d8 Fix ExLlamaV2 loaders using unnecessary "bits" metadata 2024-03-30 21:51:39 -07:00
oobabooga 624faa1438 Fix ExLlamaV2 context length setting (closes #5750) 2024-03-30 21:33:16 -07:00
oobabooga 9653a9176c Minor improvements to Parameters tab 2024-03-29 10:41:24 -07:00
oobabooga 35da6b989d
Organize the parameters tab (#5767) 2024-03-28 16:45:03 -03:00
Yiximail 8c9aca239a
Fix prompt incorrectly set to empty when suffix is empty string (#5757) 2024-03-26 16:33:09 -03:00
oobabooga 2a92a842ce
Bump gradio to 4.23 (#5758) 2024-03-26 16:32:20 -03:00
oobabooga 49b111e2dd Lint 2024-03-17 08:33:23 -07:00
oobabooga d890c99b53 Fix StreamingLLM when content is removed from the beginning of the prompt 2024-03-14 09:18:54 -07:00
oobabooga d828844a6f Small fix: don't save truncation_length to settings.yaml
It should derive from model metadata or from a command-line flag.
2024-03-14 08:56:28 -07:00
oobabooga 2ef5490a36 UI: make light theme less blinding 2024-03-13 08:23:16 -07:00
oobabooga 40a60e0297 Convert attention_sink_size to int (closes #5696) 2024-03-13 08:15:49 -07:00
oobabooga edec3bf3b0 UI: avoid caching convert_to_markdown calls during streaming 2024-03-13 08:14:34 -07:00
oobabooga 8152152dd6 Small fix after 28076928ac 2024-03-11 19:56:35 -07:00
oobabooga 28076928ac
UI: Add a new "User description" field for user personality/biography (#5691) 2024-03-11 23:41:57 -03:00
oobabooga 63701f59cf UI: mention that n_gpu_layers > 0 is necessary for the GPU to be used 2024-03-11 18:54:15 -07:00
oobabooga 46031407b5 Increase the cache size of convert_to_markdown to 4096 2024-03-11 18:43:04 -07:00
oobabooga 9eca197409 Minor logging change 2024-03-11 16:31:13 -07:00
oobabooga afadc787d7 Optimize the UI by caching convert_to_markdown calls 2024-03-10 20:10:07 -07:00
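This optimization, together with the cache-size bump a few commits later (46031407b5, above), is plausibly plain memoization of the markdown converter:

```python
from functools import lru_cache

@lru_cache(maxsize=4096)  # the size the follow-up commit settles on
def convert_to_markdown(text: str) -> str:
    # Stand-in for the real, comparatively expensive markdown -> HTML pass.
    return text.replace("\n", "<br>")
```

Note the later fix (edec3bf3b0, above) that skips the cache while streaming: every partial message is unique, so caching there would only evict useful entries.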
oobabooga 056717923f Document StreamingLLM 2024-03-10 19:15:23 -07:00
oobabooga 15d90d9bd5 Minor logging change 2024-03-10 18:20:50 -07:00
oobabooga cf0697936a Optimize StreamingLLM by over 10x 2024-03-08 21:48:28 -08:00
oobabooga afb51bd5d6
Add StreamingLLM for llamacpp & llamacpp_HF (2nd attempt) (#5669) 2024-03-09 00:25:33 -03:00
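StreamingLLM, as added above, boils down to keeping a few initial "attention sink" tokens plus the most recent window, and evicting the middle from the KV cache instead of re-evaluating the whole prompt. A sketch of the trimming rule, using the attention_sink_size knob that appears elsewhere in this log:

```python
def trim_tokens(tokens: list[int], max_len: int, attention_sink_size: int = 5) -> list[int]:
    """Keep the first `attention_sink_size` tokens (the sinks) plus the most
    recent tokens; the evicted middle is dropped from the KV cache as well."""
    if len(tokens) <= max_len:
        return tokens
    return tokens[:attention_sink_size] + tokens[-(max_len - attention_sink_size):]
```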
oobabooga 549bb88975 Increase height of "Custom stopping strings" UI field 2024-03-08 12:54:30 -08:00
oobabooga 238f69accc Move "Command for chat-instruct mode" to the main chat tab (closes #5634) 2024-03-08 12:52:52 -08:00
oobabooga bae14c8f13 Right-truncate long chat completion prompts instead of left-truncating
Instructions are usually at the beginning of the prompt.
2024-03-07 08:50:24 -08:00
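The truncation change is simple enough to state in code; a sketch of before vs. after:

```python
def truncate_prompt(prompt_ids: list[int], max_len: int) -> list[int]:
    # Before: prompt_ids[-max_len:] kept the tail and cut the instructions.
    # After: keep the head, where chat-completion instructions usually live.
    return prompt_ids[:max_len]
```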
Bartowski 104573f7d4
Update cache_4bit documentation (#5649)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-03-07 13:08:21 -03:00
oobabooga 2ec1d96c91
Add cache_4bit option for ExLlamaV2 (#5645) 2024-03-06 23:02:25 -03:00
oobabooga 2174958362
Revert gradio to 3.50.2 (#5640) 2024-03-06 11:52:46 -03:00
oobabooga d61e31e182
Save the extensions after Gradio 4 (#5632) 2024-03-05 07:54:34 -03:00
oobabooga 63a1d4afc8
Bump gradio to 4.19 (#5522) 2024-03-05 07:32:28 -03:00
oobabooga f697cb4609 Move update_wizard_windows.sh to update_wizard_windows.bat (oops) 2024-03-04 19:26:24 -08:00
kalomaze cfb25c9b3f
Cubic sampling w/ curve param (#5551)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-03-03 13:22:21 -03:00
oobabooga 09b13acfb2 Perplexity evaluation: print to terminal after calculation is finished 2024-02-28 19:58:21 -08:00
oobabooga 4164e29416 Block the "To create a public link, set share=True" gradio message 2024-02-25 15:06:08 -08:00
oobabooga d34126255d Fix loading extensions with "-" in the name (closes #5557) 2024-02-25 09:24:52 -08:00
oobabooga 10aedc329f Logging: more readable messages when renaming chat histories 2024-02-22 07:57:06 -08:00
oobabooga faf3bf2503 Perplexity evaluation: make UI events more robust (attempt) 2024-02-22 07:13:22 -08:00
oobabooga ac5a7a26ea Perplexity evaluation: add some informative error messages 2024-02-21 20:20:52 -08:00
oobabooga 59032140b5 Fix CFG with llamacpp_HF (2nd attempt) 2024-02-19 18:35:42 -08:00
oobabooga c203c57c18 Fix CFG with llamacpp_HF 2024-02-19 18:09:49 -08:00
oobabooga ae05d9830f Replace {{char}}, {{user}} in the chat template itself 2024-02-18 19:57:54 -08:00
oobabooga 1f27bef71b
Move chat UI elements to the right on desktop (#5538) 2024-02-18 14:32:05 -03:00
oobabooga d6bd71db7f ExLlamaV2: fix loading when autosplit is not set 2024-02-17 12:54:37 -08:00
oobabooga af0bbf5b13 Lint 2024-02-17 09:01:04 -08:00
oobabooga a6730f88f7
Add --autosplit flag for ExLlamaV2 (#5524) 2024-02-16 15:26:10 -03:00
oobabooga 4039999be5 Autodetect llamacpp_HF loader when tokenizer exists 2024-02-16 09:29:26 -08:00
oobabooga 76d28eaa9e
Add a menu for customizing the instruction template for the model (#5521) 2024-02-16 14:21:17 -03:00
oobabooga 0e1d8d5601 Instruction template: make "Send to default/notebook" work without a tokenizer 2024-02-16 08:01:07 -08:00
oobabooga 44018c2f69
Add a "llamacpp_HF creator" menu (#5519) 2024-02-16 12:43:24 -03:00
oobabooga b2b74c83a6 Fix Qwen1.5 in llamacpp_HF 2024-02-15 19:04:19 -08:00
oobabooga 080f7132c0
Revert gradio to 3.50.2 (#5513) 2024-02-15 20:40:23 -03:00
oobabooga 7123ac3f77
Remove "Maximum UI updates/second" parameter (#5507) 2024-02-14 23:34:30 -03:00
DominikKowalczyk 33c4ce0720
Bump gradio to 4.19 (#5419)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-02-14 23:28:26 -03:00
oobabooga b16958575f Minor bug fix 2024-02-13 19:48:32 -08:00
oobabooga d47182d9d1
llamacpp_HF: do not use oobabooga/llama-tokenizer (#5499) 2024-02-14 00:28:51 -03:00
oobabooga 069ed7c6ef Lint 2024-02-13 16:05:41 -08:00
oobabooga 86c320ab5a llama.cpp: add a progress bar for prompt evaluation 2024-02-07 21:56:10 -08:00
oobabooga c55b8ce932 Improved random preset generation 2024-02-06 08:51:52 -08:00
oobabooga 4e34ae0587 Minor logging improvements 2024-02-06 08:22:08 -08:00
oobabooga 3add2376cd Better warpers logging 2024-02-06 07:09:21 -08:00
oobabooga 494cc3c5b0 Handle empty sampler priority field, use default values 2024-02-06 07:05:32 -08:00
oobabooga 775902c1f2 Sampler priority: better logging, always save to presets 2024-02-06 06:49:22 -08:00
oobabooga acfbe6b3b3 Minor doc changes 2024-02-06 06:35:01 -08:00
oobabooga 8ee3cea7cb Improve some log messages 2024-02-06 06:31:27 -08:00
oobabooga 8a6d9abb41 Small fixes 2024-02-06 06:26:27 -08:00
oobabooga 2a1063eff5 Revert "Remove non-HF ExLlamaV2 loader (#5431)"
This reverts commit cde000d478.
2024-02-06 06:21:36 -08:00
oobabooga 8c35fefb3b
Add custom sampler order support (#5443) 2024-02-06 11:20:10 -03:00
oobabooga 7301c7618f Minor change to Models tab 2024-02-04 21:49:58 -08:00
oobabooga f234fbe83f Improve a log message after previous commit 2024-02-04 21:44:53 -08:00
oobabooga 7073665a10
Truncate long chat completions inputs (#5439) 2024-02-05 02:31:24 -03:00
oobabooga 9033fa5eee Organize the Model tab 2024-02-04 19:30:22 -08:00
Forkoz 2a45620c85
Split by rows instead of layers for llama.cpp multi-gpu (#5435) 2024-02-04 23:36:40 -03:00
Badis Ghoubali 3df7e151f7
fix the n_batch slider (#5436) 2024-02-04 18:15:30 -03:00
oobabooga 4e188eeb80 Lint 2024-02-03 20:40:10 -08:00
oobabooga cde000d478
Remove non-HF ExLlamaV2 loader (#5431) 2024-02-04 01:15:51 -03:00
kalomaze b6077b02e4
Quadratic sampling (#5403)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-02-04 00:20:02 -03:00
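Quadratic sampling, as I read the PR, replaces temperature's linear rescaling with a quadratic pull toward the top logit; a toy warper under the assumed transform scores <- max - k * (max - scores)^2:

```python
import torch

def quadratic_warp(scores: torch.Tensor, smoothing_factor: float) -> torch.Tensor:
    """Toy quadratic sampling warper for a 1-D tensor of next-token logits.
    For moderate smoothing factors, small gaps to the top logit shrink
    (flattening the head of the distribution) while large gaps grow
    (suppressing the tail)."""
    top = scores.max()
    return top - smoothing_factor * (top - scores) ** 2
```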
Badis Ghoubali 40c7977f9b
Add roleplay.gbnf grammar (#5368) 2024-01-28 21:41:28 -03:00
sam-ngu c0bdcee646
added trust_remote_code to the DeepSpeed loader class init (#5237) 2024-01-26 11:10:57 -03:00
oobabooga 87dc421ee8
Bump exllamav2 to 0.0.12 (#5352) 2024-01-22 22:40:12 -03:00
oobabooga aad73667af Lint 2024-01-22 03:25:55 -08:00
lmg-anon db1da9f98d
Fix logprobs tokens in OpenAI API (#5339) 2024-01-22 08:07:42 -03:00
Forkoz 5c5ef4cef7
UI: change n_gpu_layers maximum to 256 for larger models. (#5262) 2024-01-17 17:13:16 -03:00
ilya sheprut 4d14eb8b82
LoRA: Fix error "Attempting to unscale FP16 gradients" when training (#5268) 2024-01-17 17:11:49 -03:00
oobabooga e055967974
Add prompt_lookup_num_tokens parameter (#5296) 2024-01-17 17:09:36 -03:00
oobabooga b3fc2cd887 UI: Do not save unchanged extension settings to settings.yaml 2024-01-10 03:48:30 -08:00
oobabooga 53dc1d8197 UI: Do not save unchanged settings to settings.yaml 2024-01-09 18:59:04 -08:00
oobabooga 89e7e107fc Lint 2024-01-09 16:27:50 -08:00
mamei16 bec4e0a1ce
Fix update event in refresh buttons (#5197) 2024-01-09 14:49:37 -03:00
oobabooga 4333d82b9d Minor bug fix 2024-01-09 06:55:18 -08:00
oobabooga 953343cced Improve the file saving/deletion menus 2024-01-09 06:33:47 -08:00
oobabooga 123f27a3c5 Load the nearest character after deleting a character
Instead of the first.
2024-01-09 06:24:27 -08:00
oobabooga b908ed318d Revert "Rename past chats -> chat history"
This reverts commit aac93a1fd6.
2024-01-09 05:26:07 -08:00
oobabooga 4ca82a4df9 Save light/dark theme on "Save UI defaults to settings.yaml" 2024-01-09 04:20:10 -08:00
oobabooga 7af50ede94 Reorder some buttons 2024-01-09 04:11:50 -08:00
oobabooga a9f49a7574 Confirm the chat history rename with enter 2024-01-09 04:00:53 -08:00
oobabooga 7bdd2118a2 Change some log messages when deleting files 2024-01-09 03:32:01 -08:00
oobabooga aac93a1fd6 Rename past chats -> chat history 2024-01-09 03:14:30 -08:00
oobabooga 615fa11af8 Move new chat button, improve history deletion handling 2024-01-08 21:22:37 -08:00
oobabooga 4f7e1eeafd
Past chat histories in a side bar on desktop (#5098)
Lots of room for improvement, but that's a start.
2024-01-09 01:57:29 -03:00
oobabooga 372ef5e2d8 Fix dynatemp parameters always visible 2024-01-08 19:42:31 -08:00
oobabooga 29c2693ea0
dynatemp_low, dynatemp_high, dynatemp_exponent parameters (#5209) 2024-01-08 23:28:35 -03:00
oobabooga c4e005efec Fix dropdown menus sometimes failing to refresh 2024-01-08 17:49:54 -08:00
oobabooga 9cd2106303 Revert "Add dynamic temperature to the random preset button"
This reverts commit 4365fb890f.
2024-01-08 16:46:24 -08:00
oobabooga 4365fb890f Add dynamic temperature to the random preset button 2024-01-07 13:08:15 -08:00
oobabooga 0d07b3a6a1
Add dynamic_temperature_low parameter (#5198) 2024-01-07 17:03:47 -03:00
oobabooga b8a0b3f925 Don't print torch tensors with --verbose 2024-01-07 10:35:55 -08:00
oobabooga cf820c69c5 Print generation parameters with --verbose (HF only) 2024-01-07 10:06:23 -08:00
oobabooga c4c7fc4ab3 Lint 2024-01-07 09:36:56 -08:00
kalomaze 48327cc5c4
Dynamic Temperature HF loader support (#5174)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-01-07 10:36:26 -03:00
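Dynamic temperature, in the form these commits converge on (dynatemp_low, dynatemp_high, dynatemp_exponent), is usually described as interpolating the temperature by the normalized entropy of the next-token distribution; a sketch under that assumption:

```python
import torch

def dynatemp(scores: torch.Tensor, dynatemp_low: float = 0.5,
             dynatemp_high: float = 1.5, dynatemp_exponent: float = 1.0) -> torch.Tensor:
    """Scale a 1-D tensor of logits by an entropy-dependent temperature:
    confident distributions sample cold, uncertain ones sample hot."""
    probs = torch.softmax(scores, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-10).log()).sum()
    max_entropy = torch.log(torch.tensor(float(scores.numel())))
    t = dynatemp_low + (dynatemp_high - dynatemp_low) * (entropy / max_entropy) ** dynatemp_exponent
    return scores / t
```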
oobabooga 248742df1c Save extension fields to settings.yaml on "Save UI defaults" 2024-01-04 20:33:42 -08:00
oobabooga c9d814592e Increase maximum temperature value to 5 2024-01-04 17:28:15 -08:00
oobabooga e4d724eb3f Fix cache_folder bug introduced in 37eff915d6 2024-01-04 07:49:40 -08:00
Alberto Cano 37eff915d6
Use --disk-cache-dir for all caches 2024-01-04 00:27:26 -03:00
Lounger 7965f6045e
Fix loading latest history for file names with dots (#5162) 2024-01-03 22:39:41 -03:00
AstrisCantCode b80e6365d0
Fix various bugs for LoRA training (#5161) 2024-01-03 20:42:20 -03:00
oobabooga 7cce88c403 Remove an unnecessary exception 2024-01-02 07:20:59 -08:00