Commit graph

537 commits

Author SHA1 Message Date
oobabooga e6959a5d9a Update README.md 2023-05-11 09:54:22 -03:00
oobabooga dcfd09b61e Update README.md 2023-05-11 09:49:57 -03:00
oobabooga 7a49ceab29 Update README.md 2023-05-11 09:42:39 -03:00
oobabooga 57dc44a995 Update README.md 2023-05-10 12:48:25 -03:00
oobabooga 181b102521 Update README.md 2023-05-10 12:09:47 -03:00
Carl Kenner 814f754451 Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) 2023-05-09 20:37:31 -03:00
Wojtab e9e75a9ec7 Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) 2023-05-09 20:18:02 -03:00
oobabooga 00e333d790 Add MOSS support 2023-05-04 23:20:34 -03:00
oobabooga b6ff138084 Add --checkpoint argument for GPTQ 2023-05-04 15:17:20 -03:00
Ahmed Said fbcd32988e added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649) 2023-05-02 18:25:28 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
oobabooga b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
eiery 78d1977ebf add n_batch support for llama.cpp (#1115) 2023-04-24 03:46:18 -03:00
Andy Salerno 654933c634 New universal API with streaming/blocking endpoints (#990) 2023-04-23 15:52:43 -03:00
    Previous title: Add api_streaming extension and update api-example-stream to use it
    * Merge with latest main
    * Add parameter capturing encoder_repetition_penalty
    * Change some defaults, minor fixes
    * Add --api, --public-api flags
    * remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.
    * Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'
    * Update the API examples
    * Change a comment
    * Update README
    * Remove the gradio API
    * Remove unused import
    * Minor change
    * Remove unused import
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga 7438f4f6ba Change GPTQ triton default settings 2023-04-22 12:27:30 -03:00
oobabooga fe02281477 Update README.md 2023-04-22 03:05:00 -03:00
oobabooga 038fa3eb39 Update README.md 2023-04-22 02:46:07 -03:00
oobabooga 505c2c73e8 Update README.md 2023-04-22 00:11:27 -03:00
oobabooga f8da9a0424 Update README.md 2023-04-18 20:25:08 -03:00
oobabooga c3f6e65554 Update README.md 2023-04-18 20:23:31 -03:00
oobabooga eb15193327 Update README.md 2023-04-18 13:07:08 -03:00
oobabooga 7fbfc489e2 Update README.md 2023-04-18 12:56:37 -03:00
oobabooga f559f9595b Update README.md 2023-04-18 12:54:09 -03:00
loeken 89e22d4d6a added windows/docker docs (#1027) 2023-04-18 12:47:43 -03:00
oobabooga 8275989f03 Add new 1-click installers for Linux and MacOS 2023-04-18 02:40:36 -03:00
oobabooga 301c687c64 Update README.md 2023-04-17 11:25:26 -03:00
oobabooga 89bc540557 Update README 2023-04-17 10:55:35 -03:00
practicaldreamer 3961f49524 Add note about --no-fused_mlp ignoring --gpu-memory (#1301) 2023-04-17 10:46:37 -03:00
sgsdxzy b57ffc2ec9 Update to support GPTQ triton commit c90adef (#1229) 2023-04-17 01:11:18 -03:00
oobabooga 3e5cdd005f Update README.md 2023-04-16 23:28:59 -03:00
oobabooga 39099663a0 Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
oobabooga 705121161b Update README.md 2023-04-16 20:03:03 -03:00
oobabooga 50c55a51fc Update README.md 2023-04-16 19:22:31 -03:00
Forkoz c6fe1ced01 Add ChatGLM support (#1256) 2023-04-16 19:15:03 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga c96529a1b3 Update README.md 2023-04-16 17:00:03 -03:00
oobabooga 004f275efe Update README.md 2023-04-14 23:36:56 -03:00
oobabooga 83964ed354 Update README.md 2023-04-14 23:33:54 -03:00
oobabooga c41037db68 Update README.md 2023-04-14 23:32:39 -03:00
v0xie 9d66957207 Add --listen-host launch option (#1122) 2023-04-13 21:35:08 -03:00
oobabooga 403be8a27f Update README.md 2023-04-13 21:23:35 -03:00
Light 97e67d136b Update README.md 2023-04-13 21:00:58 +08:00
Light 15d5a043f2 Merge remote-tracking branch 'origin/main' into triton 2023-04-13 19:38:51 +08:00
oobabooga 7dfbe54f42 Add --model-menu option 2023-04-12 21:24:26 -03:00
MarlinMr 47daf891fe Link to developer.nvidia.com (#1104) 2023-04-12 15:56:42 -03:00
Light f3591ccfa1 Keep minimal change. 2023-04-12 23:26:06 +08:00
oobabooga 461ca7faf5 Mention that pull request reviews are welcome 2023-04-11 23:12:48 -03:00
oobabooga 749c08a4ff Update README.md 2023-04-11 14:42:10 -03:00
IggoOnCode 09d8119e3c Add CPU LoRA training (#938) 2023-04-10 17:29:00 -03:00
    (It's very slow)
oobabooga f035b01823 Update README.md 2023-04-10 16:20:23 -03:00
Jeff Lefebvre b7ca89ba3f Mention that build-essential is required (#1013) 2023-04-10 16:19:10 -03:00
MarkovInequality 992663fa20 Added xformers support to Llama (#950) 2023-04-09 23:08:40 -03:00
oobabooga bce1b7fbb2 Update README.md 2023-04-09 02:19:40 -03:00
oobabooga f7860ce192 Update README.md 2023-04-09 02:19:17 -03:00
oobabooga ece8ed2c84 Update README.md 2023-04-09 02:18:42 -03:00
MarlinMr ec979cd9c4 Use updated docker compose (#877) 2023-04-07 10:48:47 -03:00
MarlinMr 2c0018d946 Cosmetic change of README.md (#878) 2023-04-07 10:47:10 -03:00
oobabooga 848c4edfd5 Update README.md 2023-04-06 22:52:35 -03:00
oobabooga e047cd1def Update README 2023-04-06 22:50:58 -03:00
loeken 08b9d1b23a creating a layer with Docker/docker-compose (#633) 2023-04-06 22:46:04 -03:00
oobabooga d9e7aba714 Update README.md 2023-04-06 13:42:24 -03:00
oobabooga eec3665845 Add instructions for updating requirements 2023-04-06 13:24:01 -03:00
oobabooga 4a28f39823 Update README.md 2023-04-06 02:47:27 -03:00
eiery 19b516b11b fix link to streaming api example (#803) 2023-04-05 14:50:23 -03:00
oobabooga 7617ed5bfd Add AMD instructions 2023-04-05 14:42:58 -03:00
oobabooga 770ef5744f Update README 2023-04-05 14:38:11 -03:00
oobabooga 65d8a24a6d Show profile pictures in the Character tab 2023-04-04 22:28:49 -03:00
oobabooga b24147c7ca Document --pre_layer 2023-04-03 17:34:25 -03:00
oobabooga 525f729b8e Update README.md 2023-04-02 21:12:41 -03:00
oobabooga 53084241b4 Update README.md 2023-04-02 20:50:06 -03:00
oobabooga b6f817be45 Update README.md 2023-04-01 14:54:10 -03:00
oobabooga 88fa38ac01 Update README.md 2023-04-01 14:49:03 -03:00
oobabooga 4b57bd0d99 Update README.md 2023-04-01 14:38:04 -03:00
oobabooga b53bec5a1f Update README.md 2023-04-01 14:37:35 -03:00
oobabooga 9160586c04 Update README.md 2023-04-01 14:31:10 -03:00
oobabooga 7ec11ae000 Update README.md 2023-04-01 14:15:19 -03:00
oobabooga 012f4f83b8 Update README.md 2023-04-01 13:55:15 -03:00
oobabooga 2c52310642 Add --threads flag for llama.cpp 2023-03-31 21:18:05 -03:00
oobabooga cbfe0b944a Update README.md 2023-03-31 17:49:11 -03:00
oobabooga 5c4e44b452 llama.cpp documentation 2023-03-31 15:20:39 -03:00
oobabooga d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
oobabooga 41b58bc47e Update README.md 2023-03-29 11:02:29 -03:00
oobabooga 3b4447a4fe Update README.md 2023-03-29 02:24:11 -03:00
oobabooga 5d0b83c341 Update README.md 2023-03-29 02:22:19 -03:00
oobabooga c2a863f87d Mention the updated one-click installer 2023-03-29 02:11:51 -03:00
oobabooga 010b259dde Update documentation 2023-03-28 17:46:00 -03:00
oobabooga 036163a751 Change description 2023-03-27 23:39:26 -03:00
oobabooga 30585b3e71 Update README 2023-03-27 23:35:01 -03:00
oobabooga 49c10c5570 Add support for the latest GPTQ models with group-size (#530) 2023-03-26 00:11:33 -03:00
    **Warning: old 4-bit weights will not work anymore!**
    See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
oobabooga 70f9565f37 Update README.md 2023-03-25 02:35:30 -03:00
oobabooga 04417b658b Update README.md 2023-03-24 01:40:43 -03:00
oobabooga 143b5b5edf Mention one-click-bandaid in the README 2023-03-23 23:28:50 -03:00
oobabooga 6872ffd976 Update README.md 2023-03-20 16:53:14 -03:00
oobabooga dd4374edde Update README 2023-03-19 20:15:15 -03:00
oobabooga 9378754cc7 Update README 2023-03-19 20:14:50 -03:00
oobabooga 7ddf6147ac Update README.md 2023-03-19 19:25:52 -03:00
oobabooga ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
oobabooga 0cbe2dd7e9 Update README.md 2023-03-18 12:24:54 -03:00
oobabooga d2a7fac8ea Use pip instead of conda for pytorch 2023-03-18 11:56:04 -03:00
oobabooga a0b1a30fd5 Specify torchvision/torchaudio versions 2023-03-18 11:23:56 -03:00
oobabooga a163807f86 Update README.md 2023-03-18 03:07:27 -03:00
oobabooga a7acfa4893 Update README.md 2023-03-17 22:57:46 -03:00
oobabooga dc35861184 Update README.md 2023-03-17 21:05:17 -03:00
oobabooga f2a5ca7d49 Update README.md 2023-03-17 20:50:27 -03:00
oobabooga 8c8286b0e6 Update README.md 2023-03-17 20:49:40 -03:00
oobabooga 0c05e65e5c Update README.md 2023-03-17 20:25:42 -03:00
oobabooga 66e8d12354 Update README.md 2023-03-17 19:59:37 -03:00
oobabooga 9a871117d7 Update README.md 2023-03-17 19:52:22 -03:00
oobabooga d4f38b6a1f Update README.md 2023-03-17 18:57:48 -03:00
oobabooga ad7c829953 Update README.md 2023-03-17 18:55:01 -03:00
oobabooga 4426f941e0 Update the installation instructions. Tldr use WSL 2023-03-17 18:51:07 -03:00
oobabooga ebef4a510b Update README 2023-03-17 11:58:45 -03:00
oobabooga cdfa787bcb Update README 2023-03-17 11:53:28 -03:00
oobabooga dd1c5963da Update README 2023-03-16 12:45:27 -03:00
oobabooga 445ebf0ba8 Update README.md 2023-03-15 20:06:46 -03:00
oobabooga 09045e4bdb Add WSL guide 2023-03-15 19:42:06 -03:00
oobabooga 128d18e298 Update README.md 2023-03-14 17:57:25 -03:00
oobabooga 1236c7f971 Update README.md 2023-03-14 17:56:15 -03:00
oobabooga b419dffba3 Update README.md 2023-03-14 17:55:35 -03:00
oobabooga 87192e2813 Update README 2023-03-14 08:02:21 -03:00
oobabooga 3da73e409f Merge branch 'main' into Zerogoki00-opt4-bit 2023-03-14 07:50:36 -03:00
Ayanami Rei b746250b2f Update README 2023-03-13 20:20:45 +03:00
oobabooga 66b6971b61 Update README 2023-03-13 12:44:18 -03:00
oobabooga ddea518e0f Document --auto-launch 2023-03-13 12:43:33 -03:00
oobabooga d97bfb8713 Update README.md 2023-03-13 12:39:33 -03:00
oobabooga bdff37f0bb Update README.md 2023-03-13 11:05:51 -03:00
oobabooga d168b6e1f7 Update README.md 2023-03-12 17:54:07 -03:00
oobabooga 54e8f0c31f Update README.md 2023-03-12 16:58:00 -03:00
oobabooga 0a4d8a5cf6 Delete README.md 2023-03-12 16:43:06 -03:00
oobabooga 0b86ac38b1 Initial commit 2023-03-12 16:40:10 -03:00
oobabooga 3375eaece0 Update README 2023-03-12 15:01:32 -03:00
oobabooga 17210ff88f Update README.md 2023-03-12 14:31:24 -03:00
oobabooga 4dc1d8c091 Update README.md 2023-03-12 12:46:53 -03:00
oobabooga 89e9493509 Update README 2023-03-12 11:23:20 -03:00
draff 28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff 804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
draff e6c631aea4 Replace --load-in-4bit with --llama-bits 2023-03-10 21:36:45 +00:00
    Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2 and 3 bit models as well. This commit also fixes a loading issue with .pt files which are not in the root of the models folder
oobabooga 7c3d1b43c1 Merge pull request #204 from MichealC0/patch-1 2023-03-09 23:04:09 -03:00
    Update README.md
oobabooga 1a3d25f75d Merge pull request #206 from oobabooga/llama-4bit 2023-03-09 21:07:32 -03:00
    Add LLaMA 4-bit support
oobabooga eb0cb9b6df Update README 2023-03-09 20:53:52 -03:00
oobabooga d41e3c233b Update README.md 2023-03-09 18:02:44 -03:00
oobabooga 33414478bf Update README 2023-03-09 11:13:03 -03:00
oobabooga e7adf5fe4e Add Contrastive Search preset #197 2023-03-09 10:27:11 -03:00
Chimdumebi Nebolisa 4dd14dcab4 Update README.md 2023-03-09 10:22:09 +01:00
oobabooga b4bfd87319 Update README.md 2023-03-06 20:55:01 -03:00
oobabooga d0e8780555 Update README.md 2023-03-06 20:17:59 -03:00
oobabooga 18ccfcd7fe Update README.md 2023-03-06 20:15:55 -03:00
oobabooga 91823e1ed1 Update README.md 2023-03-06 16:48:31 -03:00
oobabooga aa7ce0665e Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-06 10:58:41 -03:00
oobabooga bf56b6c1fb Load settings.json without the need for --settings settings.json 2023-03-06 10:57:45 -03:00
    This is for setting UI defaults
oobabooga 2de9f122cd Update README.md 2023-03-06 09:34:49 -03:00
oobabooga 736f61610b Update README 2023-03-04 01:33:52 -03:00
oobabooga 76378c6cc2 Update README 2023-03-02 11:27:15 -03:00
oobabooga f4b130e2bd Update README.md 2023-02-27 15:15:45 -03:00
oobabooga c183d2917c Update README.md 2023-02-26 00:59:07 -03:00
oobabooga cfe010b244 Update README.md 2023-02-26 00:54:37 -03:00
oobabooga 87d9f3e329 Update README.md 2023-02-26 00:54:19 -03:00
oobabooga 955997a90b Update README.md 2023-02-26 00:54:07 -03:00
oobabooga c593dfa827 Update README.md 2023-02-25 18:57:34 -03:00
oobabooga 7872a64f78 Update README.md 2023-02-25 18:56:43 -03:00
oobabooga 88cfc84ddb Update README 2023-02-25 01:33:26 -03:00
oobabooga 0b90e0b3b6 Update README.md 2023-02-24 12:01:07 -03:00
oobabooga 1a23e6d185 Add Pythia to README 2023-02-24 11:38:01 -03:00
oobabooga f4f508c8e2 Update README.md 2023-02-24 09:03:09 -03:00
oobabooga ced5d9ab04 Update README.md 2023-02-23 10:04:07 -03:00
oobabooga b18071330f Update README.md 2023-02-23 01:32:05 -03:00
oobabooga b4a7f5fa70 Update README.md 2023-02-22 01:54:12 -03:00
oobabooga e195377050 Deprecate torch dumps, move to safetensors (they load even faster) 2023-02-20 15:03:19 -03:00
oobabooga 58520a1f75 Update README.md 2023-02-20 12:44:31 -03:00
oobabooga 05e9da0c12 Update README.md 2023-02-18 22:34:51 -03:00
oobabooga b1add0e586 Update README.md 2023-02-18 22:32:16 -03:00
oobabooga 348acdf626 Mention deepspeed in the README 2023-02-16 17:29:48 -03:00
oobabooga 05b53e4626 Update README 2023-02-15 14:43:34 -03:00
oobabooga ed73d00bd5 Update README 2023-02-15 14:43:13 -03:00
oobabooga 30fcb26737 Update README 2023-02-15 14:42:41 -03:00
oobabooga 5ee9283cae Mention BLIP 2023-02-15 13:53:38 -03:00
oobabooga 8d3b3959e7 Document --picture option 2023-02-15 13:50:18 -03:00
oobabooga 01e5772302 Update README.md 2023-02-14 13:06:26 -03:00
oobabooga 210c918199 Update README.md 2023-02-13 21:49:19 -03:00
oobabooga b7ddcab53a Update README.md 2023-02-13 15:52:49 -03:00
oobabooga 939e9d00a2 Update README.md 2023-02-12 00:47:03 -03:00
oobabooga bf9dd8f8ee Add --text-only option to the download script 2023-02-12 00:42:56 -03:00
oobabooga 42cc307409 Update README.md 2023-02-12 00:34:55 -03:00
oobabooga 144857acfe Update README 2023-02-11 14:49:11 -03:00
oobabooga 0dd1409f24 Add penalty_alpha parameter (contrastive search) 2023-02-11 14:48:12 -03:00
oobabooga 8aafb55693 1-click installer now also works for AMD GPUs 2023-02-11 14:24:47 -03:00
    (I think)
oobabooga 1176d64b13 Update README.md 2023-02-11 07:56:12 -03:00
Spencer-Dawson c5324d653b re-added missed README changes 2023-02-11 00:13:06 -07:00
oobabooga cf89ef1c74 Update README.md 2023-02-10 21:46:29 -03:00
oobabooga 8782ac1911 Update README.md 2023-02-10 17:10:27 -03:00
oobabooga 7d7cc37560 Add Linux 1-click installer 2023-02-10 17:09:53 -03:00
oobabooga 991de5ed40 Update README.md 2023-02-09 14:36:47 -03:00
oobabooga 04d3d0aee6 Add 1-click windows installer (for #45) 2023-02-09 13:27:30 -03:00
oobabooga a21620fc59 Update README 2023-02-08 01:17:50 -03:00
oobabooga fc0493d885 Add credits 2023-02-08 00:09:41 -03:00
oobabooga 53af062fa5 Update README.md 2023-02-05 23:14:25 -03:00
oobabooga 90bb2caffd Update README.md 2023-02-03 19:45:11 -03:00
oobabooga 93b0d1b1b8 Update README.md 2023-02-03 10:14:52 -03:00
oobabooga 6212b41930 Update README 2023-02-03 09:13:14 -03:00
oobabooga 7f4315b120 Mention 8bit fix for Windows users 2023-02-02 11:00:57 -03:00
    Closes #44, #20
oobabooga 7aa3d6583e Update README.md 2023-01-30 09:45:31 -03:00
oobabooga 239f96a9c5 Add extensions guide 2023-01-30 09:44:57 -03:00
oobabooga f92996b3c8 Update README.md 2023-01-29 14:37:05 -03:00
oobabooga 9e4db10cd0 Update README.md 2023-01-29 03:13:22 -03:00
oobabooga 1a139664f5 Grammar 2023-01-29 02:54:36 -03:00
oobabooga 89c862b179 Update README 2023-01-28 20:37:43 -03:00
oobabooga 6b5dcd46c5 Add support for extensions 2023-01-27 00:40:39 -03:00
    This is experimental.
oobabooga d49710878e Update README.md 2023-01-26 16:17:15 -03:00
oobabooga ff180f3e60 Update installation instructions (for #15) 2023-01-26 12:32:21 -03:00
oobabooga 61611197e0 Add --verbose option (oops) 2023-01-26 02:18:06 -03:00
oobabooga 64f278d248 Update README 2023-01-25 19:43:30 -03:00
oobabooga b77933d327 File names must be img_me.jpg and img_bot.jpg 2023-01-25 19:40:30 -03:00
oobabooga fc73188ec7 Allow specifying your own profile picture in chat mode 2023-01-25 19:37:44 -03:00
oobabooga 651eb50dd1 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-01-25 10:20:03 -03:00
oobabooga 3b8f0021cc Stop generating at \nYou: in chat mode 2023-01-25 10:17:55 -03:00
oobabooga befabbb862 Update README.md 2023-01-24 23:01:43 -03:00
oobabooga 7b98971df1 Update README.md 2023-01-24 18:39:29 -03:00
oobabooga 86b237eb0b Update README.md 2023-01-24 09:29:34 -03:00
oobabooga c2d1f04305 Update README.md 2023-01-23 23:34:20 -03:00
oobabooga 6c40f7eeb4 New NovelAI/KoboldAI preset selection 2023-01-23 20:44:27 -03:00
oobabooga 22845ba445 Update README 2023-01-23 13:41:50 -03:00
oobabooga 5b60691367 Update README 2023-01-23 10:05:25 -03:00
oobabooga 085d5cbcb9 Update README 2023-01-23 10:03:19 -03:00
oobabooga 8a68930220 Update README 2023-01-23 10:02:35 -03:00
oobabooga 3c5454c6f8 Update README 2023-01-22 19:38:07 -03:00
oobabooga 8f37383dc6 Update README 2023-01-22 19:29:24 -03:00
oobabooga 9592e7e618 Update README 2023-01-22 19:28:03 -03:00
oobabooga c410fe5ab8 Add simplified colab notebook 2023-01-22 19:27:30 -03:00
oobabooga 41806fe40d Update chat screenshot 2023-01-22 17:28:51 -03:00
oobabooga 00f3b0996b Warn the user that chat mode becomes a lot slower with text streaming 2023-01-22 16:19:11 -03:00
oobabooga 5b50db9dee Mention pygmalion support 2023-01-22 01:30:55 -03:00
oobabooga 6b06fca2f1 Mention pygmalion support 2023-01-22 01:28:52 -03:00
oobabooga 434d4b128c Add refresh buttons for the model/preset/character menus 2023-01-22 00:02:46 -03:00
oobabooga bc664ecf3b Update the installation instructions for low attention span people 2023-01-21 22:54:35 -03:00
oobabooga f9dbe7e08e Update README 2023-01-21 03:05:55 -03:00
oobabooga 990ee54ddd Move the example dialogue to the chat history, and keep it hidden. 2023-01-21 02:48:06 -03:00
    This greatly improves the performance of text generation, as histories can be quite long. It also makes more sense to implement it this way.
oobabooga d7299df01f Rename parameters 2023-01-21 00:33:41 -03:00
oobabooga 5df03bf0fd Merge branch 'main' into main 2023-01-21 00:25:34 -03:00
oobabooga faaafe7c0e Better parameter naming 2023-01-20 23:45:16 -03:00
Silver267 f4634e4c32 Update. 2023-01-20 17:05:43 -05:00
oobabooga 83584ae2d7 Clearer installation instructions 2023-01-20 00:20:35 -03:00
oobabooga 8d4170826f Update README 2023-01-19 21:08:26 -03:00
oobabooga cd7b07239f Add Colab guide 2023-01-19 17:58:04 -03:00
oobabooga 83808171d3 Add --share option for Colab 2023-01-19 17:31:29 -03:00
oobabooga b054367be2 Update README 2023-01-19 16:54:58 -03:00
oobabooga f9faad4cfa Add low VRAM guide 2023-01-19 11:25:17 -03:00
oobabooga 7ace04864a Implement sending layers to disk with --disk (#10) 2023-01-19 11:09:24 -03:00
oobabooga 1ce95ee817 Mention text streaming 2023-01-19 10:46:41 -03:00
oobabooga 93fa9bbe01 Clean up the streaming implementation 2023-01-19 10:43:05 -03:00
oobabooga 6456777b09 Clean things up 2023-01-16 16:35:45 -03:00
oobabooga 99d24bdbfe Update README.md 2023-01-16 11:23:45 -03:00