Commit graph

48 commits

Author SHA1 Message Date
oobabooga 193fe18c8c Resolve conflicts 2023-09-21 17:45:11 -07:00
oobabooga df39f455ad Merge remote-tracking branch 'second-repo/main' into merge-second-repo 2023-09-21 17:39:54 -07:00
oobabooga fc2b831692 Basic changes 2023-09-21 15:55:09 -07:00
oobabooga b04b3957f9 Move one-click-installers into the repository 2023-09-21 15:35:53 -07:00
oobabooga b74bf5638b Install extensions dependencies before webui dependencies
webui takes precedence over extensions.
2023-08-14 09:15:25 -07:00
jllllll 28e3ce4317 Simplify GPTQ-for-LLaMa installation (#122) 2023-08-10 13:19:47 -03:00
oobabooga fa4a948b38 Allow users to write one flag per line in CMD_FLAGS.txt 2023-08-09 01:58:23 -03:00
oobabooga 601fc424cd Several improvements (#117) 2023-08-03 14:39:46 -03:00
jllllll aca5679968 Properly fix broken gcc_linux-64 package (#115) 2023-08-02 23:39:07 -03:00
jllllll ecd92d6a4e Remove unused variable from ROCm GPTQ install (#107) 2023-07-26 22:16:36 -03:00
jllllll 1e3c950c7d Add AMD GPU support for Linux (#98) 2023-07-26 17:33:02 -03:00
jllllll 52e3b91f5e Fix broken gxx_linux-64 package. (#106) 2023-07-26 01:55:08 -03:00
oobabooga cc2ed46d44 Make chat the default again 2023-07-20 18:55:09 -03:00
jllllll fcb215fed5 Add check for compute support for GPTQ-for-LLaMa (#104)
Installs from main cuda repo if fork not supported
Also removed cuBLAS llama-cpp-python installation in preparation for 4b19b74e6c
2023-07-20 11:11:00 -03:00
jllllll 4df3f72753 Fix GPTQ fail message not being shown on update (#103) 2023-07-19 22:25:09 -03:00
jllllll 11a8fd1eb9 Add cuBLAS llama-cpp-python wheel installation (#102)
Parses requirements.txt using regex to determine required version.
2023-07-16 01:31:33 -03:00
oobabooga bb79037ebd Fix wrong pytorch version on Linux+CPU
It was installing nvidia wheels
2023-07-07 20:40:31 -03:00
oobabooga 564a8c507f Don't launch chat mode by default 2023-07-07 13:32:11 -03:00
jllllll eac8450ef7 Move special character check to start script (#92)
Also port print_big_message function to batch
2023-06-24 10:06:35 -03:00
jllllll 04cae3e5db Remove bitsandbytes compatibility workaround (#91)
New bnb does not need it.
Commented out in case it is needed in the future.
2023-06-21 15:40:41 -03:00
oobabooga 80a615c3ae Add space 2023-06-20 22:48:45 -03:00
oobabooga a2116e8b2b use uninstall -y 2023-06-20 21:24:01 -03:00
oobabooga c0a1baa46e Minor changes 2023-06-20 20:23:21 -03:00
jllllll 5cbc0b28f2 Workaround for Peft not updating their package version on the git repo (#88)
* Workaround for Peft not updating their git package version

* Update webui.py

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-06-20 20:21:10 -03:00
jllllll 9bb2fc8cd7 Install Pytorch through pip instead of Conda (#84) 2023-06-20 16:39:23 -03:00
jllllll b1d05cbbf6 Install exllama (#83)
* Install exllama

* Handle updating exllama
2023-06-17 19:10:36 -03:00
jllllll b2483e28d1 Check for special characters in path on Windows (#81)
Display warning message if detected
2023-06-17 19:09:22 -03:00
oobabooga 5540335819 Better way to detect if a model has been downloaded 2023-06-01 14:01:19 -03:00
oobabooga 248ef32358 Print a big message for CPU users 2023-06-01 01:40:24 -03:00
oobabooga 290a3374e4 Don't download a model during installation
And some other updates/minor improvements
2023-06-01 01:30:21 -03:00
Sam dea1bf3d04 Parse g++ version instead of using string matching (#72) 2023-05-31 14:44:36 -03:00
gavin660 97bc7e3fb6 Add functionality for user to set flags via environment variable (#59) 2023-05-31 14:43:22 -03:00
Sam 5405635305 Install pre-compiled wheels for Linux (#74) 2023-05-31 14:41:54 -03:00
jllllll be98e74337 Install older bitsandbytes on older gpus + fix llama-cpp-python issue (#75) 2023-05-31 14:41:03 -03:00
oobabooga c8ce2e777b Add instructions for CPU mode users 2023-05-25 10:57:52 -03:00
oobabooga 996c49daa7 Remove bitsandbytes installation step
Following 548f05e106
2023-05-25 10:50:20 -03:00
jllllll 4ef2de3486 Fix dependencies downgrading from gptq install (#61) 2023-05-18 12:46:04 -03:00
oobabooga 07510a2414 Change a message 2023-05-18 10:58:37 -03:00
oobabooga 0bcd5b6894 Soothe anxious users 2023-05-18 10:56:49 -03:00
oobabooga 1309cdd257 Add a space 2023-05-10 18:03:12 -03:00
oobabooga 3e19733d35 Remove obsolete comment 2023-05-10 18:01:04 -03:00
oobabooga d7d3f7f31c Add a "CMD_FLAGS" variable 2023-05-10 17:54:12 -03:00
oobabooga b8cfc20e58 Don't install superbooga by default 2023-05-09 14:17:08 -03:00
Semjon Kravtšenko 126d216384 Fix possible crash (#53) 2023-05-06 01:14:09 -03:00
Blake Wyatt 4babb22f84 Fix/Improve a bunch of things (#42) 2023-05-02 12:28:20 -03:00
oobabooga a4f6724b88 Add a comment 2023-04-24 16:47:22 -03:00
oobabooga 9a8487097b Remove --auto-devices 2023-04-24 16:43:52 -03:00
Blake Wyatt 6d2c72b593 Add support for MacOS, Linux, and WSL (#21)
* Initial commit

* Initial commit with new code

* Add comments

* Move GPTQ out of if

* Fix install on Arch Linux

* Fix case where install was aborted

If the install was aborted before a model was downloaded, webui wouldn't run.

* Update start_windows.bat

Add necessary flags to Miniconda installer
Disable Start Menu shortcut creation
Disable ssl on Conda
Change Python version to latest 3.10;
I've noticed that explicitly specifying 3.10.9 can break the included Python installation

* Update bitsandbytes wheel link to 0.38.1

Disable ssl on Conda

* Add check for spaces in path

Installation of Miniconda will fail in this case

* Mirror changes to mac and linux scripts

* Start with model-menu

* Add updaters

* Fix line endings

* Add check for path with spaces

* Fix one-click updating

* Fix one-click updating

* Clean up update scripts

* Add environment scripts

---------

Co-authored-by: jllllll <3887729+jllllll@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-18 02:23:09 -03:00