Commit graph

4842 commits

Author SHA1 Message Date
oobabooga d28a5c9569 Remove unnecessary css 2023-03-31 02:01:13 -03:00
ye7iaserag ec093a5af7 Fix div alignment for long strings 2023-03-31 06:54:24 +02:00
oobabooga 92c7068daf Don't download if --check is specified 2023-03-31 01:31:47 -03:00
oobabooga 3737eafeaa Remove a border and allow more characters per pagination page 2023-03-31 00:48:50 -03:00
oobabooga fd72afd8e7 Increase the textbox sizes 2023-03-31 00:43:00 -03:00
oobabooga f27a66b014 Bump gradio version (make sure to update)
This fixes the textbox shrinking vertically once it reaches
a certain number of lines.
2023-03-31 00:42:26 -03:00
Nikita Skakun 0cc89e7755 Checksum code now activated by --check flag. 2023-03-30 20:06:12 -07:00
ye7iaserag f9940b79dc Implement character gallery using Dataset 2023-03-31 04:56:49 +02:00
jllllll e4e3c9095d Add warning for long paths 2023-03-30 20:48:40 -05:00
jllllll 172035d2e1 Minor Correction 2023-03-30 20:44:56 -05:00
jllllll 0b4ee14edc Attempt to Improve Reliability
Have pip directly download and install backup GPTQ wheel instead of first downloading through curl.
Install bitsandbytes from wheel compiled for Windows from modified source.
Add clarification of minor, intermittent issue to instructions.
Add system32 folder to end of PATH rather than beginning.
Add warning when installed under a path containing spaces.
2023-03-30 20:04:16 -05:00
oobabooga bb69e054a7 Add dummy file 2023-03-30 21:08:50 -03:00
oobabooga 85e4ec6e6b Download the cuda branch directly 2023-03-30 18:22:48 -03:00
oobabooga 78c0da4a18 Use the cuda branch of gptq-for-llama
Did I do this right @jllllll? This is because the current default branch (triton) is not compatible with Windows.
2023-03-30 18:04:05 -03:00
oobabooga d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
Nikita Skakun d550c12a3e Fixed the bug with additional bytes.
The issue seems to be with huggingface not reporting the entire size of the model.
Added an error message with instructions if the checksums don't match.
2023-03-30 12:52:16 -07:00
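The checksum workflow described in the commits above (compute SHA-256 for downloaded model files, raise a helpful error on mismatch) could be sketched roughly as follows. This is a minimal illustration with hypothetical function names, not the repository's actual download-model.py code:

```python
import hashlib

def sha256sum(path, chunk_size=1024 * 1024):
    """Compute the SHA-256 hex digest of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def check_model_file(path, expected):
    """Raise with re-download instructions if the file's checksum doesn't match."""
    actual = sha256sum(path)
    if actual != expected:
        raise ValueError(
            f"Checksum mismatch for {path}: expected {expected}, got {actual}. "
            "The download may be incomplete or corrupted; try re-downloading the file."
        )
```

Hashing in fixed-size chunks matters here because model files are typically many gigabytes.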
Thomas Antony 7fa5d96c22 Update to use new llamacpp API 2023-03-30 11:23:05 +01:00
Thomas Antony 79fa2b6d7e Add support for alpaca 2023-03-30 11:23:04 +01:00
Thomas Antony 8953a262cb Add llamacpp to requirements.txt 2023-03-30 11:22:38 +01:00
Thomas Antony a5f5736e74 Add to text_generation.py 2023-03-30 11:22:38 +01:00
Thomas Antony 7745faa7bb Add llamacpp to models.py 2023-03-30 11:22:37 +01:00
Thomas Antony 7a562481fa Initial version of llamacpp_model.py 2023-03-30 11:22:07 +01:00
Thomas Antony 53ab1e285d Update .gitignore 2023-03-30 11:22:07 +01:00
Nikita Skakun 297ac051d9 Added sha256 validation of model files. 2023-03-30 02:34:19 -07:00
Nikita Skakun 8c590c2362 Added a 'clean' flag to not resume download. 2023-03-30 00:42:19 -07:00
Nikita Skakun e17af59261 Add support for resuming downloads
This commit adds support for resuming interrupted downloads via a new function in the downloader module. The function uses the HTTP Range header to fetch only the remaining part of a file that wasn't downloaded yet.
2023-03-30 00:21:34 -07:00
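The resume technique named in the commit body above — using the HTTP Range header to fetch only the missing tail of a file — can be sketched like this. A minimal stdlib-only illustration with assumed function names, not the project's actual downloader:

```python
import os
import urllib.request

def range_header(start):
    """Range header asking the server for bytes from offset `start` onward."""
    return {"Range": f"bytes={start}-"} if start else {}

def resume_download(url, path, chunk_size=1024 * 1024):
    """Download `url` to `path`, resuming from any partial file already on disk."""
    start = os.path.getsize(path) if os.path.exists(path) else 0
    req = urllib.request.Request(url, headers=range_header(start))
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content means the server honored the Range request and is
        # sending only the remainder; a plain 200 means it ignored the header,
        # so the file must be rewritten from scratch.
        mode = "ab" if resp.status == 206 else "wb"
        with open(path, mode) as f:
            while chunk := resp.read(chunk_size):
                f.write(chunk)
```

Checking for status 206 is the important detail: appending a full 200 response to a partial file would corrupt it.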
oobabooga f0fdab08d3 Increase --chat height 2023-03-30 01:02:11 -03:00
oobabooga bd65940a48 Increase --chat box height 2023-03-30 00:43:49 -03:00
oobabooga 131753fcf5 Save the sha256sum of downloaded models 2023-03-29 23:28:16 -03:00
oobabooga a21e580782 Move an import 2023-03-29 22:50:58 -03:00
oobabooga 55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga 1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga 0345e04249 Fix "Unknown argument(s): {'verbose': False}" 2023-03-29 21:17:48 -03:00
oobabooga 9104164297 Merge pull request #618 from nikita-skakun/optimize-download-model
Improve download-model.py progress bar with multiple threads
2023-03-29 20:54:19 -03:00
oobabooga 37754164eb Move argparse 2023-03-29 20:47:36 -03:00
oobabooga 6403e72062 Merge branch 'main' into nikita-skakun-optimize-download-model 2023-03-29 20:45:33 -03:00
oobabooga 1445ea86f7 Add --output and better metadata for downloading models 2023-03-29 20:26:44 -03:00
oobabooga 58349f44a0 Handle training exception for unsupported models 2023-03-29 11:55:34 -03:00
oobabooga a6d0373063 Fix training dataset loading #636 2023-03-29 11:48:17 -03:00
oobabooga 41b58bc47e Update README.md 2023-03-29 11:02:29 -03:00
oobabooga 0de4f24b12 Merge pull request #4 from jllllll/oobabooga-windows
Change Micromamba download link
2023-03-29 09:49:32 -03:00
jllllll ed0e593161 Change Micromamba download
Changed the link to point to a previous version. This provides a stable source for Micromamba so that new releases don't cause issues.
2023-03-29 02:47:19 -05:00
oobabooga 3b4447a4fe Update README.md 2023-03-29 02:24:11 -03:00
oobabooga 5d0b83c341 Update README.md 2023-03-29 02:22:19 -03:00
oobabooga c2a863f87d Mention the updated one-click installer 2023-03-29 02:11:51 -03:00
oobabooga da3aa8fbda Merge pull request #2 from jllllll/oobabooga-windows
Update one-click-installer for Windows
2023-03-29 01:55:47 -03:00
oobabooga 1edfb96778 Fix loading extensions from within the interface 2023-03-28 23:27:02 -03:00
Nikita Skakun aaa218a102 Remove unused import. 2023-03-28 18:32:49 -07:00
Nikita Skakun ff515ec2fe Improve progress bar visual style
This commit reverts the previous commit's performance improvements in favor of a better visual style for the multithreaded progress bars. Each bar is now rendered at the same fixed width so that the bars stay aligned.
2023-03-28 18:29:20 -07:00
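The alignment idea from the commit above — every thread's bar rendered at the same fixed width so stacked bars line up — can be illustrated with a small sketch (illustrative only; not the project's implementation):

```python
def render_bar(done, total, width=30):
    """Render a progress bar at a fixed character width so that bars from
    multiple download threads line up when printed one per line."""
    frac = done / total if total else 1.0
    filled = int(frac * width)
    return f"[{'#' * filled}{'.' * (width - filled)}] {frac:6.1%}"
```

Because the bracketed region and the percentage field have constant widths regardless of progress, printing one such bar per thread keeps the columns aligned.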
oobabooga 304f812c63 Gracefully handle CUDA out of memory errors with streaming 2023-03-28 19:20:50 -03:00