Author | Commit | Message | Date
oobabooga | a156ebbf76 | Lint | 2025-10-15 13:15:01 -07:00
oobabooga | 25360387ec | Downloader: Fix resuming downloads after HF moved to Xet | 2025-10-10 08:27:40 -07:00
oobabooga | bf5d85c922 | Revert "Downloader: Gracefully handle '416 Range Not Satisfiable' when continuing downloads" (reverts commit 1aa2b924d2) | 2025-10-09 17:22:41 -07:00
oobabooga | 1aa2b924d2 | Downloader: Gracefully handle '416 Range Not Satisfiable' when continuing downloads | 2025-10-09 10:52:31 -07:00
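The three commits above revolve around one mechanism: the downloader resumes an interrupted file by requesting the missing bytes with an HTTP `Range` header, and a `416 Range Not Satisfiable` reply normally means the local file is already complete. A minimal sketch of that technique, assuming `requests`; the function name and simplified control flow are illustrative, not the actual download-model.py code:

```python
import os
import requests

def resume_download(url: str, path: str, chunk_size: int = 1024 * 1024) -> None:
    """Resume a partial download with an HTTP Range request (sketch only)."""
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}

    with requests.get(url, headers=headers, stream=True, timeout=10) as r:
        if r.status_code == 416:
            # Range Not Satisfiable: the local file already covers the
            # full content, so there is nothing left to fetch.
            return
        if offset and r.status_code != 206:
            # Server ignored the Range header (e.g. after a backend change
            # such as HF's move to Xet); start over from byte 0.
            offset = 0
        r.raise_for_status()
        with open(path, "ab" if offset else "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
```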
oobabooga | 9c6913ad61 | Show file sizes on "Get file list" | 2025-06-18 21:35:07 -07:00
oobabooga | aa44e542cb | Revert "Safer usage of mkdir across the project" (reverts commit 0d1597616f) | 2025-06-17 07:11:59 -07:00
oobabooga | 0d1597616f | Safer usage of mkdir across the project | 2025-06-17 07:09:33 -07:00
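The "Safer usage of mkdir" change (applied and reverted two minutes apart) concerns the standard idiom for creating directories idempotently, sketched below; the path is illustrative:

```python
from pathlib import Path

# parents=True builds any missing intermediate directories, and
# exist_ok=True makes the call a no-op when the directory is already
# there, so concurrent or repeated calls cannot fail on "already exists".
output_folder = Path("models") / "some-model"
output_folder.mkdir(parents=True, exist_ok=True)
```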
oobabooga | 92adceb7b5 | UI: Fix the model downloader progress bar | 2025-06-01 19:22:21 -07:00
oobabooga | d9de14d1f7 | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00
oobabooga | 3d4f3e423c | Downloader: Make progress bars not jump around (adapted from https://gist.github.com/NiklasBeierl/13096bfdd8b2084da8c1163dd06f91d3) | 2025-01-25 07:44:24 -08:00
Jack Cloudman | d3adcbf64b | Add --exclude-pattern flag to download-model.py script (#6542) | 2025-01-08 17:30:21 -03:00
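An `--exclude-pattern` flag implies filtering the repository file list against user-supplied patterns before downloading. A hypothetical sketch of such a filter using `fnmatch`; the helper name and matching rules are assumptions, not the flag's actual implementation:

```python
import fnmatch

def filter_files(files: list[str], exclude_patterns: list[str]) -> list[str]:
    """Drop any file whose name matches one of the exclude patterns.

    Hypothetical helper; download-model.py may use different names
    and matching rules.
    """
    return [
        f for f in files
        if not any(fnmatch.fnmatch(f, pat) for pat in exclude_patterns)
    ]

# e.g. skip original .bin/.pth weights when only GGUF files are wanted:
# filter_files(file_list, ["*.bin", "*.pth"])
```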
oobabooga | f106e780ba | downloader: use 1 session for all files for better speed | 2024-08-06 19:41:12 -07:00
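This commit replaces the earlier one-session-per-file approach (f465b7b486, further down) with a single shared session. The point of a shared `requests.Session` is connection reuse, as in this sketch:

```python
import requests

# A single Session lets all file downloads share one connection pool,
# so TCP/TLS handshakes are paid once per host instead of once per request.
session = requests.Session()

def fetch(url: str) -> bytes:
    response = session.get(url, timeout=10)
    response.raise_for_status()
    return response.content
```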
oobabooga | f4d95f33b8 | downloader: better progress bar | 2024-07-28 22:21:56 -07:00
oobabooga | 4f1e96b9e3 | Downloader: Add --model-dir argument, respect --model-dir in the UI | 2024-05-23 20:42:46 -07:00
oobabooga | e225b0b995 | downloader: fix downloading 01-ai/Yi-1.5-34B-Chat | 2024-05-12 10:43:50 -07:00
oobabooga | 0b193b8553 | Downloader: handle one more retry case after 5770e06c48 | 2024-05-04 19:25:22 -07:00
oobabooga | 5770e06c48 | Add a retry mechanism to the model downloader (#5943) | 2024-04-27 12:25:28 -03:00
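A retry mechanism of the kind named above is typically a bounded loop with backoff. A generic sketch, assuming `requests`; the real downloader's retry conditions, counts, and delays may differ:

```python
import time
import requests

def download_with_retries(url: str, max_retries: int = 5) -> requests.Response:
    """Retry transient failures with simple exponential backoff (sketch)."""
    for attempt in range(max_retries):
        try:
            response = requests.get(url, stream=True, timeout=10)
            response.raise_for_status()
            return response
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise  # out of attempts; surface the last error
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
```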
zaypen | a90509d82e | Model downloader: Take HF_ENDPOINT in consideration (#5571) | 2024-04-11 18:28:10 -03:00
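`HF_ENDPOINT` is the environment variable Hugging Face tooling uses to point at a mirror. A sketch of honoring it when building download URLs; the repo path is illustrative:

```python
import os

# Mirror users (e.g. behind a firewall) set HF_ENDPOINT; default to the
# official host when the variable is absent.
base = os.environ.get("HF_ENDPOINT", "https://huggingface.co")
url = f"{base}/gpt2/resolve/main/config.json"  # illustrative repo/file
```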
oobabooga | 830168d3d4 | Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. (#4383)" (reverts commit 0ced78fdfa) | 2024-02-26 05:54:33 -08:00
oobabooga | f465b7b486 | Downloader: start one session per file (#5520) | 2024-02-16 12:55:27 -03:00
oobabooga | 44018c2f69 | Add a "llamacpp_HF creator" menu (#5519) | 2024-02-16 12:43:24 -03:00
oobabooga | ee65f4f014 | Downloader: don't assume that huggingface_hub is installed | 2024-01-30 09:14:11 -08:00
Anthony Guijarro | 828be63f2c | Downloader: use HF get_token function (#5381) | 2024-01-27 17:13:09 -03:00
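The two commits above interact: the downloader reads the cached Hugging Face token, but must not hard-require `huggingface_hub`. A sketch of that pattern; whether `get_token` is importable at top level depends on the installed `huggingface_hub` version, so treat the import as an assumption:

```python
import os
import requests

try:
    from huggingface_hub import get_token
except ImportError:
    # huggingface_hub may not be installed (commit ee65f4f014 makes the
    # dependency optional); fall back to the environment variable.
    def get_token():
        return os.environ.get("HF_TOKEN")

token = get_token()
headers = {"Authorization": f"Bearer {token}"} if token else {}
# requests.get(file_url, headers=headers)  # file_url is illustrative
```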
oobabooga | 7bbe7e803a | Minor fix | 2023-12-08 05:01:25 -08:00
oobabooga | d516815c9c | Model downloader: download only fp16 if both fp16 and GGUF are present | 2023-12-05 21:09:12 -08:00
oobabooga | 510a01ef46 | Lint | 2023-11-16 18:03:06 -08:00
LightningDragon | 0ced78fdfa | Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. (#4383) | 2023-10-25 12:15:34 -03:00
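`hashlib.file_digest` (Python 3.11+) streams a file through the hash in chunks instead of reading it wholly into memory; the 3.11 requirement may also explain the later revert in 830168d3d4. A sketch of the stdlib call:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file without loading it fully into RAM.

    hashlib.file_digest reads the file incrementally, unlike
    hashlib.sha256(open(path, "rb").read()), which buffers everything.
    """
    with open(path, "rb") as f:
        return hashlib.file_digest(f, "sha256").hexdigest()
```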
oobabooga | 613feca23b | Make colab functional for llama.cpp (download only Q4_K_M for GGUF repositories by default; use maximum n-gpu-layers by default) | 2023-10-22 09:08:25 -07:00
oobabooga | cd45635f53 | tqdm improvement for colab | 2023-10-21 22:00:29 -07:00
oobabooga | 3a9d90c3a1 | Download models with 4 threads by default | 2023-10-10 13:52:10 -07:00
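Downloading with a small fixed pool of worker threads is the standard pattern behind the "4 threads by default" behavior. A sketch with `ThreadPoolExecutor`; `download_one` is a hypothetical per-file callable:

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(urls: list[str], download_one, threads: int = 4) -> None:
    """Fetch files concurrently with a bounded worker pool (sketch).

    download_one is a hypothetical callable that downloads a single URL.
    """
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # list() forces iteration so exceptions from workers propagate.
        list(pool.map(download_one, urls))
```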
快乐的我531 | 4e56ad55e1 | Let model downloader download *.tiktoken as well (#4121) | 2023-09-28 18:03:18 -03:00
kalomaze | 7c9664ed35 | Allow full model URL to be used for download (#3919) (Co-authored-by: oobabooga) | 2023-09-16 10:06:13 -03:00
oobabooga | df52dab67b | Lint | 2023-09-11 07:57:38 -07:00
oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00
missionfloyd | 787219267c | Allow downloading single file from UI (#3737) | 2023-08-29 23:32:36 -03:00
Alberto Ferrer | f63dd83631 | Update download-model.py (Allow single file download) (#3732) | 2023-08-29 22:57:58 -03:00
oobabooga | 7f5370a272 | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00
jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00
oobabooga | 83640d6f43 | Replace ggml occurences with gguf | 2023-08-26 01:06:59 -07:00
Thomas De Bonnet | 0dfd1a8b7d | Improve readability of download-model.py (#3497) | 2023-08-20 20:13:13 -03:00
oobabooga | 4b3384e353 | Handle unfinished lists during markdown streaming | 2023-08-03 17:15:18 -07:00
oobabooga | 13449aa44d | Decrease download timeout | 2023-07-15 22:30:08 -07:00
oobabooga | e202190c4f | lint | 2023-07-12 11:33:25 -07:00
Ahmad Fahadh Ilyas | 8db7e857b1 | Add token authorization for downloading model (#3067) | 2023-07-11 18:48:08 -03:00
FartyPants | 61102899cd | google flan T5 download fix (#3080) | 2023-07-11 18:46:59 -03:00
tianchen zhong | c7058afb40 | Add new possible bin file name regex (#3070) | 2023-07-09 17:22:56 -03:00
jeckyhl | 88a747b5b9 | fix: Error when downloading model from UI (#3014) | 2023-07-05 11:27:29 -03:00
AN Long | be4582be40 | Support specify retry times in download-model.py (#2908) | 2023-07-04 22:26:30 -03:00
Roman | 38897fbd8a | fix: added model parameter check (#2829) | 2023-06-24 10:09:34 -03:00
Gaurav Bhagchandani | 89fb6f9236 | Fixed the ZeroDivisionError when downloading a model (#2797) | 2023-06-21 12:31:50 -03:00
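A `ZeroDivisionError` in a downloader typically comes from computing progress as downloaded/total when the server reports no `Content-Length` (total == 0). A guarded sketch; the function name is illustrative:

```python
def progress_fraction(downloaded: int, total: int) -> float:
    """Return download progress in [0, 1], guarding against total == 0
    (e.g. chunked responses without Content-Length, or empty files)."""
    return downloaded / total if total > 0 else 0.0
```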