oobabooga
aa634c77c0
Update llama.cpp
2026-03-06 21:00:36 -08:00
oobabooga
2beaa4b971
Update llama.cpp
2026-03-06 14:39:35 -08:00
oobabooga
3323dedd08
Update llama.cpp
2026-03-06 06:30:01 -08:00
oobabooga
36dbc4ccce
Remove unused colorama and psutil requirements
2026-03-06 06:28:35 -08:00
oobabooga
0e0e3ceb97
Update the custom gradio wheels
2026-03-06 05:46:08 -08:00
oobabooga
8be444a559
Update the custom gradio wheels
2026-03-05 21:05:15 -08:00
oobabooga
1729fb07b9
Update llama.cpp
2026-03-05 21:04:24 -08:00
oobabooga
2f08dce7b0
Remove ExLlamaV2 backend
- archived upstream: 7dc12af3a8
- replaced by ExLlamaV3, which has much better quantization accuracy
2026-03-05 14:02:13 -08:00
oobabooga
438e59498e
Update ExLlamaV3 to v0.0.23
2026-03-05 10:24:31 -08:00
oobabooga
6a08e79fa5
Update the custom gradio wheels
2026-03-04 18:22:50 -08:00
oobabooga
83cc207ef7
Update the custom gradio wheels
2026-03-04 14:31:18 -08:00
oobabooga
0ffb75de7c
Update Transformers to 5.3.0
2026-03-04 11:12:54 -08:00
oobabooga
22141679e3
Update the custom gradio wheels
2026-03-04 10:01:31 -08:00
oobabooga
f010aa1612
Replace PyPDF2 with pymupdf for PDF text extraction
pymupdf produces cleaner text (e.g. no concatenated words in headers),
handles encrypted and malformed PDFs that PyPDF2 failed on, and
supports non-Latin scripts.
2026-03-04 06:43:37 -08:00
oobabooga
11dc6fdfce
Update the custom gradio wheels
2026-03-04 06:04:33 -08:00
oobabooga
7d42b6900e
Update the custom gradio wheels
2026-03-04 05:47:59 -08:00
oobabooga
c0bff831e3
Update custom gradio wheels
2026-03-03 17:21:18 -08:00
oobabooga
e9f22813e4
Replace gradio with my gradio 4.37.2 fork
2026-03-03 16:51:27 -08:00
dependabot[bot]
3519890c8e
Bump flask-cloudflared from 0.0.14 to 0.0.15 in /requirements/full (#7380)
2026-03-03 21:41:51 -03:00
dependabot[bot]
9c604628a0
Bump flask-cloudflared from 0.0.14 to 0.0.15 in /requirements/portable (#7382)
2026-03-03 21:41:46 -03:00
oobabooga
fbd2acfa19
Remove triton-windows from non-CUDA requirements
2026-03-03 16:16:55 -08:00
oobabooga
5fd79b23d1
Add CUDA 13.1 portable builds
2026-03-03 15:36:41 -08:00
oobabooga
b8fcc8ea32
Update llama.cpp, remove noavx2 builds, add ROCm Windows portable builds
2026-03-03 15:27:19 -08:00
oobabooga
38d0eeefc0
Update dependencies: torch 2.9.1, transformers 5.2, exllamav3 0.0.22, accelerate 1.12, huggingface-hub 1.5
2026-03-03 12:01:02 -08:00
oobabooga
ddd74324fe
Update PyTorch to 2.9.1 and ROCm to 6.4
2026-03-03 11:38:52 -08:00
oobabooga
efc72d5c32
Update Python from 3.11 to 3.13
2026-03-03 11:03:26 -08:00
dependabot[bot]
cae1fef42d
Bump triton-windows in /requirements/full (#7368)
2026-01-14 21:30:59 -03:00
oobabooga
d79cdc614c
Update llama.cpp
2026-01-08 11:24:15 -08:00
oobabooga
332fd40653
Update llama.cpp
2026-01-07 19:06:23 -08:00
dependabot[bot]
50a35b483c
Update bitsandbytes requirement in /requirements/full (#7353)
2026-01-06 15:27:23 -03:00
dependabot[bot]
45fbec0320
Update torchao requirement in /requirements/full (#7356)
2026-01-06 15:27:10 -03:00
oobabooga
b0968ed8b4
Update flash-linear-attention
2026-01-06 10:26:43 -08:00
oobabooga
bb3b7bc197
Update llama.cpp
2026-01-06 10:23:58 -08:00
oobabooga
09d88f91e8
Update llama.cpp
2025-12-19 21:00:13 -08:00
oobabooga
6e8fb0e7b1
Update llama.cpp
2025-12-14 13:32:14 -08:00
oobabooga
9fe40ff90f
Update exllamav3 to 0.0.18
2025-12-10 05:37:33 -08:00
oobabooga
8e762e04b4
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2025-12-09 05:27:43 -08:00
oobabooga
aa16266c38
Update llama.cpp
2025-12-09 03:19:23 -08:00
dependabot[bot]
85269d7fbb
Update safetensors requirement in /requirements/full (#7323)
2025-12-08 17:58:27 -03:00
dependabot[bot]
c4ebab9b29
Bump triton-windows in /requirements/full (#7346)
2025-12-08 17:56:07 -03:00
oobabooga
502f59d39b
Update diffusers to 0.36
2025-12-08 05:08:54 -08:00
oobabooga
3b8369a679
Update llama.cpp
2025-12-07 11:18:36 -08:00
oobabooga
17bd8d10f0
Update exllamav3 to 0.0.17
2025-12-07 09:37:18 -08:00
oobabooga
194e4c285f
Update llama.cpp
2025-12-06 08:14:48 -08:00
oobabooga
c93d27add3
Update llama.cpp
2025-12-03 18:29:43 -08:00
oobabooga
9448bf1caa
Image generation: add torchao quantization (supports torch.compile)
2025-12-02 14:22:51 -08:00
oobabooga
6291e72129
Remove quanto for now (requires messy compilation)
2025-12-02 09:57:18 -08:00
oobabooga
b3666e140d
Add image generation support (#7328)
2025-12-02 14:55:38 -03:00
oobabooga
78b315344a
Update exllamav3
2025-11-28 06:45:05 -08:00
oobabooga
3cad0cd4c1
Update llama.cpp
2025-11-28 03:52:37 -08:00