oobabooga
f010aa1612
Replace PyPDF2 with pymupdf for PDF text extraction
...
pymupdf produces cleaner text (e.g. no concatenated words in headers),
handles encrypted and malformed PDFs that PyPDF2 failed on, and
supports non-Latin scripts.
2026-03-04 06:43:37 -08:00
oobabooga
11dc6fdfce
Update the custom gradio wheels
2026-03-04 06:04:33 -08:00
oobabooga
7d42b6900e
Update the custom gradio wheels
2026-03-04 05:47:59 -08:00
oobabooga
c0bff831e3
Update custom gradio wheels
2026-03-03 17:21:18 -08:00
oobabooga
e9f22813e4
Replace gradio with my gradio 4.37.2 fork
2026-03-03 16:51:27 -08:00
dependabot[bot]
3519890c8e
Bump flask-cloudflared from 0.0.14 to 0.0.15 in /requirements/full (#7380)
2026-03-03 21:41:51 -03:00
dependabot[bot]
9c604628a0
Bump flask-cloudflared from 0.0.14 to 0.0.15 in /requirements/portable (#7382)
2026-03-03 21:41:46 -03:00
oobabooga
fbd2acfa19
Remove triton-windows from non-CUDA requirements
2026-03-03 16:16:55 -08:00
oobabooga
5fd79b23d1
Add CUDA 13.1 portable builds
2026-03-03 15:36:41 -08:00
oobabooga
b8fcc8ea32
Update llama.cpp, remove noavx2 builds, add ROCm Windows portable builds
2026-03-03 15:27:19 -08:00
oobabooga
38d0eeefc0
Update dependencies: torch 2.9.1, transformers 5.2, exllamav3 0.0.22, accelerate 1.12, huggingface-hub 1.5
2026-03-03 12:01:02 -08:00
oobabooga
ddd74324fe
Update PyTorch to 2.9.1 and ROCm to 6.4
2026-03-03 11:38:52 -08:00
oobabooga
efc72d5c32
Update Python from 3.11 to 3.13
2026-03-03 11:03:26 -08:00
dependabot[bot]
cae1fef42d
Bump triton-windows in /requirements/full (#7368)
2026-01-14 21:30:59 -03:00
oobabooga
d79cdc614c
Update llama.cpp
2026-01-08 11:24:15 -08:00
oobabooga
332fd40653
Update llama.cpp
2026-01-07 19:06:23 -08:00
dependabot[bot]
50a35b483c
Update bitsandbytes requirement in /requirements/full (#7353)
2026-01-06 15:27:23 -03:00
dependabot[bot]
45fbec0320
Update torchao requirement in /requirements/full (#7356)
2026-01-06 15:27:10 -03:00
oobabooga
b0968ed8b4
Update flash-linear-attention
2026-01-06 10:26:43 -08:00
oobabooga
bb3b7bc197
Update llama.cpp
2026-01-06 10:23:58 -08:00
oobabooga
09d88f91e8
Update llama.cpp
2025-12-19 21:00:13 -08:00
oobabooga
6e8fb0e7b1
Update llama.cpp
2025-12-14 13:32:14 -08:00
oobabooga
9fe40ff90f
Update exllamav3 to 0.0.18
2025-12-10 05:37:33 -08:00
oobabooga
8e762e04b4
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2025-12-09 05:27:43 -08:00
oobabooga
aa16266c38
Update llama.cpp
2025-12-09 03:19:23 -08:00
dependabot[bot]
85269d7fbb
Update safetensors requirement in /requirements/full (#7323)
2025-12-08 17:58:27 -03:00
dependabot[bot]
c4ebab9b29
Bump triton-windows in /requirements/full (#7346)
2025-12-08 17:56:07 -03:00
oobabooga
502f59d39b
Update diffusers to 0.36
2025-12-08 05:08:54 -08:00
oobabooga
3b8369a679
Update llama.cpp
2025-12-07 11:18:36 -08:00
oobabooga
17bd8d10f0
Update exllamav3 to 0.0.17
2025-12-07 09:37:18 -08:00
oobabooga
194e4c285f
Update llama.cpp
2025-12-06 08:14:48 -08:00
oobabooga
c93d27add3
Update llama.cpp
2025-12-03 18:29:43 -08:00
oobabooga
9448bf1caa
Image generation: add torchao quantization (supports torch.compile)
2025-12-02 14:22:51 -08:00
oobabooga
6291e72129
Remove quanto for now (requires messy compilation)
2025-12-02 09:57:18 -08:00
oobabooga
b3666e140d
Add image generation support (#7328)
2025-12-02 14:55:38 -03:00
oobabooga
78b315344a
Update exllamav3
2025-11-28 06:45:05 -08:00
oobabooga
3cad0cd4c1
Update llama.cpp
2025-11-28 03:52:37 -08:00
oobabooga
327a234d23
Add ROCm requirements.txt files
2025-11-18 16:24:56 -08:00
oobabooga
4e4abd0841
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2025-11-18 14:07:05 -08:00
oobabooga
c45f35ccc2
Remove the macos 13 wheels (deprecated by GitHub)
2025-11-18 14:06:42 -08:00
oobabooga
d85b95bb15
Update llama.cpp
2025-11-18 14:06:04 -08:00
dependabot[bot]
4a36b7be5b
Bump triton-windows in /requirements/full (#7311)
2025-11-18 18:51:26 -03:00
dependabot[bot]
3d7e9856a2
Update peft requirement from ==0.17.* to ==0.18.* in /requirements/full (#7310)
2025-11-18 18:51:15 -03:00
oobabooga
a26e28bdea
Update exllamav3 to 0.0.15
2025-11-18 11:24:16 -08:00
oobabooga
6a3bf1de92
Update exllamav3 to 0.0.14
2025-11-09 19:43:53 -08:00
oobabooga
e7534a90d8
Update llama.cpp
2025-11-05 18:46:01 -08:00
oobabooga
92d9cd36a6
Update llama.cpp
2025-11-05 05:43:34 -08:00
oobabooga
67f9288891
Pin huggingface-hub to 0.36.0 (solves #7284 and #7289)
2025-11-02 14:01:00 -08:00
oobabooga
cd645f80f8
Update exllamav3 to 0.0.12
2025-11-01 19:58:18 -07:00
dependabot[bot]
c8cd840b24
Bump flash-linear-attention from 0.3.2 to 0.4.0 in /requirements/full (#7285)
...
Bumps [flash-linear-attention](https://github.com/fla-org/flash-linear-attention) from 0.3.2 to 0.4.0.
- [Release notes](https://github.com/fla-org/flash-linear-attention/releases)
- [Commits](https://github.com/fla-org/flash-linear-attention/compare/v0.3.2...v0.4.0)
---
updated-dependencies:
- dependency-name: flash-linear-attention
dependency-version: 0.4.0
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-28 10:07:03 -03:00