Author | Commit | Message | Date
oobabooga | 373baa5c9c | UI: Minor image gallery improvements | 2025-12-03 14:45:02 -08:00
oobabooga | 906dc54969 | Load --image-model before --model | 2025-12-03 12:15:38 -08:00
oobabooga | 4468c49439 | Add semaphore to image generation API endpoint | 2025-12-03 12:02:47 -08:00
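Note on 4468c49439: a semaphore around the image endpoint presumably queues concurrent API requests so a single GPU is not asked to render several images at once. A minimal sketch of that general pattern; the function names (`handle_image_request`, `generate_image`) and the limit of one are hypothetical, not taken from the repository:

```python
import asyncio

# Illustrative only: allow one image generation at a time.
image_semaphore = asyncio.Semaphore(1)

async def generate_image(prompt: str) -> bytes:
    # Stand-in for the real diffusion call.
    await asyncio.sleep(0)
    return b""

async def handle_image_request(prompt: str) -> bytes:
    # Later requests wait here instead of oversubscribing VRAM.
    async with image_semaphore:
        return await generate_image(prompt)
```

With this pattern, any number of concurrent callers are served one after another rather than failing or competing for memory.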
oobabooga | 5ad174fad2 | docs: Add an image generation API example | 2025-12-03 11:58:54 -08:00
oobabooga | 5433ef3333 | Add an API endpoint for generating images | 2025-12-03 11:50:56 -08:00
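Note on 5433ef3333 / 5ad174fad2: the documentation example added alongside the endpoint is the authoritative reference; the sketch below only shows the rough shape of a client call. The route `/v1/images/generations`, the default port 5000, and the `b64_json` response field are assumptions patterned on OpenAI-compatible servers, not confirmed from these commits:

```python
import base64
import requests

# Assumed route and port; consult the project's API documentation.
url = "http://127.0.0.1:5000/v1/images/generations"
payload = {"prompt": "a watercolor fox", "n": 1}

resp = requests.post(url, json=payload, timeout=300)
resp.raise_for_status()

# Assumed response shape: a list of base64-encoded images under "data".
for i, item in enumerate(resp.json().get("data", [])):
    if "b64_json" in item:
        with open(f"image_{i}.png", "wb") as f:
            f.write(base64.b64decode(item["b64_json"]))
```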
oobabooga | 9448bf1caa | Image generation: add torchao quantization (supports torch.compile) | 2025-12-02 14:22:51 -08:00
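Note on 9448bf1caa: torchao quantizes module weights in place while keeping them traceable, which is why it composes with torch.compile. A hedged sketch of the usual torchao workflow; the attribute holding the denoiser (`transformer` vs. `unet`) and the int8 weight-only config are assumptions, not the exact code from the commit:

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

def quantize_denoiser(pipe):
    # Assumption: the pipeline exposes its denoiser as .transformer or .unet.
    denoiser = getattr(pipe, "transformer", None) or pipe.unet
    # int8 weight-only quantization, applied in place by torchao.
    quantize_(denoiser, int8_weight_only())
    # torchao-quantized modules remain traceable, so torch.compile still works.
    return torch.compile(denoiser)
```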
oobabooga | 97281ff831 | UI: Fix an index error in the new image gallery | 2025-12-02 11:20:52 -08:00
oobabooga | 9d07d3a229 | Make portable builds functional again after b3666e140d | 2025-12-02 10:06:57 -08:00
oobabooga | 6291e72129 | Remove quanto for now (requires messy compilation) | 2025-12-02 09:57:18 -08:00
oobabooga | b3666e140d | Add image generation support (#7328) | 2025-12-02 14:55:38 -03:00
oobabooga | a83821e941 | Revert "UI: Optimize typing in all textareas" | 2025-12-01 10:34:23 -08:00
    This reverts commit e24ba92ef2.
oobabooga | 24fd963c38 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-12-01 08:06:08 -08:00
oobabooga | e24ba92ef2 | UI: Optimize typing in all textareas | 2025-12-01 08:05:21 -08:00
aidevtime | 661e42d2b7 | fix(deps): upgrade coqui-tts to >=0.27.0 for transformers 4.55 compatibility (#7329) | 2025-11-28 22:59:36 -03:00
oobabooga | 5327bc9397 | Update modules/shared.py | 2025-11-28 22:48:05 -03:00
    Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
oobabooga | 78b315344a | Update exllamav3 | 2025-11-28 06:45:05 -08:00
oobabooga | 3cad0cd4c1 | Update llama.cpp | 2025-11-28 03:52:37 -08:00
GodEmperor785 | 400bb0694b | Add slider for --ubatch-size for llama.cpp loader, change defaults for better MoE performance (#7316) | 2025-11-21 16:56:02 -03:00
oobabooga | 8f0048663d | More modular HTML generator | 2025-11-21 07:09:16 -08:00
oobabooga | b0baf7518b | Remove macOS x86-64 portable builds (macos-13 runner deprecated by GitHub) | 2025-11-19 06:07:15 -08:00
oobabooga | 0d4eff284c | Add a --cpu-moe flag for llama.cpp | 2025-11-19 05:23:43 -08:00
oobabooga | d6f39e1fef | Add ROCm portable builds | 2025-11-18 16:32:20 -08:00
oobabooga | 327a234d23 | Add ROCm requirements.txt files | 2025-11-18 16:24:56 -08:00
oobabooga | 4e4abd0841 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-11-18 14:07:05 -08:00
oobabooga | c45f35ccc2 | Remove the macOS 13 wheels (deprecated by GitHub) | 2025-11-18 14:06:42 -08:00
oobabooga | d85b95bb15 | Update llama.cpp | 2025-11-18 14:06:04 -08:00
dependabot[bot] | 4a36b7be5b | Bump triton-windows in /requirements/full (#7311) | 2025-11-18 18:51:26 -03:00
dependabot[bot] | 3d7e9856a2 | Update peft requirement from ==0.17.* to ==0.18.* in /requirements/full (#7310) | 2025-11-18 18:51:15 -03:00
oobabooga | a26e28bdea | Update exllamav3 to 0.0.15 | 2025-11-18 11:24:16 -08:00
oobabooga | 6a3bf1de92 | Update exllamav3 to 0.0.14 | 2025-11-09 19:43:53 -08:00
oobabooga | e7534a90d8 | Update llama.cpp | 2025-11-05 18:46:01 -08:00
oobabooga | 6be1bfcc87 | Remove the CUDA 11.7 portable builds | 2025-11-05 05:45:10 -08:00
oobabooga | 92d9cd36a6 | Update llama.cpp | 2025-11-05 05:43:34 -08:00
oobabooga | 67f9288891 | Pin huggingface-hub to 0.36.0 (solves #7284 and #7289) | 2025-11-02 14:01:00 -08:00
oobabooga | 16f77b74c4 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-11-01 19:58:53 -07:00
oobabooga | cd645f80f8 | Update exllamav3 to 0.0.12 | 2025-11-01 19:58:18 -07:00
Trenten Miller | 6871484398 | fix: Rename 'evaluation_strategy' to 'eval_strategy' in training | 2025-10-28 16:48:04 -03:00
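Note on 6871484398: Hugging Face transformers renamed the TrainingArguments option `evaluation_strategy` to `eval_strategy`, and recent releases deprecate (and eventually reject) the old name, so the training code had to follow. A small example of the renamed argument; the output path and values are illustrative, not from the repository:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lora-out",     # illustrative output path
    eval_strategy="steps",     # formerly evaluation_strategy="steps"
    eval_steps=100,
    per_device_train_batch_size=4,
)
```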
oobabooga | 338ae36f73 | Add weights_only=True to torch.load in Training_PRO | 2025-10-28 12:43:16 -07:00
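Note on 338ae36f73: torch.load with weights_only=True restricts unpickling to tensors and basic containers, so a crafted checkpoint cannot execute arbitrary code when training state is loaded. A minimal example; the file name is illustrative:

```python
import torch

# Safe-by-default load: only tensors and simple containers are unpickled.
state = torch.load("training_checkpoint.pt", weights_only=True)

# The permissive form should be reserved for files you created yourself:
# state = torch.load("training_checkpoint.pt", weights_only=False)
```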
dependabot[bot] | c8cd840b24 | Bump flash-linear-attention from 0.3.2 to 0.4.0 in /requirements/full (#7285) | 2025-10-28 10:07:03 -03:00
    Bumps flash-linear-attention (https://github.com/fla-org/flash-linear-attention) from 0.3.2 to 0.4.0.
    Release notes: https://github.com/fla-org/flash-linear-attention/releases
    Commits: https://github.com/fla-org/flash-linear-attention/compare/v0.3.2...v0.4.0
    updated-dependencies: flash-linear-attention 0.4.0 (direct:production, version-update:semver-minor)
    Signed-off-by: dependabot[bot] <support@github.com>
    Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
oobabooga | f4c9e67155 | Update llama.cpp | 2025-10-23 08:19:32 -07:00
Immanuel | 9a84a828fc | Fixed Python requirements for Apple devices with macOS Tahoe (#7273) | 2025-10-22 14:59:27 -03:00
reksarka | 138cc654c4 | Make it possible to run a portable Web UI build via a symlink (#7277) | 2025-10-22 14:55:17 -03:00
oobabooga | 24fd2b4dec | Update exllamav3 to 0.0.11 | 2025-10-21 07:26:38 -07:00
oobabooga | be81f050a7 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-10-20 19:43:36 -07:00
oobabooga | 9476123ee6 | Update llama.cpp | 2025-10-20 19:43:26 -07:00
dependabot[bot] | 0d85744205 | Bump triton-windows in /requirements/full (#7274) | 2025-10-20 20:36:55 -03:00
oobabooga | a156ebbf76 | Lint | 2025-10-15 13:15:01 -07:00
oobabooga | c871d9cdbd | Revert "Same as 7f06aec3a1 but for exllamav3_hf" | 2025-10-15 13:05:41 -07:00
    This reverts commit deb37b821b.
oobabooga | 163d863443 | Update llama.cpp | 2025-10-15 11:23:10 -07:00
oobabooga | c93d567f97 | Update exllamav3 to 0.0.10 | 2025-10-15 06:41:09 -07:00