Author | Commit | Message | Date
dependabot[bot] | 1cb618201c | Update bitsandbytes requirement in /requirements/full | 2025-12-15 20:26:01 +00:00
    Updates the requirements on [bitsandbytes](https://github.com/bitsandbytes-foundation/bitsandbytes) to permit the latest version.
    - [Release notes](https://github.com/bitsandbytes-foundation/bitsandbytes/releases)
    - [Changelog](https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/CHANGELOG.md)
    - [Commits](https://github.com/bitsandbytes-foundation/bitsandbytes/compare/0.48.0...0.49.0)
    ---
    updated-dependencies:
    - dependency-name: bitsandbytes
      dependency-version: 0.49.0
      dependency-type: direct:production
    ...
    Signed-off-by: dependabot[bot] <support@github.com>
oobabooga | 6e8fb0e7b1 | Update llama.cpp | 2025-12-14 13:32:14 -08:00
oobabooga | 9fe40ff90f | Update exllamav3 to 0.0.18 | 2025-12-10 05:37:33 -08:00
oobabooga | 8e762e04b4 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-12-09 05:27:43 -08:00
oobabooga | aa16266c38 | Update llama.cpp | 2025-12-09 03:19:23 -08:00
dependabot[bot] | 85269d7fbb | Update safetensors requirement in /requirements/full (#7323) | 2025-12-08 17:58:27 -03:00
dependabot[bot] | c4ebab9b29 | Bump triton-windows in /requirements/full (#7346) | 2025-12-08 17:56:07 -03:00
oobabooga | 502f59d39b | Update diffusers to 0.36 | 2025-12-08 05:08:54 -08:00
oobabooga | e7c8b51fec | Revert "Use flash_attention_2 by default for Transformers models" | 2025-12-07 18:48:41 -08:00
    This reverts commit 85f2df92e9.
oobabooga | b758059e95 | Revert "Clear the torch cache between sequential image generations" | 2025-12-07 12:23:19 -08:00
    This reverts commit 1ec9f708e5.
oobabooga | 1ec9f708e5 | Clear the torch cache between sequential image generations | 2025-12-07 11:49:22 -08:00
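The cache-clearing commit above (1ec9f708e5, rolled back 34 minutes later by b758059e95) toggles a standard PyTorch idiom. A minimal sketch of that idiom, not the repository's actual code; the helper name is invented:

```python
import gc

import torch


def free_torch_memory():
    """Illustrative helper: release cached GPU memory between image generations."""
    gc.collect()  # drop unreachable Python objects still holding tensor references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached, unused blocks to the CUDA driver
```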
oobabooga | 3b8369a679 | Update llama.cpp | 2025-12-07 11:18:36 -08:00
oobabooga | 058e78411d | docs: Small changes | 2025-12-07 10:16:08 -08:00
oobabooga | 17bd8d10f0 | Update exllamav3 to 0.0.17 | 2025-12-07 09:37:18 -08:00
oobabooga | 85f2df92e9 | Use flash_attention_2 by default for Transformers models | 2025-12-07 06:56:58 -08:00
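Commit 85f2df92e9 (reverted the same day by e7c8b51fec, above) flips the attention backend default for Transformers loads. The standard Transformers switch looks like the sketch below; the model id is a placeholder, not something the commit names:

```python
from transformers import AutoModelForCausalLM

# Requires the flash-attn package and a supported GPU; Transformers raises
# an error rather than silently falling back if either is missing.
model = AutoModelForCausalLM.from_pretrained(
    "org/model-name",  # placeholder model id
    attn_implementation="flash_attention_2",
    torch_dtype="auto",
)
```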
oobabooga | 1762312fb4 | Use random instead of np.random for image seeds (makes it work on Windows) | 2025-12-06 20:10:32 -08:00
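A hedged guess at why this matters on Windows: NumPy's legacy RNG helpers are bounded by the platform C long, which is 32 bits on Windows, while Python's random module uses arbitrary-precision integers. A minimal illustration:

```python
import random

# Python's random handles arbitrary-precision ints, so 64-bit image seeds
# are safe on every platform:
seed = random.randint(0, 2**64 - 1)

# The legacy NumPy equivalent can overflow on Windows, where C long is
# 32 bits; older NumPy raises "high is out of bounds for int32" there:
# import numpy as np
# seed = np.random.randint(0, 2**64 - 1)  # fails on Windows with older NumPy
```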
oobabooga | 160a25165a | docs: Small change | 2025-12-06 08:41:12 -08:00
oobabooga | f93cc4b5c3 | Add an API example to the image generation tutorial | 2025-12-06 08:33:06 -08:00
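In the spirit of the tutorial example added by f93cc4b5c3, an OpenAI-compatible image request to a local server might look like the sketch below; the host, port, and endpoint path are assumptions based on the project's API defaults, not a quote from the tutorial:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/images/generations",  # assumed default host/port
    json={"prompt": "a watercolor fox in the snow", "n": 1},
)
print(resp.json()["data"][0].keys())
```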
oobabooga | c026dbaf64 | Fix API requests always returning the same 'created' time | 2025-12-06 08:23:21 -08:00
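A constant 'created' value is the classic symptom of a timestamp captured once at import time, for example in a default argument, instead of per request. The pair below illustrates that general pattern; it is a guess at the shape of the bug, not the project's actual code:

```python
import time

def make_response_buggy(created=int(time.time())):
    # The default is evaluated once, when the module loads, so every
    # response reuses the same timestamp.
    return {"created": created}

def make_response_fixed():
    # Capture the time inside the call, once per request.
    return {"created": int(time.time())}
```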
oobabooga | 194e4c285f | Update llama.cpp | 2025-12-06 08:14:48 -08:00
oobabooga | 1c36559e2b | Add a News section to the README | 2025-12-06 07:05:00 -08:00
oobabooga | 02518a96a9 | Lint | 2025-12-06 06:55:06 -08:00
oobabooga | 0100ad1bd7 | Add user_data/image_outputs to the Gradio allowed paths | 2025-12-06 06:39:30 -08:00
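Gradio only serves files from directories it has been explicitly allowed, which is the mechanism 0100ad1bd7 uses for the image output directory. A generic sketch of that mechanism; the launch call is illustrative, not the project's code:

```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Gallery(label="Image outputs")

# Without allowed_paths, Gradio refuses to serve files outside its own
# temp/static directories.
demo.launch(allowed_paths=["user_data/image_outputs"])
```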
oobabooga | 6411142111 | docs: Small changes | 2025-12-06 06:36:16 -08:00
oobabooga | 455dc06db0 | Serve the original PNG images in the UI instead of webp | 2025-12-06 05:43:00 -08:00
oobabooga | 1a9ed1fe98 | Fix the height of the image output gallery | 2025-12-06 05:21:26 -08:00
oobabooga | 17b12567d8 | docs: Small changes | 2025-12-05 14:15:15 -08:00
oobabooga | e20b2d38ff | docs: Add VRAM measurements for Z-Image-Turbo | 2025-12-05 14:12:08 -08:00
oobabooga | 6ca99910ba | Image: Quantize the text encoder for lower VRAM | 2025-12-05 13:08:46 -08:00
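Quantizing a diffusion pipeline's text encoder is a common VRAM saver. A hedged sketch of the general technique named in 6ca99910ba; the repo id and encoder class are placeholders and may not match what the project actually loads:

```python
import torch
from transformers import BitsAndBytesConfig, T5EncoderModel

text_encoder = T5EncoderModel.from_pretrained(
    "some/diffusion-model",    # placeholder repo id
    subfolder="text_encoder",  # placeholder subfolder layout
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)
```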
oobabooga | 11937de517 | Use flash attention for image generation by default | 2025-12-05 12:13:24 -08:00
oobabooga | eba8a59466 | docs: Improve the image generation tutorial | 2025-12-05 12:10:41 -08:00
oobabooga | 5848c7884d | Increase the height of the image output gallery | 2025-12-05 10:24:51 -08:00
oobabooga | c11c14590a | Image: Better LLM variation default prompt | 2025-12-05 08:08:11 -08:00
oobabooga | 0dd468245c | Image: Add back the gallery cache (for performance) | 2025-12-05 07:11:38 -08:00
oobabooga | b63d57158d | Image: Add TGW as a prefix to output images | 2025-12-05 05:59:54 -08:00
oobabooga | afa29b9554 | Image: Several fixes | 2025-12-05 05:58:57 -08:00
oobabooga | 8eac99599a | Image: Better LLM variation default prompt | 2025-12-04 19:58:06 -08:00
oobabooga | b4f06a50b0 | fix: Pass bos_token and eos_token from metadata to jinja2 | 2025-12-04 19:11:31 -08:00
    Fixes loading Seed-Instruct-36B
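Chat templates are Jinja2 templates, and many of them reference bos_token and eos_token directly, so those values must be present in the render context or rendering fails for models whose templates use them, which is what b4f06a50b0 fixes for Seed-Instruct-36B. A self-contained illustration with made-up template and token values:

```python
from jinja2 import Template

chat_template = Template(
    "{{ bos_token }}"
    "{% for m in messages %}{{ m['content'] }}{% endfor %}"
    "{{ eos_token }}"
)
prompt = chat_template.render(
    messages=[{"role": "user", "content": "Hello"}],
    bos_token="<bos>",  # in the real fix these come from the model's metadata
    eos_token="<eos>",
)
print(prompt)  # <bos>Hello<eos>
```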
oobabooga | 15c6e43597 | Image: Add a revised_prompt field to API results for OpenAI compatibility | 2025-12-04 17:41:09 -08:00
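For context, OpenAI's images API attaches the possibly rewritten prompt to each result item as revised_prompt, so a compatible response is shaped roughly like this (illustrative values):

```python
response = {
    "created": 1733350000,
    "data": [
        {
            "b64_json": "<base64-encoded PNG>",
            "revised_prompt": "a watercolor fox in the snow, soft light",
        }
    ],
}
```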
oobabooga | 56f2a9512f | Revert "Image: Add the LLM-generated prompt to the API result" | 2025-12-04 17:34:27 -08:00
    This reverts commit c7ad28a4cd.
oobabooga | 3ef428efaa | Image: Remove llm_variations from the API | 2025-12-04 17:34:17 -08:00
oobabooga | c7ad28a4cd | Image: Add the LLM-generated prompt to the API result | 2025-12-04 17:22:08 -08:00
oobabooga | b451bac082 | Image: Improve a log message | 2025-12-04 16:33:46 -08:00
oobabooga | 47a0fcd614 | Image: PNG metadata improvements | 2025-12-04 16:25:48 -08:00
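PNG metadata of the kind 47a0fcd614 touches is typically written with Pillow's PngInfo text chunks; the key name and payload below are illustrative, not what the project embeds:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (64, 64))
meta = PngInfo()
meta.add_text("parameters", "prompt: a watercolor fox; seed: 42")
image.save("output.png", pnginfo=meta)  # the text chunk survives in the file
```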
oobabooga | ac31a7c008 | Image: Organize the UI | 2025-12-04 15:45:04 -08:00
oobabooga | a90739f498 | Image: Better LLM variation default prompt | 2025-12-04 10:50:40 -08:00
oobabooga | ffef3c7b1d | Image: Make the LLM Variations prompt configurable | 2025-12-04 10:44:35 -08:00
oobabooga | 5763947c37 | Image: Simplify the API code, add the llm_variations option | 2025-12-04 10:23:00 -08:00
oobabooga | 2793153717 | Image: Add LLM-generated prompt variations | 2025-12-04 08:10:24 -08:00
oobabooga | 7fb9f19bd8 | Progress bar style improvements | 2025-12-04 06:20:45 -08:00