e20b2d38ff  docs: Add VRAM measurements for Z-Image-Turbo  (oobabooga, 2025-12-05 14:12:08 -08:00)
6ca99910ba  Image: Quantize the text encoder for lower VRAM  (oobabooga, 2025-12-05 13:08:46 -08:00)
11937de517  Use flash attention for image generation by default  (oobabooga, 2025-12-05 12:13:24 -08:00)
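To reproduce the same default outside the webui, PyTorch can pin scaled-dot-product attention to its flash kernel. A minimal sketch, assuming PyTorch 2.3+ and a CUDA device with half-precision tensors; this is not the webui's own code path:

```python
# Minimal sketch: force the flash attention SDPA backend for a region of code.
# Assumes PyTorch 2.3+ with CUDA; not taken from the webui's own code path.
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

# Restrict SDPA to the flash kernel; raises if shapes/dtypes don't qualify.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```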
eba8a59466  docs: Improve the image generation tutorial  (oobabooga, 2025-12-05 12:10:41 -08:00)
5848c7884d  Increase the height of the image output gallery  (oobabooga, 2025-12-05 10:24:51 -08:00)
c11c14590a  Image: Better LLM variation default prompt  (oobabooga, 2025-12-05 08:08:11 -08:00)
0dd468245c  Image: Add back the gallery cache (for performance)  (oobabooga, 2025-12-05 07:11:38 -08:00)
b63d57158d  Image: Add TGW as a prefix to output images  (oobabooga, 2025-12-05 05:59:54 -08:00)
afa29b9554  Image: Several fixes  (oobabooga, 2025-12-05 05:58:57 -08:00)
8eac99599a  Image: Better LLM variation default prompt  (oobabooga, 2025-12-04 19:58:06 -08:00)
b4f06a50b0  fix: Pass bos_token and eos_token from metadata to jinja2  (oobabooga, 2025-12-04 19:11:31 -08:00)
            Fixes loading Seed-Instruct-36B
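Chat templates shipped in model metadata are jinja2 strings that may reference `bos_token` and `eos_token`; if those variables aren't passed to the renderer, loading fails for models whose templates use them, as Seed-Instruct-36B's does. A minimal sketch of the idea with a made-up template string; the webui's actual template handling lives elsewhere:

```python
# Minimal sketch: render a chat template that references bos_token/eos_token.
# The template string is made up for illustration; real templates come from
# the model's metadata.
from jinja2 import Template

chat_template = (
    "{{ bos_token }}"
    "{% for m in messages %}{{ m['role'] }}: {{ m['content'] }}\n{% endfor %}"
    "{{ eos_token }}"
)

prompt = Template(chat_template).render(
    messages=[{"role": "user", "content": "Hello!"}],
    bos_token="<s>",   # taken from model metadata in the real code
    eos_token="</s>",
)
print(prompt)
```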
15c6e43597  Image: Add a revised_prompt field to API results for OpenAI compatibility  (oobabooga, 2025-12-04 17:41:09 -08:00)
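In OpenAI's images API, each item in the response's `data` list can carry a `revised_prompt` alongside the image payload. A sketch of what a compatible client might read; the literal below is a stand-in for a parsed JSON response, and the exact webui field set is an assumption:

```python
# Sketch of the OpenAI-compatible response shape with revised_prompt.
# This literal stands in for the parsed JSON of a generations call; the
# precise field set returned by the webui is an assumption here.
result = {
    "created": 1733360469,
    "data": [{"b64_json": "<base64 PNG bytes>", "revised_prompt": "a cat, oil painting"}],
}

for item in result["data"]:
    # revised_prompt is present when the server rewrote the user's prompt
    print(item.get("revised_prompt"))
```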
56f2a9512f  Revert "Image: Add the LLM-generated prompt to the API result"  (oobabooga, 2025-12-04 17:34:27 -08:00)
            This reverts commit c7ad28a4cd.
3ef428efaa  Image: Remove llm_variations from the API  (oobabooga, 2025-12-04 17:34:17 -08:00)
c7ad28a4cd  Image: Add the LLM-generated prompt to the API result  (oobabooga, 2025-12-04 17:22:08 -08:00)
b451bac082  Image: Improve a log message  (oobabooga, 2025-12-04 16:33:46 -08:00)
47a0fcd614  Image: PNG metadata improvements  (oobabooga, 2025-12-04 16:25:48 -08:00)
ac31a7c008  Image: Organize the UI  (oobabooga, 2025-12-04 15:45:04 -08:00)
a90739f498  Image: Better LLM variation default prompt  (oobabooga, 2025-12-04 10:50:40 -08:00)
ffef3c7b1d  Image: Make the LLM Variations prompt configurable  (oobabooga, 2025-12-04 10:44:35 -08:00)
5763947c37  Image: Simplify the API code, add the llm_variations option  (oobabooga, 2025-12-04 10:23:00 -08:00)
2793153717  Image: Add LLM-generated prompt variations  (oobabooga, 2025-12-04 08:10:24 -08:00)
7fb9f19bd8  Progress bar style improvements  (oobabooga, 2025-12-04 06:20:45 -08:00)
a838223d18  Image: Add a progress bar during generation  (oobabooga, 2025-12-04 05:49:57 -08:00)
14dbc3488e  Image: Clear the torch cache after generation, not before  (oobabooga, 2025-12-04 05:32:58 -08:00)
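Clearing the allocator cache after generation returns VRAM once the pipeline is done with its buffers, rather than paying the cost up front while the model still needs them. A minimal sketch of the ordering, assuming a CUDA device; `pipe` stands in for any callable diffusion pipeline and is not webui code:

```python
# Sketch of the cache-clearing order. "pipe" is a stand-in for a callable
# diffusion pipeline; only the gc/torch calls are real APIs.
import gc
import torch

def generate_and_release(pipe, prompt: str):
    image = pipe(prompt)
    # Free cached blocks *after* generation: peak usage during the run is
    # unchanged, but VRAM is handed back to other consumers once it ends.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return image
```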
235b94f097  Image: Add placeholder file for user_data/image_models  (oobabooga, 2025-12-03 18:43:30 -08:00)
c357eed4c7  Image: Remove the flash_attention_3 option (no idea how to get it working)  (oobabooga, 2025-12-03 18:40:34 -08:00)
c93d27add3  Update llama.cpp  (oobabooga, 2025-12-03 18:29:43 -08:00)
fbca54957e  Image generation: Yield partial results for batch count > 1  (oobabooga, 2025-12-03 16:13:07 -08:00)
49c60882bf  Image generation: Safer image uploading  (oobabooga, 2025-12-03 16:07:51 -08:00)
59285d501d  Image generation: Small UI improvements  (oobabooga, 2025-12-03 16:03:31 -08:00)
373baa5c9c  UI: Minor image gallery improvements  (oobabooga, 2025-12-03 14:45:02 -08:00)
906dc54969  Load --image-model before --model  (oobabooga, 2025-12-03 12:15:38 -08:00)
4468c49439  Add semaphore to image generation API endpoint  (oobabooga, 2025-12-03 12:02:47 -08:00)
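A semaphore keeps concurrent API calls from stacking multiple diffusion jobs on one GPU. A minimal sketch of the pattern using asyncio with a hypothetical handler; the webui's actual endpoint wiring is not shown:

```python
# Sketch of the pattern: serialize GPU-bound image jobs behind a semaphore.
# The handler is hypothetical; only asyncio itself is a real API here.
import asyncio

generation_lock = asyncio.Semaphore(1)  # one diffusion job on the GPU at a time

async def handle_image_request(prompt: str):
    async with generation_lock:
        # Run the blocking pipeline in a worker thread so the event loop
        # stays responsive while the GPU works.
        return await asyncio.to_thread(run_pipeline, prompt)

def run_pipeline(prompt: str) -> bytes:
    ...  # stand-in for the actual diffusion call
```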
5ad174fad2  docs: Add an image generation API example  (oobabooga, 2025-12-03 11:58:54 -08:00)
5433ef3333  Add an API endpoint for generating images  (oobabooga, 2025-12-03 11:50:56 -08:00)
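Since the later revised_prompt commit targets OpenAI compatibility, the endpoint is presumably OpenAI-style. A hedged client sketch, assuming a local server on the default port 5000 and an OpenAI-compatible /v1/images/generations route; check the route and fields against the project's own API example before relying on them:

```python
# Hedged client sketch. The URL, route, and request fields assume an
# OpenAI-compatible images endpoint on the default local port; verify
# against the project's documented API example.
import base64
import requests

response = requests.post(
    "http://127.0.0.1:5000/v1/images/generations",
    json={"prompt": "a lighthouse at dusk, oil painting", "n": 1},
    timeout=300,
)
response.raise_for_status()

for i, item in enumerate(response.json()["data"]):
    with open(f"output_{i}.png", "wb") as f:
        f.write(base64.b64decode(item["b64_json"]))
```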
9448bf1caa  Image generation: add torchao quantization (supports torch.compile)  (oobabooga, 2025-12-02 14:22:51 -08:00)
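torchao quantizes weights in place and, unlike the quanto path removed a few commits later, composes with torch.compile. A minimal sketch against a toy nn.Module, assuming a recent torchao release; the webui's integration details are not shown:

```python
# Minimal torchao sketch: int8 weight-only quantization plus torch.compile.
# Assumes a recent torchao release; applied to a toy module, not to the
# webui's actual image pipeline.
import torch
from torchao.quantization import int8_weight_only, quantize_

model = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU()).cuda()
quantize_(model, int8_weight_only())   # swaps Linear weights in place
model = torch.compile(model)           # still compilable after quantization

out = model(torch.randn(1, 512, device="cuda"))
```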
97281ff831  UI: Fix an index error in the new image gallery  (oobabooga, 2025-12-02 11:20:52 -08:00)
9d07d3a229  Make portable builds functional again after b3666e140d  (oobabooga, 2025-12-02 10:06:57 -08:00)
6291e72129  Remove quanto for now (requires messy compilation)  (oobabooga, 2025-12-02 09:57:18 -08:00)
b3666e140d  Add image generation support (#7328)  (oobabooga, 2025-12-02 14:55:38 -03:00)
a83821e941  Revert "UI: Optimize typing in all textareas"  (oobabooga, 2025-12-01 10:34:23 -08:00)
            This reverts commit e24ba92ef2.
24fd963c38  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  (oobabooga, 2025-12-01 08:06:08 -08:00)
e24ba92ef2  UI: Optimize typing in all textareas  (oobabooga, 2025-12-01 08:05:21 -08:00)
661e42d2b7  fix(deps): upgrade coqui-tts to >=0.27.0 for transformers 4.55 compatibility (#7329)  (aidevtime, 2025-11-28 22:59:36 -03:00)
5327bc9397  Update modules/shared.py  (oobabooga, 2025-11-28 22:48:05 -03:00)
            Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
78b315344a  Update exllamav3  (oobabooga, 2025-11-28 06:45:05 -08:00)
3cad0cd4c1  Update llama.cpp  (oobabooga, 2025-11-28 03:52:37 -08:00)
400bb0694b  Add slider for --ubatch-size for llama.cpp loader, change defaults for better MoE performance (#7316)  (GodEmperor785, 2025-11-21 16:56:02 -03:00)
8f0048663d  More modular HTML generator  (oobabooga, 2025-11-21 07:09:16 -08:00)