Commit graph

5096 commits

Author SHA1 Message Date
oobabooga 41618cf799 Merge branch 'dev' into image_generation 2025-12-01 09:35:22 -08:00
oobabooga 24fd963c38 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-12-01 08:06:08 -08:00
oobabooga e24ba92ef2 UI: Optimize typing in all textareas 2025-12-01 08:05:21 -08:00
aidevtime 661e42d2b7 fix(deps): upgrade coqui-tts to >=0.27.0 for transformers 4.55 compatibility (#7329) 2025-11-28 22:59:36 -03:00
oobabooga 5327bc9397 Update modules/shared.py 2025-11-28 22:48:05 -03:00
oobabooga 78b315344a Update exllamav3 2025-11-28 06:45:05 -08:00
oobabooga 3cad0cd4c1 Update llama.cpp 2025-11-28 03:52:37 -08:00
oobabooga cecb172d2c Add the code for 4-bit quantization 2025-11-27 18:29:32 -08:00
oobabooga 742db85de0 Hardcode 8-bit quantization for now 2025-11-27 18:23:36 -08:00
oobabooga 822e74ac97 Lint 2025-11-27 18:15:15 -08:00
oobabooga 30d1f502aa More informative download message 2025-11-27 16:37:03 -08:00
oobabooga 74eedf6050 Remove the CFG slider 2025-11-27 16:28:40 -08:00
oobabooga 9e33c6bfb7 Add missing files 2025-11-27 15:56:58 -08:00
oobabooga 666816a773 Small fixes 2025-11-27 15:48:53 -08:00
oobabooga 21f992e7f7 Organize the UI 2025-11-27 15:42:11 -08:00
oobabooga 148a5d1e44 Keep things more modular 2025-11-27 15:32:01 -08:00
oobabooga 0adda7a5c5 Lint 2025-11-27 14:39:21 -08:00
oobabooga aa074409cb Better events for the dimensions 2025-11-27 14:38:50 -08:00
oobabooga be799ba8eb Lint 2025-11-27 14:25:49 -08:00
oobabooga a873692234 Image generation now functional 2025-11-27 14:24:35 -08:00
oobabooga 2f11b3040d Add functions 2025-11-27 13:53:46 -08:00
oobabooga aa63c612de Progress on model loading 2025-11-27 13:46:54 -08:00
oobabooga 164c6fcdbf Add the UI structure 2025-11-27 13:44:07 -08:00
oobabooga 4ad2ad468e Add basic structure 2025-11-27 10:10:11 -08:00
GodEmperor785 400bb0694b Add slider for --ubatch-size for llama.cpp loader, change defaults for better MoE performance (#7316) 2025-11-21 16:56:02 -03:00
oobabooga 8f0048663d More modular HTML generator 2025-11-21 07:09:16 -08:00
oobabooga b0baf7518b Remove macOS x86-64 portable builds (macos-13 runner deprecated by GitHub) 2025-11-19 06:07:15 -08:00
oobabooga 0d4eff284c Add a --cpu-moe flag for llama.cpp 2025-11-19 05:23:43 -08:00
oobabooga d6f39e1fef Add ROCm portable builds 2025-11-18 16:32:20 -08:00
oobabooga 327a234d23 Add ROCm requirements.txt files 2025-11-18 16:24:56 -08:00
oobabooga 4e4abd0841 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-11-18 14:07:05 -08:00
oobabooga c45f35ccc2 Remove the macOS 13 wheels (deprecated by GitHub) 2025-11-18 14:06:42 -08:00
oobabooga d85b95bb15 Update llama.cpp 2025-11-18 14:06:04 -08:00
dependabot[bot] 4a36b7be5b Bump triton-windows in /requirements/full (#7311) 2025-11-18 18:51:26 -03:00
dependabot[bot] 3d7e9856a2 Update peft requirement from ==0.17.* to ==0.18.* in /requirements/full (#7310) 2025-11-18 18:51:15 -03:00
oobabooga a26e28bdea Update exllamav3 to 0.0.15 2025-11-18 11:24:16 -08:00
oobabooga 6a3bf1de92 Update exllamav3 to 0.0.14 2025-11-09 19:43:53 -08:00
oobabooga e7534a90d8 Update llama.cpp 2025-11-05 18:46:01 -08:00
oobabooga 6be1bfcc87 Remove the CUDA 11.7 portable builds 2025-11-05 05:45:10 -08:00
oobabooga 92d9cd36a6 Update llama.cpp 2025-11-05 05:43:34 -08:00
oobabooga 67f9288891 Pin huggingface-hub to 0.36.0 (solves #7284 and #7289) 2025-11-02 14:01:00 -08:00
oobabooga 16f77b74c4 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-11-01 19:58:53 -07:00
oobabooga cd645f80f8 Update exllamav3 to 0.0.12 2025-11-01 19:58:18 -07:00
Trenten Miller 6871484398 fix: Rename 'evaluation_strategy' to 'eval_strategy' in training 2025-10-28 16:48:04 -03:00
oobabooga 338ae36f73 Add weights_only=True to torch.load in Training_PRO 2025-10-28 12:43:16 -07:00
dependabot[bot] c8cd840b24 Bump flash-linear-attention from 0.3.2 to 0.4.0 in /requirements/full (#7285) 2025-10-28 10:07:03 -03:00
oobabooga f4c9e67155 Update llama.cpp 2025-10-23 08:19:32 -07:00
Immanuel 9a84a828fc Fixed Python requirements for Apple devices with macOS Tahoe (#7273) 2025-10-22 14:59:27 -03:00
reksarka 138cc654c4 Make it possible to run a portable Web UI build via a symlink (#7277) 2025-10-22 14:55:17 -03:00
oobabooga 24fd2b4dec Update exllamav3 to 0.0.11 2025-10-21 07:26:38 -07:00