Commit graph

101 commits

Author SHA1 Message Date
oobabooga 5b06284a8a UI: Keep ExLlamav3_HF selected if already selected for EXL3 models 2025-08-19 06:23:21 -07:00
oobabooga 38c0b4a1ad Default ctx-size to 8192 when not found in the metadata 2025-08-11 07:39:53 -07:00
oobabooga fa9be444fa Use ExLlamav3 instead of ExLlamav3_HF by default for EXL3 models 2025-08-09 07:26:59 -07:00
oobabooga 6e9de75727 Support loading chat templates from chat_template.json files 2025-08-08 19:35:09 -07:00
oobabooga b391ac8eb1 Fix getting the ctx-size for EXL3/EXL2/Transformers models 2025-08-08 18:11:45 -07:00
oobabooga bfbbfc2361 Ignore add_generation_prompt in GPT-OSS 2025-08-05 17:33:01 -07:00
oobabooga 701048cf33 Try to avoid breaking jinja2 parsing for older models 2025-08-05 15:51:24 -07:00
oobabooga 3039aeffeb Fix parsing the gpt-oss-20b template 2025-08-05 11:35:17 -07:00
oobabooga 5989043537 Transformers: Support standalone .jinja chat templates (for GPT-OSS) 2025-08-05 11:22:18 -07:00
oobabooga 6c2bdda0f0 Transformers loader: replace use_flash_attention_2/use_eager_attention with a unified attn_implementation (closes #7107) 2025-07-09 18:39:37 -07:00
oobabooga 02f604479d Remove the pre-jinja2 custom stopping string handling (closes #7094) 2025-06-21 14:03:35 -07:00
oobabooga a1b606a6ac Fix obtaining the maximum number of GPU layers for DeepSeek-R1-0528-GGUF 2025-06-19 12:30:57 -07:00
oobabooga 197b327374 Minor log message change 2025-06-18 13:36:54 -07:00
oobabooga 8e9c0287aa UI: Fix edge case where gpu-layers slider maximum is incorrectly limited 2025-06-14 10:12:11 -07:00
oobabooga f337767f36 Add error handling for non-llama.cpp models in portable mode 2025-06-12 22:17:39 -07:00
Miriam 1443612e72 check .attention.head_count if .attention.head_count_kv doesn't exist (#7048) 2025-06-09 23:22:01 -03:00
oobabooga bae1aa34aa Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) 2025-05-25 17:19:26 -07:00
oobabooga 5d00574a56 Minor UI fixes 2025-05-20 16:20:49 -07:00
oobabooga 9ec46b8c44 Remove the HQQ loader (HQQ models can be loaded through Transformers) 2025-05-19 09:23:24 -07:00
oobabooga 61276f6a37 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-05-17 07:22:51 -07:00
oobabooga 4800d1d522 More robust VRAM calculation 2025-05-17 07:20:38 -07:00
mamei16 052c82b664 Fix KeyError: 'gpu_layers' when loading existing model settings (#6991) 2025-05-17 11:19:13 -03:00
oobabooga 0f77ff9670 UI: Use total VRAM (not free) for layers calculation when a model is loaded 2025-05-16 19:19:22 -07:00
oobabooga 71fa046c17 Minor changes after 1c549d176b 2025-05-16 17:38:08 -07:00
oobabooga d99fb0a22a Add backward compatibility with saved n_gpu_layers values 2025-05-16 17:29:18 -07:00
oobabooga 1c549d176b Fix GPU layers slider: honor saved settings and show true maximum 2025-05-16 17:26:13 -07:00
oobabooga 38c50087fe Prevent a crash on systems without an NVIDIA GPU 2025-05-16 11:55:30 -07:00
oobabooga 253e85a519 Only compute VRAM/GPU layers for llama.cpp models 2025-05-16 10:02:30 -07:00
oobabooga ee7b3028ac Always cache GGUF metadata calls 2025-05-16 09:12:36 -07:00
oobabooga 4925c307cf Auto-adjust GPU layers on context size and cache type changes + many fixes 2025-05-16 09:07:38 -07:00
oobabooga 5534d01da0 Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) 2025-05-16 00:07:37 -03:00
oobabooga 3fa1a899ae UI: Fix gpu-layers being ignored (closes #6973) 2025-05-13 12:07:59 -07:00
oobabooga d9de14d1f7 Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
oobabooga d4b1e31c49 Use --ctx-size to specify the context size for all loaders (old flags are still recognized as alternatives) 2025-04-25 16:59:03 -07:00
oobabooga 78aeabca89 Fix the transformers loader 2025-04-21 18:33:14 -07:00
oobabooga ae02ffc605 Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
oobabooga ae54d8faaa New llama.cpp loader (#6846) 2025-04-18 09:59:37 -03:00
Googolplexed d78abe480b Allow for model subfolder organization for GGUF files (#6686) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) 2025-04-18 02:53:59 -03:00
oobabooga 2c2d453c8c Revert "Use ExLlamaV2 (instead of the HF one) for EXL2 models for now" (reverts commit 0ef1b8f8b4) 2025-04-17 21:31:32 -07:00
oobabooga 0ef1b8f8b4 Use ExLlamaV2 (instead of the HF one) for EXL2 models for now (it doesn't seem to have the "OverflowError" bug) 2025-04-17 05:47:40 -07:00
oobabooga 682c78ea42 Add back detection of GPTQ models (closes #6841) 2025-04-11 21:00:42 -07:00
oobabooga 8b8d39ec4e Add ExLlamaV3 support (#6832) 2025-04-09 00:07:08 -03:00
oobabooga a5855c345c Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835) 2025-04-07 21:42:33 -03:00
oobabooga 7157257c3f Remove the AutoGPTQ loader (#6641) 2025-01-08 19:28:56 -03:00
oobabooga e6181e834a Remove AutoAWQ as a standalone loader (it works better through transformers) 2024-07-23 15:31:17 -07:00
oobabooga 907137a13d Automatically set bf16 & use_eager_attention for Gemma-2 2024-07-01 21:46:35 -07:00
mefich a85749dcbe Update models_settings.py: add default alpha_value, add proper compress_pos_emb for newer GGUFs (#6111) 2024-06-26 22:17:56 -03:00
oobabooga 577a8cd3ee Add TensorRT-LLM support (#5715) 2024-06-24 02:30:03 -03:00
Forkoz 1576227f16 Fix GGUFs with no BOS token present, mainly qwen2 models (#6119) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) 2024-06-14 13:51:01 -03:00
oobabooga 2d196ed2fe Remove obsolete pre_layer parameter 2024-06-12 18:56:44 -07:00