| Name | Last commit message | Last commit date |
| --- | --- | --- |
| grammar | Handle both int and str types in grammar char processing | 2025-07-23 11:52:51 -07:00 |
| callbacks.py | fix: replace bare except clauses with except Exception (#7400) | 2026-03-04 18:06:17 -03:00 |
| chat.py | fix: replace bare except clauses with except Exception (#7400) | 2026-03-04 18:06:17 -03:00 |
| evaluate.py | fix: replace bare except clauses with except Exception (#7400) | 2026-03-04 18:06:17 -03:00 |
| exllamav2.py | Lint | 2025-08-12 13:37:37 -07:00 |
| exllamav2_hf.py | Make exllamav3_hf and exllamav2_hf functional again | 2025-09-17 12:29:22 -07:00 |
| exllamav3.py | ExLlamaV3: Attach AdaptiveP, fix speculative decoding parameter, add seed | 2026-03-04 10:51:15 -08:00 |
| exllamav3_hf.py | ExLlamav3_HF: Optimize prefill and fix CFG cache initialization | 2026-03-04 11:09:58 -08:00 |
| extensions.py | Better log message when extension requirements are not found | 2025-07-06 17:44:41 -07:00 |
| html_generator.py | Improve process_markdown_content (#7403) | 2026-03-04 17:26:13 -03:00 |
| image_models.py | Image: Quantize the text encoder for lower VRAM | 2025-12-05 13:08:46 -08:00 |
| image_utils.py | Image generation: Safer image uploading | 2025-12-03 16:07:51 -08:00 |
| llama_cpp_server.py | llama.cpp: allow ctx_size=0 for auto context via --fit | 2026-03-04 19:33:20 -08:00 |
| loaders.py | llama.cpp: Reorganize speculative decoding UI and use recommended ngram-mod defaults | 2026-03-04 12:05:08 -08:00 |
| logging_colors.py | Lint | 2023-12-19 21:36:57 -08:00 |
| logits.py | fix: replace bare except clauses with except Exception (#7400) | 2026-03-04 18:06:17 -03:00 |
| LoRA.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| metadata_gguf.py | fix: replace bare except clauses with except Exception (#7400) | 2026-03-04 18:06:17 -03:00 |
| models.py | llama.cpp: allow ctx_size=0 for auto context via --fit | 2026-03-04 19:33:20 -08:00 |
| models_settings.py | llama.cpp: allow ctx_size=0 for auto context via --fit | 2026-03-04 19:33:20 -08:00 |
| presets.py | Add adaptive-p sampler and n-gram speculative decoding support | 2026-03-04 09:41:29 -08:00 |
| prompts.py | fix: replace bare except clauses with except Exception (#7400) | 2026-03-04 18:06:17 -03:00 |
| sampler_hijack.py | Add adaptive-p sampler and n-gram speculative decoding support | 2026-03-04 09:41:29 -08:00 |
| sane_markdown_lists.py | Disable uncommonly used indented codeblocks (#7401) | 2026-03-04 17:51:00 -03:00 |
| shared.py | Update TensorRT-LLM to v1.1.0 | 2026-03-05 09:32:28 -03:00 |
| tensorrt_llm.py | Update TensorRT-LLM to v1.1.0 | 2026-03-05 09:32:28 -03:00 |
| text_generation.py | Remove obsolete DeepSpeed inference code (2023 relic) | 2026-03-04 17:20:34 -08:00 |
| torch_utils.py | Remove obsolete DeepSpeed inference code (2023 relic) | 2026-03-04 17:20:34 -08:00 |
| training.py | Overhaul LoRA training tab | 2026-03-05 10:52:59 -03:00 |
| transformers_loader.py | Remove obsolete DeepSpeed inference code (2023 relic) | 2026-03-04 17:20:34 -08:00 |
| ui.py | Add adaptive-p sampler and n-gram speculative decoding support | 2026-03-04 09:41:29 -08:00 |
| ui_chat.py | Revert "UI: Remove unnecessary server round-trips from button click chains" | 2026-03-04 18:41:30 -08:00 |
| ui_default.py | Fix the UI failing to launch if the Notebook prompt is too long | 2025-08-30 08:42:26 -07:00 |
| ui_file_saving.py | feat: Add a dropdown to save/load user personas (#7367) | 2026-01-14 20:35:08 -03:00 |
| ui_image_generation.py | Revert "Clear the torch cache between sequential image generations" | 2025-12-07 12:23:19 -08:00 |
| ui_model_menu.py | llama.cpp: allow ctx_size=0 for auto context via --fit | 2026-03-04 19:33:20 -08:00 |
| ui_notebook.py | Fix the UI failing to launch if the Notebook prompt is too long | 2025-08-30 08:42:26 -07:00 |
| ui_parameters.py | llama.cpp: allow ctx_size=0 for auto context via --fit | 2026-03-04 19:33:20 -08:00 |
| ui_session.py | Rename a button in the Session tab for clarity | 2025-07-07 11:28:47 -07:00 |
| utils.py | Fix blank prompt dropdown in Notebook/Default tabs on first startup | 2026-03-04 19:07:55 -08:00 |
| web_search.py | Fix web search (attempt) | 2025-08-14 12:05:14 -07:00 |