# text-generation-webui/modules
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| grammar | | |
| block_requests.py | Fix the Google Colab notebook | 2025-01-16 05:21:18 -08:00 |
| callbacks.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| chat.py | Revert "UI: remove the streaming cursor" | 2025-04-09 16:03:14 -07:00 |
| deepspeed_parameters.py | | |
| evaluate.py | Fix an import | 2025-04-20 17:51:28 -07:00 |
| exllamav2.py | Lint | 2025-04-22 08:03:25 -07:00 |
| exllamav2_hf.py | Lint | 2025-04-22 08:04:02 -07:00 |
| exllamav3_hf.py | Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 | 2025-04-20 11:32:48 -07:00 |
| extensions.py | | |
| github.py | Fix several typos in the codebase (#6151) | 2024-06-22 21:40:25 -03:00 |
| gradio_hijack.py | | |
| html_generator.py | UI: smoother chat streaming | 2025-04-09 16:02:37 -07:00 |
| llama_cpp_server.py | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00 |
| loaders.py | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00 |
| logging_colors.py | | |
| logits.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| LoRA.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| metadata_gguf.py | | |
| models.py | Small change to the unload_model() function | 2025-04-20 20:00:56 -07:00 |
| models_settings.py | Fix the transformers loader | 2025-04-21 18:33:14 -07:00 |
| one_click_installer_check.py | | |
| presets.py | Add the top N-sigma sampler (#6796) | 2025-03-14 16:45:11 -03:00 |
| prompts.py | | |
| relative_imports.py | | |
| sampler_hijack.py | Fix the exllamav2_HF and exllamav3_HF loaders | 2025-04-21 18:32:23 -07:00 |
| sane_markdown_lists.py | Sane handling of markdown lists (#6626) | 2025-01-04 15:41:31 -03:00 |
| shared.py | Handle CMD_FLAGS.txt in the main code (closes #6896) | 2025-04-24 08:21:06 -07:00 |
| tensorrt_llm.py | Add TensorRT-LLM support (#5715) | 2024-06-24 02:30:03 -03:00 |
| text_generation.py | llama.cpp: set the random seed manually | 2025-04-20 19:08:44 -07:00 |
| torch_utils.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| training.py | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00 |
| transformers_loader.py | Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 | 2025-04-20 11:32:48 -07:00 |
| ui.py | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00 |
| ui_chat.py | Make 'instruct' the default chat mode | 2025-04-24 07:08:49 -07:00 |
| ui_default.py | Lint | 2024-12-17 20:13:32 -08:00 |
| ui_file_saving.py | Fix the "save preset" event | 2024-10-01 11:20:48 -07:00 |
| ui_model_menu.py | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00 |
| ui_notebook.py | Lint | 2024-12-17 20:13:32 -08:00 |
| ui_parameters.py | Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835) | 2025-04-07 21:42:33 -03:00 |
| ui_session.py | Fix a bug after c6901aba9f | 2025-04-18 06:51:28 -07:00 |
| utils.py | UI: show only part 00001 of multipart GGUF models in the model menu | 2025-04-22 19:56:42 -07:00 |