text-generation-webui/modules
File                          Latest commit                                                       Date
AutoGPTQ_loader.py            Fix is_ccl_available & is_xpu_available imports                     2023-10-26 20:27:04 -07:00
block_requests.py
callbacks.py                  Intel Gpu support initialization (#4340)                            2023-10-26 23:39:51 -03:00
chat.py                       Separate context and system message in instruction formats (#4499)  2023-11-07 20:02:58 -03:00
ctransformers_model.py
deepspeed_parameters.py
evaluate.py
exllama.py
exllama_hf.py
exllamav2.py                  Add cache_8bit option                                               2023-11-02 11:23:04 -07:00
exllamav2_hf.py               Add cache_8bit option                                               2023-11-02 11:23:04 -07:00
extensions.py
github.py
GPTQ_loader.py                make torch.load a bit safer (#4448)                                 2023-11-02 14:07:08 -03:00
grammar.py
html_generator.py
llama_attn_hijack.py
llamacpp_hf.py                Disable logits_all in llamacpp_HF (makes processing 3x faster)      2023-11-07 14:35:48 -08:00
llamacpp_model.py             Add types to the encode/decode/token-count endpoints                2023-11-07 19:32:14 -08:00
loaders.py                    Disable logits_all in llamacpp_HF (makes processing 3x faster)      2023-11-07 14:35:48 -08:00
logging_colors.py
logits.py                     Intel Gpu support initialization (#4340)                            2023-10-26 23:39:51 -03:00
LoRA.py                       Intel Gpu support initialization (#4340)                            2023-10-26 23:39:51 -03:00
metadata_gguf.py
models.py                     Add /v1/internal/model/load endpoint (tentative)                    2023-11-07 20:58:06 -08:00
models_settings.py
monkey_patch_gptq_lora.py
one_click_installer_check.py
presets.py                    Make OpenAI API the default API (#4430)                             2023-11-06 02:38:29 -03:00
prompts.py
relative_imports.py
RoPE.py
RWKV.py                       Intel Gpu support initialization (#4340)                            2023-10-26 23:39:51 -03:00
sampler_hijack.py             Add temperature_last parameter (#4472)                              2023-11-04 13:09:07 -03:00
shared.py                     Separate context and system message in instruction formats (#4499)  2023-11-07 20:02:58 -03:00
text_generation.py            Add types to the encode/decode/token-count endpoints                2023-11-07 19:32:14 -08:00
training.py                   make torch.load a bit safer (#4448)                                 2023-11-02 14:07:08 -03:00
ui.py                         Separate context and system message in instruction formats (#4499)  2023-11-07 20:02:58 -03:00
ui_chat.py                    Document the new "Custom system message" field                      2023-11-08 17:54:10 -08:00
ui_default.py
ui_file_saving.py
ui_model_menu.py              Disable logits_all in llamacpp_HF (makes processing 3x faster)      2023-11-07 14:35:48 -08:00
ui_notebook.py
ui_parameters.py              Add temperature_last parameter (#4472)                              2023-11-04 13:09:07 -03:00
ui_session.py
utils.py                      Refactor the /v1/models endpoint                                    2023-11-07 19:59:27 -08:00