Commit graph

1850 commits

Author SHA1 Message Date
oobabooga 077bbc6b10
Add web search support (#7023) 2025-05-28 04:27:28 -03:00
oobabooga 1b0e2d8750 UI: Add a token counter to the chat tab (counts input + history) 2025-05-27 22:36:24 -07:00
oobabooga f6ca0ee072 Fix regenerate sometimes not creating a new message version 2025-05-27 21:20:51 -07:00
Underscore 5028480eba
UI: Add footer buttons for editing messages (#7019)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2025-05-28 00:55:27 -03:00
Underscore 355b5f6c8b
UI: Add message version navigation (#6947)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2025-05-27 22:54:18 -03:00
Underscore 8531100109
Fix textbox text usage in methods (#7009) 2025-05-26 22:40:09 -03:00
oobabooga bae1aa34aa Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) 2025-05-25 17:19:26 -07:00
oobabooga 8620d6ffe7 Make it possible to upload multiple text files/pdfs at once 2025-05-20 21:34:07 -07:00
oobabooga cc8a4fdcb1 Minor improvement to attachments prompt format 2025-05-20 21:31:18 -07:00
oobabooga 409a48d6bd
Add attachments support (text files, PDF documents) (#7005) 2025-05-21 00:36:20 -03:00
oobabooga 5d00574a56 Minor UI fixes 2025-05-20 16:20:49 -07:00
oobabooga 616ea6966d
Store previous reply versions on regenerate (#7004) 2025-05-20 12:51:28 -03:00
Daniel Dengler c25a381540
Add a "Branch here" footer button to chat messages (#6967) 2025-05-20 11:07:40 -03:00
oobabooga 8e10f9894a
Add a metadata field to the chat history & add date/time to chat messages (#7003) 2025-05-20 10:48:46 -03:00
oobabooga 9ec46b8c44 Remove the HQQ loader (HQQ models can be loaded through Transformers) 2025-05-19 09:23:24 -07:00
oobabooga 126b3a768f Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now)
This reverts commit 8137eb8ef4.
2025-05-18 12:38:36 -07:00
oobabooga 2faaf18f1f Add back the "Common values" to the ctx-size slider 2025-05-18 09:06:20 -07:00
oobabooga f1ec6c8662 Minor label changes 2025-05-18 09:04:51 -07:00
oobabooga 61276f6a37 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-05-17 07:22:51 -07:00
oobabooga 4800d1d522 More robust VRAM calculation 2025-05-17 07:20:38 -07:00
mamei16 052c82b664
Fix KeyError: 'gpu_layers' when loading existing model settings (#6991) 2025-05-17 11:19:13 -03:00
oobabooga 0f77ff9670 UI: Use total VRAM (not free) for layers calculation when a model is loaded 2025-05-16 19:19:22 -07:00
oobabooga c0e295dd1d Remove the 'None' option from the model menu 2025-05-16 17:53:20 -07:00
oobabooga e3bba510d4 UI: Only add a blank space to streaming messages in instruct mode 2025-05-16 17:49:17 -07:00
oobabooga 71fa046c17 Minor changes after 1c549d176b 2025-05-16 17:38:08 -07:00
oobabooga d99fb0a22a Add backward compatibility with saved n_gpu_layers values 2025-05-16 17:29:18 -07:00
oobabooga 1c549d176b Fix GPU layers slider: honor saved settings and show true maximum 2025-05-16 17:26:13 -07:00
oobabooga e4d3f4449d API: Fix a regression 2025-05-16 13:02:27 -07:00
oobabooga adb975a380 Prevent fractional gpu-layers in the UI 2025-05-16 12:52:43 -07:00
oobabooga fc483650b5 Set the maximum gpu_layers value automatically when the model is loaded with --model 2025-05-16 11:58:17 -07:00
oobabooga 38c50087fe Prevent a crash on systems without an NVIDIA GPU 2025-05-16 11:55:30 -07:00
oobabooga 253e85a519 Only compute VRAM/GPU layers for llama.cpp models 2025-05-16 10:02:30 -07:00
oobabooga 9ec9b1bf83 Auto-adjust GPU layers after model unload to utilize freed VRAM 2025-05-16 09:56:23 -07:00
oobabooga ee7b3028ac Always cache GGUF metadata calls 2025-05-16 09:12:36 -07:00
oobabooga 4925c307cf Auto-adjust GPU layers on context size and cache type changes + many fixes 2025-05-16 09:07:38 -07:00
oobabooga 93e1850a2c Only show the VRAM info for llama.cpp 2025-05-15 21:42:15 -07:00
oobabooga cbf4daf1c8 Hide the LoRA menu in portable mode 2025-05-15 21:21:54 -07:00
oobabooga fd61297933 Lint 2025-05-15 21:19:19 -07:00
oobabooga 5534d01da0
Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) 2025-05-16 00:07:37 -03:00
oobabooga c4a715fd1e UI: Move the LoRA menu under "Other options" 2025-05-13 20:14:09 -07:00
oobabooga 035cd3e2a9 UI: Hide the extension install menu in portable builds 2025-05-13 20:09:22 -07:00
oobabooga 2826c60044 Use logger for "Output generated in ..." messages 2025-05-13 14:45:46 -07:00
oobabooga 3fa1a899ae UI: Fix gpu-layers being ignored (closes #6973) 2025-05-13 12:07:59 -07:00
oobabooga 62c774bf24 Revert "New attempt"
This reverts commit e7ac06c169.
2025-05-13 06:42:25 -07:00
oobabooga e7ac06c169 New attempt 2025-05-10 19:20:04 -07:00
oobabooga 47d4758509 Fix #6970 2025-05-10 17:46:00 -07:00
oobabooga 4920981b14 UI: Remove the typing cursor 2025-05-09 20:35:38 -07:00
oobabooga 8984e95c67 UI: More friendly message when no model is loaded 2025-05-09 07:21:05 -07:00
oobabooga 512bc2d0e0 UI: Update some labels 2025-05-08 23:43:55 -07:00
oobabooga f8ef6e09af UI: Make ctx-size a slider 2025-05-08 18:19:04 -07:00
oobabooga 9ea2a69210 llama.cpp: Add --no-webui to the llama-server command 2025-05-08 10:41:25 -07:00
oobabooga 1c7209a725 Save the chat history periodically during streaming 2025-05-08 09:46:43 -07:00
Jonas fa960496d5
Tools support for OpenAI compatible API (#6827) 2025-05-08 12:30:27 -03:00
oobabooga a2ab42d390 UI: Remove the exllamav2 info message 2025-05-08 08:00:38 -07:00
oobabooga 348d4860c2 UI: Create a "Main options" section in the Model tab 2025-05-08 07:58:59 -07:00
oobabooga d2bae7694c UI: Change the ctx-size description 2025-05-08 07:26:23 -07:00
oobabooga b28fa86db6 Default --gpu-layers to 256 2025-05-06 17:51:55 -07:00
Downtown-Case 5ef564a22e
Fix model config loading in shared.py for Python 3.13 (#6961) 2025-05-06 17:03:33 -03:00
oobabooga c4f36db0d8 llama.cpp: remove tfs (it doesn't get used) 2025-05-06 08:41:13 -07:00
oobabooga 05115e42ee Set top_n_sigma before temperature by default 2025-05-06 08:27:21 -07:00
oobabooga 1927afe894 Fix top_n_sigma not showing for llama.cpp 2025-05-06 08:18:49 -07:00
oobabooga d1c0154d66 llama.cpp: Add top_n_sigma, fix typical_p in sampler priority 2025-05-06 06:38:39 -07:00
mamei16 8137eb8ef4
Dynamic Chat Message UI Update Speed (#6952) 2025-05-05 18:05:23 -03:00
oobabooga 475e012ee8 UI: Improve the light theme colors 2025-05-05 06:16:29 -07:00
oobabooga b817bb33fd Minor fix after df7bb0db1f 2025-05-05 05:00:20 -07:00
oobabooga f3da45f65d ExLlamaV3_HF: Change max_chunk_size to 256 2025-05-04 20:37:15 -07:00
oobabooga df7bb0db1f Rename --n-gpu-layers to --gpu-layers 2025-05-04 20:03:55 -07:00
oobabooga d0211afb3c Save the chat history right after sending a message 2025-05-04 18:52:01 -07:00
oobabooga 690d693913 UI: Add padding to only show the last message/reply after sending a message
To avoid scrolling
2025-05-04 18:13:29 -07:00
oobabooga 7853fb1c8d
Optimize the Chat tab (#6948) 2025-05-04 18:58:37 -03:00
oobabooga b7a5c7db8d llama.cpp: Handle short arguments in --extra-flags 2025-05-04 07:14:42 -07:00
oobabooga 4c2e3b168b llama.cpp: Add a retry mechanism when getting the logits (sometimes it fails) 2025-05-03 06:51:20 -07:00
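The retry mechanism for the occasionally failing logits call could be sketched as a small wrapper like the one below. This is an assumption about the shape of the fix, not the actual implementation; the function name and parameters are illustrative.

```python
import time


def with_retries(fn, attempts=3, delay=0.5):
    """Call fn(), retrying on transient failures.

    The llama-server logits endpoint sometimes fails sporadically, so a
    few retries with a short pause usually suffice. The last exception
    is re-raised if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```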
oobabooga ea60f14674 UI: Show the list of files if the user tries to download a GGUF repository 2025-05-03 06:06:50 -07:00
oobabooga b71ef50e9d UI: Add a min-height to prevent constant scrolling during chat streaming 2025-05-02 23:45:58 -07:00
oobabooga d08acb4af9 UI: Rename enable_thinking -> Enable thinking 2025-05-02 20:50:52 -07:00
oobabooga 4cea720da8 UI: Remove the "Autoload the model" feature 2025-05-02 16:38:28 -07:00
oobabooga 905afced1c Add a --portable flag to hide things in portable mode 2025-05-02 16:34:29 -07:00
oobabooga 3f26b0408b Fix after 9e3867dc83 2025-05-02 16:17:22 -07:00
oobabooga 9e3867dc83 llama.cpp: Fix manual random seeds 2025-05-02 09:36:15 -07:00
oobabooga b950a0c6db Lint 2025-04-30 20:02:10 -07:00
oobabooga 307d13b540 UI: Minor label change 2025-04-30 18:58:14 -07:00
oobabooga 55283bb8f1 Fix CFG with ExLlamaV2_HF (closes #6937) 2025-04-30 18:43:45 -07:00
oobabooga a6c3ec2299 llama.cpp: Explicitly send cache_prompt = True 2025-04-30 15:24:07 -07:00
oobabooga 195a45c6e1 UI: Make thinking blocks closed by default 2025-04-30 15:12:46 -07:00
oobabooga cd5c32dc19 UI: Fix max_updates_second not working 2025-04-30 14:54:05 -07:00
oobabooga b46ca01340 UI: Set max_updates_second to 12 by default
When the tokens/second are at ~50 and the model is a thinking model,
the markdown rendering for the streaming message becomes a CPU
bottleneck.
2025-04-30 14:53:15 -07:00
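The rationale in the commit body (markdown re-rendering becoming a CPU bottleneck at high tokens/second) can be illustrated with a throttling sketch: emit the accumulated text at most `max_updates_second` times per second, always including the final state. This is a hedged illustration, not the UI's actual streaming code; the function and its injectable clock are hypothetical.

```python
import time


def throttle_updates(token_stream, max_updates_second=12, clock=time.monotonic):
    """Yield accumulated text at most `max_updates_second` times per second.

    Re-rendering markdown on every token is wasteful when tokens arrive
    faster than the renderer can keep up, so intermediate states are
    skipped. The final accumulated text is always yielded at the end.
    """
    min_interval = 1.0 / max_updates_second
    last_emit = -float("inf")
    accumulated = ""
    for token in token_stream:
        accumulated += token
        now = clock()
        if now - last_emit >= min_interval:
            last_emit = now
            yield accumulated
    yield accumulated  # never drop the final state
```

With 100 tokens arriving at ~100 tokens/second, the renderer sees only about a dozen updates instead of 100.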
oobabooga 771d3d8ed6 Fix getting the llama.cpp logprobs for Qwen3-30B-A3B 2025-04-30 06:48:32 -07:00
oobabooga 1dd4aedbe1 Fix the streaming_llm UI checkbox not being interactive 2025-04-29 05:28:46 -07:00
oobabooga d10bded7f8 UI: Add an enable_thinking option to enable/disable Qwen3 thinking 2025-04-28 22:37:01 -07:00
oobabooga 1ee0acc852 llama.cpp: Make --verbose print the llama-server command 2025-04-28 15:56:25 -07:00
oobabooga 15a29e99f8 Lint 2025-04-27 21:41:34 -07:00
oobabooga be13f5199b UI: Add an info message about how to use Speculative Decoding 2025-04-27 21:40:38 -07:00
oobabooga c6c2855c80 llama.cpp: Remove the timeout while loading models (closes #6907) 2025-04-27 21:22:21 -07:00
oobabooga ee0592473c Fix ExLlamaV3_HF leaking memory (attempt) 2025-04-27 21:04:02 -07:00
oobabooga 70952553c7 Lint 2025-04-26 19:29:08 -07:00
oobabooga 7b80acd524 Fix parsing --extra-flags 2025-04-26 18:40:03 -07:00
oobabooga 943451284f Fix the Notebook tab not loading its default prompt 2025-04-26 18:25:06 -07:00
oobabooga 511eb6aa94 Fix saving settings to settings.yaml 2025-04-26 18:20:00 -07:00
oobabooga 8b83e6f843 Prevent Gradio from saying 'Thank you for being a Gradio user!' 2025-04-26 18:14:57 -07:00
oobabooga 4a32e1f80c UI: show draft_max for ExLlamaV2 2025-04-26 18:01:44 -07:00
oobabooga 0fe3b033d0 Fix parsing of --n_ctx and --max_seq_len (2nd attempt) 2025-04-26 17:52:21 -07:00
oobabooga c4afc0421d Fix parsing of --n_ctx and --max_seq_len 2025-04-26 17:43:53 -07:00
oobabooga 234aba1c50 llama.cpp: Simplify the prompt processing progress indicator
The progress bar was unreliable
2025-04-26 17:33:47 -07:00
oobabooga 4ff91b6588 Better default settings for Speculative Decoding 2025-04-26 17:24:40 -07:00
oobabooga bc55feaf3e Improve host header validation in local mode 2025-04-26 15:42:17 -07:00
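Host header validation in local mode guards a localhost-only server against DNS-rebinding attacks. A minimal sketch of the idea, assuming a hypothetical helper (the real commit's allow-list and naming may differ):

```python
def is_trusted_host(host_header, allowed=("localhost", "127.0.0.1")):
    """Return True if the request's Host header points at a local address.

    Rejecting foreign hostnames prevents a malicious web page from
    rebinding its DNS name to 127.0.0.1 and driving the local API.
    """
    hostname = host_header.split(":")[0].lower()  # strip any :port suffix
    return hostname in allowed
```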
oobabooga 3a207e7a57 Improve the --help formatting a bit 2025-04-26 07:31:04 -07:00
oobabooga 6acb0e1bee Change a UI description 2025-04-26 05:13:08 -07:00
oobabooga cbd4d967cc Update a --help message 2025-04-26 05:09:52 -07:00
oobabooga 763a7011c0 Remove an ancient/obsolete migration check 2025-04-26 04:59:05 -07:00
oobabooga d9de14d1f7
Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
oobabooga d4017fbb6d
ExLlamaV3: Add kv cache quantization (#6903) 2025-04-25 21:32:00 -03:00
oobabooga d4b1e31c49 Use --ctx-size to specify the context size for all loaders
Old flags are still recognized as alternatives.
2025-04-25 16:59:03 -07:00
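Keeping the old loader-specific flags as recognized alternatives to a single canonical `--ctx-size` maps naturally onto argparse option-string aliases. A sketch under that assumption (the default value shown is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# --ctx-size is the canonical flag; the old per-loader names remain
# accepted as aliases and all write to the same destination.
parser.add_argument(
    "--ctx-size", "--n_ctx", "--max_seq_len",
    type=int, default=8192, dest="ctx_size",
    help="Context size for the loaded model.",
)
```

Any of the three spellings sets `args.ctx_size`.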
oobabooga faababc4ea llama.cpp: Add a prompt processing progress bar 2025-04-25 16:42:30 -07:00
oobabooga 877cf44c08 llama.cpp: Add StreamingLLM (--streaming-llm) 2025-04-25 16:21:41 -07:00
oobabooga d35818f4e1
UI: Add a collapsible thinking block to messages with <think> steps (#6902) 2025-04-25 18:02:02 -03:00
oobabooga 98f4c694b9 llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server 2025-04-25 07:32:51 -07:00
oobabooga 5861013e68 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-04-24 20:36:20 -07:00
oobabooga a90df27ff5 UI: Add a greeting when the chat history is empty 2025-04-24 20:33:40 -07:00
oobabooga ae1fe87365
ExLlamaV2: Add speculative decoding (#6899) 2025-04-25 00:11:04 -03:00
Matthew Jenkins 8f2493cc60
Prevent llamacpp defaults from locking up consumer hardware (#6870) 2025-04-24 23:38:57 -03:00
oobabooga 93fd4ad25d llama.cpp: Document the --device-draft syntax 2025-04-24 09:20:11 -07:00
oobabooga f1b64df8dd EXL2: add another torch.cuda.synchronize() call to prevent errors 2025-04-24 09:03:49 -07:00
oobabooga c71a2af5ab Handle CMD_FLAGS.txt in the main code (closes #6896) 2025-04-24 08:21:06 -07:00
oobabooga bfbde73409 Make 'instruct' the default chat mode 2025-04-24 07:08:49 -07:00
oobabooga e99c20bcb0
llama.cpp: Add speculative decoding (#6891) 2025-04-23 20:10:16 -03:00
oobabooga 9424ba17c8 UI: show only part 00001 of multipart GGUF models in the model menu 2025-04-22 19:56:42 -07:00
oobabooga 25cf3600aa Lint 2025-04-22 08:04:02 -07:00
oobabooga 39cbb5fee0 Lint 2025-04-22 08:03:25 -07:00
oobabooga 008c6dd682 Lint 2025-04-22 08:02:37 -07:00
oobabooga 78aeabca89 Fix the transformers loader 2025-04-21 18:33:14 -07:00
oobabooga 8320190184 Fix the exllamav2_HF and exllamav3_HF loaders 2025-04-21 18:32:23 -07:00
oobabooga 15989c2ed8 Make llama.cpp the default loader 2025-04-21 16:36:35 -07:00
oobabooga 86c3ed3218 Small change to the unload_model() function 2025-04-20 20:00:56 -07:00
oobabooga fe8e80e04a Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2025-04-20 19:09:27 -07:00
oobabooga ff1c00bdd9 llama.cpp: set the random seed manually 2025-04-20 19:08:44 -07:00
Matthew Jenkins d3e7c655e5
Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) 2025-04-20 23:06:24 -03:00
oobabooga e243424ba1 Fix an import 2025-04-20 17:51:28 -07:00
oobabooga 8cfd7f976b Revert "Remove the old --model-menu flag"
This reverts commit 109de34e3b.
2025-04-20 13:35:42 -07:00
oobabooga b3bf7a885d Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 2025-04-20 11:32:48 -07:00
oobabooga ae02ffc605
Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
oobabooga 6ba0164c70 Lint 2025-04-19 17:45:21 -07:00
oobabooga 5ab069786b llama.cpp: add back the two encode calls (they are harmless now) 2025-04-19 17:38:36 -07:00
oobabooga b9da5c7e3a Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows 2025-04-19 17:36:04 -07:00
oobabooga 9c9df2063f llama.cpp: fix unicode decoding (closes #6856) 2025-04-19 16:38:15 -07:00
oobabooga ba976d1390 llama.cpp: avoid two 'encode' calls 2025-04-19 16:35:01 -07:00
oobabooga ed42154c78 Revert "llama.cpp: close the connection immediately on 'Stop'"
This reverts commit 5fdebc554b.
2025-04-19 05:32:36 -07:00
oobabooga 5fdebc554b llama.cpp: close the connection immediately on 'Stop' 2025-04-19 04:59:24 -07:00
oobabooga 6589ebeca8 Revert "llama.cpp: new optimization attempt"
This reverts commit e2e73ed22f.
2025-04-18 21:16:21 -07:00
oobabooga e2e73ed22f llama.cpp: new optimization attempt 2025-04-18 21:05:08 -07:00
oobabooga e2e90af6cd llama.cpp: don't include --rope-freq-base in the launch command if null 2025-04-18 20:51:18 -07:00
oobabooga 9f07a1f5d7 llama.cpp: new attempt at optimizing the llama-server connection 2025-04-18 19:30:53 -07:00
oobabooga f727b4a2cc llama.cpp: close the connection properly when generation is cancelled 2025-04-18 19:01:39 -07:00
oobabooga b3342b8dd8 llama.cpp: optimize the llama-server connection 2025-04-18 18:46:36 -07:00
oobabooga 2002590536 Revert "Attempt at making the llama-server streaming more efficient."
This reverts commit 5ad080ff25.
2025-04-18 18:13:54 -07:00
oobabooga 71ae05e0a4 llama.cpp: Fix the sampler priority handling 2025-04-18 18:06:36 -07:00
oobabooga 5ad080ff25 Attempt at making the llama-server streaming more efficient. 2025-04-18 18:04:49 -07:00
oobabooga 4fabd729c9 Fix the API without streaming or without 'sampler_priority' (closes #6851) 2025-04-18 17:25:22 -07:00
oobabooga 5135523429 Fix the new llama.cpp loader failing to unload models 2025-04-18 17:10:26 -07:00
oobabooga caa6afc88b Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True 2025-04-18 09:57:57 -07:00
oobabooga d00d713ace Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader 2025-04-18 08:14:15 -07:00
oobabooga c1cc65e82e Lint 2025-04-18 08:06:51 -07:00
oobabooga d68f0fbdf7 Remove obsolete references to llamacpp_HF 2025-04-18 07:46:04 -07:00
oobabooga a0abf93425 Connect --rope-freq-base to the new llama.cpp loader 2025-04-18 06:53:51 -07:00
oobabooga ef9910c767 Fix a bug after c6901aba9f 2025-04-18 06:51:28 -07:00
oobabooga 1c4a2c9a71 Make exllamav3 safer as well 2025-04-18 06:17:58 -07:00
oobabooga c6901aba9f Remove deprecation warning code 2025-04-18 06:05:47 -07:00
oobabooga 8144e1031e Remove deprecated command-line flags 2025-04-18 06:02:28 -07:00
oobabooga ae54d8faaa
New llama.cpp loader (#6846) 2025-04-18 09:59:37 -03:00
oobabooga 5c2f8d828e Fix exllamav2 generating eos randomly after previous fix 2025-04-18 05:42:38 -07:00
oobabooga 2fc58ad935 Consider files with .pt extension in the new model menu function 2025-04-17 23:10:43 -07:00
Googolplexed d78abe480b
Allow for model subfolder organization for GGUF files (#6686)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2025-04-18 02:53:59 -03:00
oobabooga ce9e2d94b1 Revert "Attempt at solving the ExLlamaV2 issue"
This reverts commit c9b3c9dfbf.
2025-04-17 22:03:21 -07:00
oobabooga 5dfab7d363 New attempt at solving the exl2 issue 2025-04-17 22:03:11 -07:00
oobabooga c9b3c9dfbf Attempt at solving the ExLlamaV2 issue 2025-04-17 21:45:15 -07:00
oobabooga 2c2d453c8c Revert "Use ExLlamaV2 (instead of the HF one) for EXL2 models for now"
This reverts commit 0ef1b8f8b4.
2025-04-17 21:31:32 -07:00
oobabooga 0ef1b8f8b4 Use ExLlamaV2 (instead of the HF one) for EXL2 models for now
It doesn't seem to have the "OverflowError" bug
2025-04-17 05:47:40 -07:00
oobabooga 682c78ea42 Add back detection of GPTQ models (closes #6841) 2025-04-11 21:00:42 -07:00
oobabooga 4ed0da74a8 Remove the obsolete 'multimodal' extension 2025-04-09 20:09:48 -07:00
oobabooga 598568b1ed Revert "UI: remove the streaming cursor"
This reverts commit 6ea0206207.
2025-04-09 16:03:14 -07:00
oobabooga 297a406e05 UI: smoother chat streaming
This removes the throttling associated with gr.Textbox that made words appear in chunks rather than one at a time
2025-04-09 16:02:37 -07:00
oobabooga 6ea0206207 UI: remove the streaming cursor 2025-04-09 14:59:34 -07:00
oobabooga 8b8d39ec4e
Add ExLlamaV3 support (#6832) 2025-04-09 00:07:08 -03:00
oobabooga bf48ec8c44 Remove an unnecessary UI message 2025-04-07 17:43:41 -07:00
oobabooga a5855c345c
Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835) 2025-04-07 21:42:33 -03:00
oobabooga 109de34e3b Remove the old --model-menu flag 2025-03-31 09:24:03 -07:00
oobabooga 758c3f15a5 Lint 2025-03-14 20:04:43 -07:00
oobabooga 5bcd2d7ad0
Add the top N-sigma sampler (#6796) 2025-03-14 16:45:11 -03:00
oobabooga 26317a4c7e Fix jinja2 error while loading c4ai-command-a-03-2025 2025-03-14 10:59:05 -07:00
Kelvie Wong 16fa9215c4
Fix OpenAI API with new param (show_after), closes #6747 (#6749)
Co-authored-by: oobabooga <oobabooga4@gmail.com>
2025-02-18 12:01:30 -03:00
oobabooga dba17c40fc Make transformers 4.49 functional 2025-02-17 17:31:11 -08:00
SamAcctX f28f39792d
update deprecated deepspeed import for transformers 4.46+ (#6725) 2025-02-02 20:41:36 -03:00
oobabooga c6f2c2fd7e UI: style improvements 2025-02-02 15:34:03 -08:00
oobabooga 0360f54ae8 UI: add a "Show after" parameter (to use with DeepSeek </think>) 2025-02-02 15:30:09 -08:00
oobabooga f01cc079b9 Lint 2025-01-29 14:00:59 -08:00
oobabooga 75ff3f3815 UI: Mention common context length values 2025-01-25 08:22:23 -08:00
FP HAM 71a551a622
Add strftime_now to JINJA to satisfy LLAMA 3.1 and 3.2 (and granite) (#6692) 2025-01-24 11:37:20 -03:00
oobabooga 0485ff20e8 Workaround for convert_to_markdown bug 2025-01-23 06:21:40 -08:00
oobabooga 39799adc47 Add a helpful error message when llama.cpp fails to load the model 2025-01-21 12:49:12 -08:00
oobabooga 5e99dded4e UI: add "Continue" and "Remove" buttons below the last chat message 2025-01-21 09:05:44 -08:00
oobabooga 0258a6f877 Fix the Google Colab notebook 2025-01-16 05:21:18 -08:00