Author | Commit | Message | Date
oobabooga | 70952553c7 | Lint | 2025-04-26 19:29:08 -07:00
oobabooga | 7b80acd524 | Fix parsing --extra-flags | 2025-04-26 18:40:03 -07:00
oobabooga | 943451284f | Fix the Notebook tab not loading its default prompt | 2025-04-26 18:25:06 -07:00
oobabooga | 511eb6aa94 | Fix saving settings to settings.yaml | 2025-04-26 18:20:00 -07:00
oobabooga | 8b83e6f843 | Prevent Gradio from saying 'Thank you for being a Gradio user!' | 2025-04-26 18:14:57 -07:00
oobabooga | 4a32e1f80c | UI: show draft_max for ExLlamaV2 | 2025-04-26 18:01:44 -07:00
oobabooga | 0fe3b033d0 | Fix parsing of --n_ctx and --max_seq_len (2nd attempt) | 2025-04-26 17:52:21 -07:00
oobabooga | c4afc0421d | Fix parsing of --n_ctx and --max_seq_len | 2025-04-26 17:43:53 -07:00
oobabooga | 234aba1c50 | llama.cpp: Simplify the prompt processing progress indicator (the progress bar was unreliable) | 2025-04-26 17:33:47 -07:00
oobabooga | 4ff91b6588 | Better default settings for Speculative Decoding | 2025-04-26 17:24:40 -07:00
oobabooga | bc55feaf3e | Improve host header validation in local mode | 2025-04-26 15:42:17 -07:00
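Note: host header validation (bc55feaf3e) protects a locally bound server against DNS rebinding, where a malicious page points its own hostname at 127.0.0.1 and rides the browser into the local API. A minimal sketch of the idea; the allowlist and helper name are illustrative assumptions, not the project's actual code:

```python
# Sketch of host-header validation for a locally bound server.
# ALLOWED_HOSTS and is_trusted_host are illustrative, not the project's code.
ALLOWED_HOSTS = {"localhost", "127.0.0.1", "[::1]"}

def is_trusted_host(host_header: str) -> bool:
    """Accept only loopback hosts; strip an optional :port suffix."""
    host = host_header.strip()
    if host.startswith("["):           # IPv6 literals look like "[::1]:7860"
        host = host.split("]")[0] + "]"
    else:
        host = host.split(":")[0]
    return host in ALLOWED_HOSTS

assert is_trusted_host("127.0.0.1:7860")
assert not is_trusted_host("evil.example.com")  # rebinding attempt rejected
```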
oobabooga | 3a207e7a57 | Improve the --help formatting a bit | 2025-04-26 07:31:04 -07:00
oobabooga | 6acb0e1bee | Change a UI description | 2025-04-26 05:13:08 -07:00
oobabooga | cbd4d967cc | Update a --help message | 2025-04-26 05:09:52 -07:00
oobabooga | 763a7011c0 | Remove an ancient/obsolete migration check | 2025-04-26 04:59:05 -07:00
oobabooga | d9de14d1f7 | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00
oobabooga | d4017fbb6d | ExLlamaV3: Add kv cache quantization (#6903) | 2025-04-25 21:32:00 -03:00
oobabooga | d4b1e31c49 | Use --ctx-size to specify the context size for all loaders (old flags are still recognized as alternatives) | 2025-04-25 16:59:03 -07:00
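Note: d4b1e31c49 unifies the per-loader context flags under one name while keeping the old spellings working. One way to express that pattern with argparse; a sketch only, the project's real parser may be wired differently:

```python
# Sketch: one canonical flag plus legacy aliases that land in the same dest.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--ctx-size", "--n_ctx", "--max_seq_len",  # old flags stay recognized
    dest="ctx_size", type=int, default=8192,
    help="Context size in tokens, shared by all loaders.",
)

args = parser.parse_args(["--max_seq_len", "4096"])
assert args.ctx_size == 4096  # the legacy spelling maps to the same attribute
```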
oobabooga | faababc4ea | llama.cpp: Add a prompt processing progress bar | 2025-04-25 16:42:30 -07:00
oobabooga | 877cf44c08 | llama.cpp: Add StreamingLLM (--streaming-llm) | 2025-04-25 16:21:41 -07:00
oobabooga | d35818f4e1 | UI: Add a collapsible thinking block to messages with <think> steps (#6902) | 2025-04-25 18:02:02 -03:00
oobabooga | 98f4c694b9 | llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server | 2025-04-25 07:32:51 -07:00
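Note: 98f4c694b9 forwards arbitrary options to the llama-server subprocess, and 7b80acd524 above fixes their parsing. A sketch of the general technique, assuming a comma-separated "flag" or "flag=value" string; the project's exact separator and syntax may differ:

```python
# Sketch: expand "flag1=value1,flag2" into llama-server CLI arguments.
# The separator and format here are assumptions, not the project's spec.
def parse_extra_flags(extra_flags: str) -> list[str]:
    args = []
    for item in filter(None, (s.strip() for s in extra_flags.split(","))):
        if "=" in item:
            flag, value = item.split("=", 1)
            args += [f"--{flag}", value]
        else:
            args.append(f"--{item}")
    return args

print(parse_extra_flags("flash-attn,ctx-size=8192"))
# ['--flash-attn', '--ctx-size', '8192']
```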
oobabooga | 5861013e68 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-24 20:36:20 -07:00
oobabooga | a90df27ff5 | UI: Add a greeting when the chat history is empty | 2025-04-24 20:33:40 -07:00
oobabooga | ae1fe87365 | ExLlamaV2: Add speculative decoding (#6899) | 2025-04-25 00:11:04 -03:00
Matthew Jenkins | 8f2493cc60 | Prevent llamacpp defaults from locking up consumer hardware (#6870) | 2025-04-24 23:38:57 -03:00
oobabooga | 93fd4ad25d | llama.cpp: Document the --device-draft syntax | 2025-04-24 09:20:11 -07:00
oobabooga | f1b64df8dd | EXL2: add another torch.cuda.synchronize() call to prevent errors | 2025-04-24 09:03:49 -07:00
oobabooga | c71a2af5ab | Handle CMD_FLAGS.txt in the main code (closes #6896) | 2025-04-24 08:21:06 -07:00
oobabooga | bfbde73409 | Make 'instruct' the default chat mode | 2025-04-24 07:08:49 -07:00
oobabooga | e99c20bcb0 | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00
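Note: speculative decoding (added for llama.cpp in e99c20bcb0 and ExLlamaV2 in ae1fe87365, with defaults tuned in 4ff91b6588) has a small draft model propose several tokens that the large model verifies in a single forward pass. A toy greedy sketch of the accept/reject loop; draft_next and target_batch are stand-ins, not any loader's actual API:

```python
# Toy sketch of greedy speculative decoding with stand-in model functions.
from typing import Callable, List

def speculative_step(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],          # cheap model: next token
    target_batch: Callable[[List[int]], List[int]],  # big model: next-token
                                                     # choice at each position
    k: int = 4,
) -> List[int]:
    # 1) The draft model proposes k tokens autoregressively.
    draft, ctx = [], list(prompt)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)
    # 2) The target model scores prompt + draft in ONE forward pass;
    #    target[j] is its choice for position j + 1.
    target = target_batch(prompt + draft)
    # 3) Accept the longest agreeing prefix, then take the target's own
    #    token at the first disagreement (so progress is always >= 1).
    accepted = []
    for i, t in enumerate(draft):
        if target[len(prompt) + i - 1] == t:
            accepted.append(t)
        else:
            break
    accepted.append(target[len(prompt) + len(accepted) - 1])
    return accepted
```

Every step emits at least one verified token, so the worst case matches plain decoding while agreement-heavy text yields several tokens per large-model pass.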
oobabooga | 9424ba17c8 | UI: show only part 00001 of multipart GGUF models in the model menu | 2025-04-22 19:56:42 -07:00
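Note: multipart GGUF files follow a "NAME-00001-of-00007.gguf" naming scheme, and llama.cpp loads the remaining shards automatically when given the first, so 9424ba17c8 hides the rest from the menu. A sketch of such a filter; the project's exact pattern may differ:

```python
# Sketch: keep only the first shard of multipart GGUF models in a listing.
import re

SHARD = re.compile(r"-(\d{5})-of-\d{5}\.gguf$")

def visible_in_menu(filename: str) -> bool:
    m = SHARD.search(filename)
    return m is None or m.group(1) == "00001"

assert visible_in_menu("model-00001-of-00003.gguf")
assert not visible_in_menu("model-00002-of-00003.gguf")
assert visible_in_menu("single-file.gguf")
```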
oobabooga | 25cf3600aa | Lint | 2025-04-22 08:04:02 -07:00
oobabooga | 39cbb5fee0 | Lint | 2025-04-22 08:03:25 -07:00
oobabooga | 008c6dd682 | Lint | 2025-04-22 08:02:37 -07:00
oobabooga | 78aeabca89 | Fix the transformers loader | 2025-04-21 18:33:14 -07:00
oobabooga | 8320190184 | Fix the exllamav2_HF and exllamav3_HF loaders | 2025-04-21 18:32:23 -07:00
oobabooga | 15989c2ed8 | Make llama.cpp the default loader | 2025-04-21 16:36:35 -07:00
oobabooga | 86c3ed3218 | Small change to the unload_model() function | 2025-04-20 20:00:56 -07:00
oobabooga | fe8e80e04a | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-04-20 19:09:27 -07:00
oobabooga | ff1c00bdd9 | llama.cpp: set the random seed manually | 2025-04-20 19:08:44 -07:00
Matthew Jenkins | d3e7c655e5 | Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862) | 2025-04-20 23:06:24 -03:00
oobabooga | e243424ba1 | Fix an import | 2025-04-20 17:51:28 -07:00
oobabooga | 8cfd7f976b | Revert "Remove the old --model-menu flag" (reverts commit 109de34e3b) | 2025-04-20 13:35:42 -07:00
oobabooga | b3bf7a885d | Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605 | 2025-04-20 11:32:48 -07:00
oobabooga | ae02ffc605 | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00
oobabooga | 6ba0164c70 | Lint | 2025-04-19 17:45:21 -07:00
oobabooga | 5ab069786b | llama.cpp: add back the two encode calls (they are harmless now) | 2025-04-19 17:38:36 -07:00
oobabooga | b9da5c7e3a | Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows | 2025-04-19 17:36:04 -07:00
oobabooga | 9c9df2063f | llama.cpp: fix unicode decoding (closes #6856) | 2025-04-19 16:38:15 -07:00
oobabooga | ba976d1390 | llama.cpp: avoid two 'encode' calls | 2025-04-19 16:35:01 -07:00
oobabooga | ed42154c78 | Revert "llama.cpp: close the connection immediately on 'Stop'" (reverts commit 5fdebc554b) | 2025-04-19 05:32:36 -07:00
oobabooga | 5fdebc554b | llama.cpp: close the connection immediately on 'Stop' | 2025-04-19 04:59:24 -07:00
oobabooga | 6589ebeca8 | Revert "llama.cpp: new optimization attempt" (reverts commit e2e73ed22f) | 2025-04-18 21:16:21 -07:00
oobabooga | e2e73ed22f | llama.cpp: new optimization attempt | 2025-04-18 21:05:08 -07:00
oobabooga | e2e90af6cd | llama.cpp: don't include --rope-freq-base in the launch command if null | 2025-04-18 20:51:18 -07:00
oobabooga | 9f07a1f5d7 | llama.cpp: new attempt at optimizing the llama-server connection | 2025-04-18 19:30:53 -07:00
oobabooga | f727b4a2cc | llama.cpp: close the connection properly when generation is cancelled | 2025-04-18 19:01:39 -07:00
oobabooga | b3342b8dd8 | llama.cpp: optimize the llama-server connection | 2025-04-18 18:46:36 -07:00
oobabooga | 2002590536 | Revert "Attempt at making the llama-server streaming more efficient." (reverts commit 5ad080ff25) | 2025-04-18 18:13:54 -07:00
oobabooga | 71ae05e0a4 | llama.cpp: Fix the sampler priority handling | 2025-04-18 18:06:36 -07:00
oobabooga | 5ad080ff25 | Attempt at making the llama-server streaming more efficient. | 2025-04-18 18:04:49 -07:00
oobabooga | 4fabd729c9 | Fix the API without streaming or without 'sampler_priority' (closes #6851) | 2025-04-18 17:25:22 -07:00
oobabooga | 5135523429 | Fix the new llama.cpp loader failing to unload models | 2025-04-18 17:10:26 -07:00
oobabooga | caa6afc88b | Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True | 2025-04-18 09:57:57 -07:00
oobabooga | d00d713ace | Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader | 2025-04-18 08:14:15 -07:00
oobabooga | c1cc65e82e | Lint | 2025-04-18 08:06:51 -07:00
oobabooga | d68f0fbdf7 | Remove obsolete references to llamacpp_HF | 2025-04-18 07:46:04 -07:00
oobabooga | a0abf93425 | Connect --rope-freq-base to the new llama.cpp loader | 2025-04-18 06:53:51 -07:00
oobabooga | ef9910c767 | Fix a bug after c6901aba9f | 2025-04-18 06:51:28 -07:00
oobabooga | 1c4a2c9a71 | Make exllamav3 safer as well | 2025-04-18 06:17:58 -07:00
oobabooga | c6901aba9f | Remove deprecation warning code | 2025-04-18 06:05:47 -07:00
oobabooga | 8144e1031e | Remove deprecated command-line flags | 2025-04-18 06:02:28 -07:00
oobabooga | ae54d8faaa | New llama.cpp loader (#6846) | 2025-04-18 09:59:37 -03:00
oobabooga | 5c2f8d828e | Fix exllamav2 generating eos randomly after previous fix | 2025-04-18 05:42:38 -07:00
oobabooga | 2fc58ad935 | Consider files with .pt extension in the new model menu function | 2025-04-17 23:10:43 -07:00
Googolplexed | d78abe480b | Allow for model subfolder organization for GGUF files (#6686; co-authored by oobabooga) | 2025-04-18 02:53:59 -03:00
oobabooga | ce9e2d94b1 | Revert "Attempt at solving the ExLlamaV2 issue" (reverts commit c9b3c9dfbf) | 2025-04-17 22:03:21 -07:00
oobabooga | 5dfab7d363 | New attempt at solving the exl2 issue | 2025-04-17 22:03:11 -07:00
oobabooga | c9b3c9dfbf | Attempt at solving the ExLlamaV2 issue | 2025-04-17 21:45:15 -07:00
oobabooga | 2c2d453c8c | Revert "Use ExLlamaV2 (instead of the HF one) for EXL2 models for now" (reverts commit 0ef1b8f8b4) | 2025-04-17 21:31:32 -07:00
oobabooga | 0ef1b8f8b4 | Use ExLlamaV2 (instead of the HF one) for EXL2 models for now (it doesn't seem to have the "OverflowError" bug) | 2025-04-17 05:47:40 -07:00
oobabooga | 682c78ea42 | Add back detection of GPTQ models (closes #6841) | 2025-04-11 21:00:42 -07:00
oobabooga | 4ed0da74a8 | Remove the obsolete 'multimodal' extension | 2025-04-09 20:09:48 -07:00
oobabooga | 598568b1ed | Revert "UI: remove the streaming cursor" (reverts commit 6ea0206207) | 2025-04-09 16:03:14 -07:00
oobabooga | 297a406e05 | UI: smoother chat streaming (removes the throttling associated with gr.Textbox that made words appear in chunks rather than one at a time) | 2025-04-09 16:02:37 -07:00
oobabooga | 6ea0206207 | UI: remove the streaming cursor | 2025-04-09 14:59:34 -07:00
oobabooga | 8b8d39ec4e | Add ExLlamaV3 support (#6832) | 2025-04-09 00:07:08 -03:00
oobabooga | bf48ec8c44 | Remove an unnecessary UI message | 2025-04-07 17:43:41 -07:00
oobabooga | a5855c345c | Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835) | 2025-04-07 21:42:33 -03:00
oobabooga | 109de34e3b | Remove the old --model-menu flag | 2025-03-31 09:24:03 -07:00
oobabooga | 758c3f15a5 | Lint | 2025-03-14 20:04:43 -07:00
oobabooga | 5bcd2d7ad0 | Add the top N-sigma sampler (#6796) | 2025-03-14 16:45:11 -03:00
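Note: the top N-sigma sampler (5bcd2d7ad0) keeps only tokens whose logits lie within n standard deviations of the maximum logit, a statistical truncation of the tail. A NumPy sketch under that definition; the project's implementation may differ:

```python
# Sketch of top n-sigma filtering: drop tokens more than n standard
# deviations below the best logit, then softmax over the survivors.
import numpy as np

def top_n_sigma(logits: np.ndarray, n: float = 1.0) -> np.ndarray:
    threshold = logits.max() - n * logits.std()
    filtered = np.where(logits >= threshold, logits, -np.inf)
    exp = np.exp(filtered - filtered.max())  # exp(-inf) contributes 0
    return exp / exp.sum()

probs = top_n_sigma(np.array([2.0, 1.9, 0.5, -3.0]), n=1.0)
# The -3.0 outlier is filtered out; probability mass goes to the rest.
```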
oobabooga | 26317a4c7e | Fix jinja2 error while loading c4ai-command-a-03-2025 | 2025-03-14 10:59:05 -07:00
Kelvie Wong | 16fa9215c4 | Fix OpenAI API with new param (show_after), closes #6747 (#6749; co-authored by oobabooga) | 2025-02-18 12:01:30 -03:00
oobabooga | dba17c40fc | Make transformers 4.49 functional | 2025-02-17 17:31:11 -08:00
SamAcctX | f28f39792d | update deprecated deepspeed import for transformers 4.46+ (#6725) | 2025-02-02 20:41:36 -03:00
oobabooga | c6f2c2fd7e | UI: style improvements | 2025-02-02 15:34:03 -08:00
oobabooga | 0360f54ae8 | UI: add a "Show after" parameter (to use with DeepSeek </think>) | 2025-02-02 15:30:09 -08:00
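Note: the "Show after" parameter (0360f54ae8) hides everything before a marker string in the displayed reply, such as DeepSeek's closing </think> tag. The effect is roughly this, as a sketch rather than the project's exact code:

```python
# Sketch: display only the text after the "Show after" marker, if present.
def visible_reply(reply: str, show_after: str = "</think>") -> str:
    return reply.split(show_after, 1)[1] if show_after in reply else reply

print(visible_reply("<think>chain of thought</think>Final answer."))
# "Final answer."
```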
oobabooga | f01cc079b9 | Lint | 2025-01-29 14:00:59 -08:00
oobabooga | 75ff3f3815 | UI: Mention common context length values | 2025-01-25 08:22:23 -08:00
FP HAM | 71a551a622 | Add strftime_now to JINJA to satisfy Llama 3.1 and 3.2 (and granite) (#6692) | 2025-01-24 11:37:20 -03:00
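Note: Llama 3.x chat templates call a strftime_now() helper to stamp the current date, so 71a551a622 exposes one to the template environment. Roughly like this; a sketch, since the project wires it into its own template renderer:

```python
# Sketch: expose strftime_now to a Jinja2 template, as Llama 3.x
# chat templates expect. Assumes the jinja2 package is installed.
from datetime import datetime
from jinja2 import Environment

env = Environment()
env.globals["strftime_now"] = lambda fmt: datetime.now().strftime(fmt)

template = env.from_string("Today is {{ strftime_now('%d %b %Y') }}.")
print(template.render())
```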
oobabooga | 0485ff20e8 | Workaround for convert_to_markdown bug | 2025-01-23 06:21:40 -08:00
oobabooga | 39799adc47 | Add a helpful error message when llama.cpp fails to load the model | 2025-01-21 12:49:12 -08:00
oobabooga | 5e99dded4e | UI: add "Continue" and "Remove" buttons below the last chat message | 2025-01-21 09:05:44 -08:00
oobabooga | 0258a6f877 | Fix the Google Colab notebook | 2025-01-16 05:21:18 -08:00
oobabooga | 1ef748fb20 | Lint | 2025-01-14 16:44:15 -08:00
oobabooga | f843cb475b | UI: update a help message | 2025-01-14 08:12:51 -08:00
oobabooga | c832953ff7 | UI: Activate auto_max_new_tokens by default | 2025-01-14 05:59:55 -08:00
Underscore | 53b838d6c5 | HTML: Fix quote pair RegEx matching for all quote types (#6661) | 2025-01-13 18:01:50 -03:00
oobabooga | c85e5e58d0 | UI: move the new morphdom code to a .js file | 2025-01-13 06:20:42 -08:00
oobabooga | facb4155d4 | Fix morphdom leaving ghost elements behind | 2025-01-11 20:57:28 -08:00
oobabooga | a0492ce325 | Optimize syntax highlighting during chat streaming (#6655) | 2025-01-11 21:14:10 -03:00
mamei16 | f1797f4323 | Unescape backslashes in html_output (#6648) | 2025-01-11 18:39:44 -03:00
oobabooga | 1b9121e5b8 | Add a "refresh" button below the last message, add a missing file | 2025-01-11 12:42:25 -08:00
oobabooga | a5d64b586d | Add a "copy" button below each message (#6654) | 2025-01-11 16:59:21 -03:00
oobabooga | 3a722a36c8 | Use morphdom to make chat streaming 1902381098231% faster (#6653) | 2025-01-11 12:55:19 -03:00
oobabooga | d2f6c0f65f | Update README | 2025-01-10 13:25:40 -08:00
oobabooga | c393f7650d | Update settings-template.yaml, organize modules/shared.py | 2025-01-10 13:22:18 -08:00
oobabooga | 83c426e96b | Organize internals (#6646) | 2025-01-10 18:04:32 -03:00
oobabooga | 7fe46764fb | Improve the --help message about --tensorcores as well | 2025-01-10 07:07:41 -08:00
oobabooga | da6d868f58 | Remove old deprecated flags (~6 months or more) | 2025-01-09 16:11:46 -08:00
oobabooga | f3c0f964a2 | Lint | 2025-01-09 13:18:23 -08:00
oobabooga | 3020f2e5ec | UI: improve the info message about --tensorcores | 2025-01-09 12:44:03 -08:00
oobabooga | c08d87b78d | Make the huggingface loader more readable | 2025-01-09 12:23:38 -08:00
BPplays | 619265b32c | add ipv6 support to the API (#6559) | 2025-01-09 10:23:44 -03:00
oobabooga | 5c89068168 | UI: add an info message for the new Static KV cache option | 2025-01-08 17:36:30 -08:00
nclok1405 | b9e2ded6d4 | Added UnicodeDecodeError workaround for modules/llamacpp_model.py (#6040; co-authored by oobabooga) | 2025-01-08 21:17:31 -03:00
oobabooga | 91a8a87887 | Remove obsolete code | 2025-01-08 15:07:21 -08:00
oobabooga | 7157257c3f | Remove the AutoGPTQ loader (#6641) | 2025-01-08 19:28:56 -03:00
oobabooga | c0f600c887 | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00
oobabooga | 11af199aff | Add a "Static KV cache" option for transformers | 2025-01-04 17:52:57 -08:00
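Note: both options (c0f600c887 and 11af199aff) map onto standard PyTorch/transformers machinery: torch.compile wraps the model's forward pass, and a static KV cache preallocates the attention cache so compiled graphs stop recompiling as the sequence grows. A hedged sketch with Hugging Face generate(); the model id is a placeholder, and the UI options may be wired differently:

```python
# Sketch: what --torch-compile and "Static KV cache" correspond to in
# plain transformers. Model id is illustrative; static cache requires a
# model that supports it (e.g. Llama-family).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

model.forward = torch.compile(model.forward)  # --torch-compile
inputs = tok("Hello", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=16,
    cache_implementation="static",            # "Static KV cache"
)
print(tok.decode(out[0]))
```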
oobabooga | 3967520e71 | Connect XTC, DRY, smoothing_factor, and dynatemp to ExLlamaV2 loader (non-HF) | 2025-01-04 16:25:06 -08:00
oobabooga | 049297fa66 | UI: reduce the size of CSS sent to the UI during streaming | 2025-01-04 14:09:36 -08:00
oobabooga | 0e673a7a42 | UI: reduce the size of HTML sent to the UI during streaming | 2025-01-04 11:40:24 -08:00
mamei16 | 9f24885bd2 | Sane handling of markdown lists (#6626) | 2025-01-04 15:41:31 -03:00
oobabooga | 4b3e1b3757 | UI: add a "Search chats" input field | 2025-01-02 18:46:40 -08:00
oobabooga | b8fc9010fa | UI: fix orjson.JSONDecodeError on page reload | 2025-01-02 16:57:04 -08:00
oobabooga | 75f1b5ccde | UI: add a "Branch chat" button | 2025-01-02 16:24:18 -08:00
Petr Korolev | 13c033c745 | Fix CUDA error on MPS backend during API request (#6572; co-authored by oobabooga) | 2025-01-02 00:06:11 -03:00
oobabooga | 725639118a | UI: Use a tab length of 2 for lists (rather than 4) | 2025-01-01 13:53:50 -08:00
oobabooga | 7b88724711 | Make responses start faster by removing unnecessary cleanup calls (#6625) | 2025-01-01 18:33:38 -03:00
oobabooga | 64853f8509 | Reapply a necessary change that I removed from #6599 (thanks @mamei16!) | 2024-12-31 14:43:22 -08:00
mamei16 | e953af85cd | Fix newlines in the markdown renderer (#6599; co-authored by oobabooga) | 2024-12-31 01:04:02 -03:00
oobabooga | 39a5c9a49c | UI organization (#6618) | 2024-12-29 11:16:17 -03:00
oobabooga | 0490ee620a | UI: increase the threshold for a <li> to be considered long (some more) | 2024-12-19 16:51:34 -08:00
oobabooga | 89888bef56 | UI: increase the threshold for a <li> to be considered long | 2024-12-19 14:38:36 -08:00
oobabooga | 2acec386fc | UI: improve the streaming cursor | 2024-12-19 14:08:56 -08:00
oobabooga | e2fb86e5df | UI: further improve the style of lists and headings | 2024-12-19 13:59:24 -08:00
oobabooga | c48e4622e8 | UI: update a link | 2024-12-18 06:28:14 -08:00