oobabooga
63a1d4afc8
Bump gradio to 4.19 (#5522)
2024-03-05 07:32:28 -03:00
kalomaze
cfb25c9b3f
Cubic sampling w/ curve param (#5551)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-03-03 13:22:21 -03:00
oobabooga
080f7132c0
Revert gradio to 3.50.2 (#5513)
2024-02-15 20:40:23 -03:00
oobabooga
7123ac3f77
Remove "Maximum UI updates/second" parameter (#5507)
2024-02-14 23:34:30 -03:00
oobabooga
494cc3c5b0
Handle empty sampler priority field, use default values
2024-02-06 07:05:32 -08:00
oobabooga
2a1063eff5
Revert "Remove non-HF ExLlamaV2 loader (#5431)"
This reverts commit cde000d478.
2024-02-06 06:21:36 -08:00
oobabooga
8c35fefb3b
Add custom sampler order support (#5443)
2024-02-06 11:20:10 -03:00
oobabooga
7073665a10
Truncate long chat completions inputs (#5439)
2024-02-05 02:31:24 -03:00
oobabooga
cde000d478
Remove non-HF ExLlamaV2 loader (#5431)
2024-02-04 01:15:51 -03:00
kalomaze
b6077b02e4
Quadratic sampling (#5403)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-02-04 00:20:02 -03:00
lmg-anon
db1da9f98d
Fix logprobs tokens in OpenAI API (#5339)
2024-01-22 08:07:42 -03:00
oobabooga
e055967974
Add prompt_lookup_num_tokens parameter (#5296)
2024-01-17 17:09:36 -03:00
oobabooga
29c2693ea0
dynatemp_low, dynatemp_high, dynatemp_exponent parameters (#5209)
2024-01-08 23:28:35 -03:00
oobabooga
0d07b3a6a1
Add dynamic_temperature_low parameter (#5198)
2024-01-07 17:03:47 -03:00
oobabooga
b8a0b3f925
Don't print torch tensors with --verbose
2024-01-07 10:35:55 -08:00
oobabooga
cf820c69c5
Print generation parameters with --verbose (HF only)
2024-01-07 10:06:23 -08:00
kalomaze
48327cc5c4
Dynamic Temperature HF loader support (#5174)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-01-07 10:36:26 -03:00
oobabooga
2734ce3e4c
Remove RWKV loader (#5130)
2023-12-31 02:01:40 -03:00
oobabooga
0e54a09bcb
Remove exllamav1 loaders (#5128)
2023-12-31 01:57:06 -03:00
oobabooga
8c60495878
UI: add "Maximum UI updates/second" parameter
2023-12-24 09:17:40 -08:00
zhangningboo
1b8b61b928
Fix output_ids decoding for Qwen/Qwen-7B-Chat (#5045)
2023-12-22 23:11:02 -03:00
oobabooga
83cf1a6b67
Fix Yi space issue (closes #4996)
2023-12-19 07:54:19 -08:00
oobabooga
12690d3ffc
Better HF grammar implementation (#4953)
2023-12-17 02:01:23 -03:00
oobabooga
8513028968
Fix lag in the chat tab during streaming
2023-12-12 13:01:25 -08:00
oobabooga
39d2fe1ed9
Jinja templates for Instruct and Chat (#4874)
2023-12-12 17:23:14 -03:00
oobabooga
181743fd97
Fix missing spaces tokenizer issue (closes #4834)
2023-12-08 05:16:46 -08:00
Yiximail
1c74b3ab45
Fix partial unicode characters issue (#4837)
2023-12-08 09:50:53 -03:00
oobabooga
6430acadde
Minor bug fix after https://github.com/oobabooga/text-generation-webui/pull/4814
2023-12-05 10:08:11 -08:00
oobabooga
0f828ea441
Do not limit API updates/second
2023-12-04 20:45:43 -08:00
oobabooga
9edb193def
Optimize HF text generation (#4814)
2023-12-05 00:00:40 -03:00
tsukanov-as
9f7ae6bb2e
fix detection of stopping strings when HTML escaping is used (#4728)
2023-11-27 15:42:08 -03:00
oobabooga
1b69694fe9
Add types to the encode/decode/token-count endpoints
2023-11-07 19:32:14 -08:00
oobabooga
ec17a5d2b7
Make OpenAI API the default API (#4430)
2023-11-06 02:38:29 -03:00
oobabooga
aa5d671579
Add temperature_last parameter (#4472)
2023-11-04 13:09:07 -03:00
kalomaze
367e5e6e43
Implement Min P as a sampler option in HF loaders (#4449)
2023-11-02 16:32:51 -03:00
Abhilash Majumder
778a010df8
Intel GPU support initialization (#4340)
2023-10-26 23:39:51 -03:00
tdrussell
72f6fc6923
Rename additive_repetition_penalty to presence_penalty, add frequency_penalty (#4376)
2023-10-25 12:10:28 -03:00
tdrussell
4440f87722
Add additive_repetition_penalty sampler setting. (#3627)
2023-10-23 02:28:07 -03:00
oobabooga
b88b2b74a6
Experimental Intel Arc transformers support (untested)
2023-10-15 20:51:11 -07:00
Brian Dashore
98fa73a974
Text Generation: stop if EOS token is reached (#4213)
2023-10-07 19:46:42 -03:00
oobabooga
ae4ba3007f
Add grammar to transformers and _HF loaders (#4091)
2023-10-05 10:01:36 -03:00
oobabooga
869f47fff9
Lint
2023-09-19 13:51:57 -07:00
BadisG
893a72a1c5
Stop generation immediately when using "Maximum tokens/second" (#3952)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-09-18 14:27:06 -03:00
oobabooga
0ede2965d5
Remove an error message
2023-09-17 18:46:08 -07:00
oobabooga
a069f3904c
Undo part of ad8ac545a5
2023-09-17 08:12:23 -07:00
oobabooga
ad8ac545a5
Tokenization improvements
2023-09-17 07:02:00 -07:00
saltacc
cd08eb0753
token probs for non HF loaders (#3957)
2023-09-17 10:42:32 -03:00
oobabooga
ef04138bc0
Improve the UI tokenizer
2023-09-15 19:30:44 -07:00
saltacc
f01b9aa71f
Add customizable ban tokens (#3899)
2023-09-15 18:27:27 -03:00
oobabooga
c2a309f56e
Add ExLlamaV2 and ExLlamav2_HF loaders (#3881)
2023-09-12 14:33:07 -03:00
oobabooga
47e490c7b4
Set use_cache=True by default for all models
2023-08-30 13:26:27 -07:00
oobabooga
cec8db52e5
Add max_tokens_second param (#3533)
2023-08-29 17:44:31 -03:00
oobabooga
2cb07065ec
Fix an escaping bug
2023-08-20 21:50:42 -07:00
oobabooga
a74dd9003f
Fix HTML escaping for perplexity_colors extension
2023-08-20 21:40:22 -07:00
cal066
7a4fcee069
Add ctransformers support (#3313)
Co-authored-by: cal066 <cal066@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
2023-08-11 14:41:33 -03:00
oobabooga
65aa11890f
Refactor everything (#3481)
2023-08-06 21:49:27 -03:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325)
2023-08-06 17:22:48 -03:00
Pete
f4005164f4
Fix llama.cpp truncation (#3400)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-03 20:01:15 -03:00
oobabooga
e931844fe2
Add auto_max_new_tokens parameter (#3419)
2023-08-02 14:52:20 -03:00
oobabooga
75c2dd38cf
Remove flexgen support
2023-07-25 15:15:29 -07:00
appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported (#3164)
2023-07-17 21:27:18 -03:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions (#3029)
2023-07-13 17:22:41 -03:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support (#2991)
2023-07-04 00:03:30 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers (#2916)
2023-06-29 13:40:13 -03:00
oobabooga
365b672531
Minor change to prevent future bugs
2023-06-25 01:38:54 -03:00
快乐的我531
e356f69b36
Make stop_everything work with non-streamed generation (#2848)
2023-06-24 11:19:16 -03:00
oobabooga
3e80f2aceb
Apply the output extensions only once
Relevant for google translate, silero
2023-06-24 10:59:07 -03:00
oobabooga
8bb3bb39b3
Implement stopping string search in string space (#2847)
2023-06-24 09:43:00 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777)
2023-06-21 15:31:42 -03:00
oobabooga
7f06d551a3
Fix streaming callback
2023-06-16 21:44:56 -03:00
oobabooga
9f40032d32
Add ExLlama support (#2444)
2023-06-16 20:35:38 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely (#2720)
2023-06-16 19:00:37 -03:00
brandonj60
b04e18d10c
Add Mirostat v2 sampling to transformer models (#2571)
2023-06-09 21:26:31 -03:00
oobabooga
00b94847da
Remove softprompt support
2023-06-06 07:42:23 -03:00
oobabooga
9f215523e2
Remove some unused imports
2023-06-06 07:05:46 -03:00
oobabooga
b6c407f51d
Don't stream at more than 24 fps
This is a performance optimization
2023-05-31 23:41:42 -03:00
Luis Lopez
9e7204bef4
Add tail-free and top-a sampling (#2357)
2023-05-29 21:40:01 -03:00
oobabooga
9ee1e37121
Fix return message when no model is loaded
2023-05-28 22:46:32 -03:00
oobabooga
37d4ad012b
Add a button for rendering markdown for any model
2023-05-25 11:59:27 -03:00
flurb18
d37a28730d
Beginning of multi-user support (#2262)
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp (#2287)
2023-05-22 19:37:24 -03:00
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
oobabooga
8ac3636966
Add epsilon_cutoff/eta_cutoff parameters (#2258)
2023-05-21 15:11:57 -03:00
Konstantin Gukov
1b52bddfcc
Mitigate UnboundLocalError (#2136)
2023-05-19 14:46:18 -03:00
oobabooga
71693161eb
Better handle spaces in LlamaTokenizer
2023-05-11 17:55:50 -03:00
oobabooga
7221d1389a
Fix a bug
2023-05-11 17:11:10 -03:00
oobabooga
0d36c18f5d
Always return only the new tokens in generation functions
2023-05-11 17:07:20 -03:00
oobabooga
638c6a65a2
Refactor chat functions (#2003)
2023-05-11 15:37:04 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741)
2023-05-09 20:18:02 -03:00
IJumpAround
020fe7b50b
Remove mutable defaults from function signature. (#1663)
2023-05-08 22:55:41 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions (#1817)
2023-05-05 18:53:03 -03:00
oobabooga
f673f4a4ca
Change --verbose behavior
2023-05-04 15:56:06 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
Wojtab
80c2f25131
LLaVA: small fixes (#1664)
* change multimodal projector to the correct one
* remove reference to custom stopping strings from readme
* fix stopping strings if tokenizer extension adds/removes tokens
* add API example
* LLaVA 7B just dropped, add to readme that there is no support for it currently
2023-05-02 23:12:22 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input (#1707)
2023-05-02 01:24:56 -03:00
oobabooga
15940e762e
Fix missing initial space for LlamaTokenizer
2023-04-25 22:47:23 -03:00
Vincent Brouwers
92cdb4f22b
Seq2Seq support (including FLAN-T5) (#1535)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-25 22:39:04 -03:00
oobabooga
1a0c12c6f2
Refactor text-generation.py a bit
2023-04-24 19:24:12 -03:00
Wojtab
12212cf6be
LLaVA support (#1487)
2023-04-23 20:32:22 -03:00
oobabooga
fcb594b90e
Don't require llama.cpp models to be placed in subfolders
2023-04-22 14:56:48 -03:00