Mirror of https://github.com/oobabooga/text-generation-webui.git (synced 2026-03-18 03:14:39 +01:00)
Latest commit:

- Rewrite logprobs output format to match the OpenAI specification for both chat completions and completions endpoints
- Fix top_logprobs count being ignored for llama.cpp and ExLlamav3 backends in chat completions (always returned 1 instead of requested N)
- Fix non-streaming responses only returning logprobs for the last token instead of all generated tokens (affects all HF-based loaders)
- Fix logprobs returning null for non-streaming chat requests on HF loaders
- Fix off-by-one returning one extra top alternative on HF loaders
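The OpenAI-spec logprobs shape the commit above targets can be sketched as follows. This is a minimal illustration, not the project's actual code: the helper name `make_chat_logprobs` is hypothetical, but the field names (`content`, `token`, `logprob`, `bytes`, `top_logprobs`) follow the OpenAI chat completions specification, with one entry per generated token and the requested number of top alternatives per entry.

```python
def make_chat_logprobs(tokens, logprobs, top_alternatives):
    """Assemble per-token logprobs in the OpenAI chat completions shape.

    tokens:           list of generated token strings
    logprobs:         list of log-probabilities, one per token
    top_alternatives: list of [(token, logprob), ...] per position,
                      already trimmed to the requested top_logprobs count
    """
    content = []
    for tok, lp, alts in zip(tokens, logprobs, top_alternatives):
        content.append({
            "token": tok,
            "logprob": lp,
            # The spec exposes the raw UTF-8 bytes of each token
            "bytes": list(tok.encode("utf-8")),
            "top_logprobs": [
                {"token": t, "logprob": l, "bytes": list(t.encode("utf-8"))}
                for t, l in alts
            ],
        })
    # Chat completions nest the per-token entries under "content"
    return {"content": content}
```

Under this shape, a non-streaming response carries one entry per generated token (the bug fixed above), and `top_logprobs` holds exactly the N alternatives the client asked for.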
| Extension |
|---|
| character_bias |
| coqui_tts |
| example |
| gallery |
| google_translate |
| long_replies |
| ngrok |
| openai |
| perplexity_colors |
| sd_api_pictures |
| send_pictures |
| silero_tts |
| superbooga |
| superboogav2 |
| whisper_stt |