Mirror of https://github.com/oobabooga/text-generation-webui.git (synced 2026-03-18 03:14:39 +01:00)
- Return proper OpenAI error format ({"error": {...}}) instead of HTTP 500 for validation errors
- Send data: [DONE] at the end of SSE streams
- Fix finish_reason so "tool_calls" takes priority over "length"
- Stop including usage in streaming chunks when include_usage is not set
- Handle "developer" role in messages (treated same as "system")
- Add logprobs and top_logprobs parameters for chat completions
- Fix chat completions logprobs not working with llama.cpp and ExLlamav3 backends
- Add max_completion_tokens as an alias for max_tokens in chat completions
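A few of the behaviors above can be sketched in Python. This is a minimal illustration, not the project's actual code: the helper names (`openai_error`, `resolve_finish_reason`, `sse_events`) are hypothetical, but the error envelope shape, the finish_reason priority, and the `data: [DONE]` sentinel follow the changes listed.

```python
import json

def openai_error(message, err_type="invalid_request_error", param=None, code=None):
    # OpenAI-style error envelope ({"error": {...}}) returned for
    # validation errors instead of a bare HTTP 500 body.
    return {"error": {"message": message, "type": err_type,
                      "param": param, "code": code}}

def resolve_finish_reason(has_tool_calls, hit_token_limit):
    # "tool_calls" takes priority over "length" when both conditions apply.
    if has_tool_calls:
        return "tool_calls"
    return "length" if hit_token_limit else "stop"

def sse_events(chunks):
    # Emit each streaming chunk as an SSE "data:" line and terminate
    # the stream with the "data: [DONE]" sentinel.
    for chunk in chunks:
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"
```

For example, a request that both emits tool calls and exhausts `max_tokens` reports `finish_reason: "tool_calls"`, and every stream a client reads ends with the `[DONE]` line.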
Extensions:

- character_bias
- coqui_tts
- example
- gallery
- google_translate
- long_replies
- ngrok
- openai
- perplexity_colors
- sd_api_pictures
- send_pictures
- silero_tts
- superbooga
- superboogav2
- whisper_stt