Merge branch 'oobabooga:dev' into dev

This commit is contained in:
Underscore 2026-02-12 13:30:59 -05:00 committed by GitHub
commit e0a72d2389
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
21 changed files with 392 additions and 99 deletions

View file

@@ -13,7 +13,7 @@
# Text Generation Web UI
A Gradio web UI for Large Language Models.
Run AI chatbots like ChatGPT on your own computer. **100% private and offline**: no subscriptions, no API fees, zero telemetry. Just download, unzip, and run.
[Try the Deep Reason extension](https://oobabooga.gumroad.com/l/deep_reason)
@@ -21,38 +21,35 @@ A Gradio web UI for Large Language Models.
|:---:|:---:|
|![Image1](https://github.com/oobabooga/screenshots/raw/main/DEFAULT-3.5.png) | ![Image2](https://github.com/oobabooga/screenshots/raw/main/PARAMETERS-3.5.png) |
## 🔥 News
- The project now supports **image generation**! Including Z-Image-Turbo, 4bit/8bit quantization, `torch.compile`, and LLM-generated prompt variations ([tutorial](https://github.com/oobabooga/text-generation-webui/wiki/Image-Generation-Tutorial)).
## Features
- Supports multiple local text generation backends, including [llama.cpp](https://github.com/ggerganov/llama.cpp), [Transformers](https://github.com/huggingface/transformers), [ExLlamaV3](https://github.com/turboderp-org/exllamav3), [ExLlamaV2](https://github.com/turboderp-org/exllamav2), and [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) (the latter via its own [Dockerfile](https://github.com/oobabooga/text-generation-webui/blob/main/docker/TensorRT-LLM/Dockerfile)).
- Easy setup: Choose between **portable builds** (zero setup, just unzip and run) for GGUF models on Windows/Linux/macOS, or the one-click installer that creates a self-contained `installer_files` directory.
- 100% offline and private, with zero telemetry, external resources, or remote update requests.
- **File attachments**: Upload text files, PDF documents, and .docx documents to talk about their contents.
- **Vision (multimodal models)**: Attach images to messages for visual understanding ([tutorial](https://github.com/oobabooga/text-generation-webui/wiki/Multimodal-Tutorial)).
- **Image generation**: A dedicated tab for `diffusers` models like **Z-Image-Turbo**. Features 4-bit/8-bit quantization and a persistent gallery with metadata ([tutorial](https://github.com/oobabooga/text-generation-webui/wiki/Image-Generation-Tutorial)).
- **Web search**: Optionally search the internet with LLM-generated queries to add context to the conversation.
- Aesthetic UI with dark/light themes, syntax highlighting, and LaTeX rendering.
- Edit messages, navigate between message versions, and branch conversations at any point.
- Switch between models without restarting, with automatic GPU layer allocation.
- Free-form text generation in the Notebook tab without being limited to chat turns.
- `instruct` mode for instruction-following (like ChatGPT), and `chat-instruct`/`chat` modes for talking to custom characters.
- Automatic prompt formatting using Jinja2 templates. You never need to worry about prompt formats.
- Multiple sampling parameters and generation options for sophisticated text generation control.
- OpenAI-compatible API with Chat and Completions endpoints, including tool-calling support; see [examples](https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API#examples).
- Extension support, with numerous built-in and user-contributed extensions available. See the [wiki](https://github.com/oobabooga/text-generation-webui/wiki/07-%E2%80%90-Extensions) and [extensions directory](https://github.com/oobabooga/text-generation-webui-extensions) for details.
## How to install
#### ✅ Option 1: Portable builds (get started in 1 minute)
No installation needed: just download, unzip, and run. All dependencies included.
Compatible with GGUF (llama.cpp) models on Windows, Linux, and macOS. [Check what models fit your hardware](https://huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator).
Download from here: **https://github.com/oobabooga/text-generation-webui/releases**

View file

@@ -1645,7 +1645,7 @@ button:focus {
}
#user-description textarea {
    height: calc(100vh - 334px) !important;
    min-height: 90px !important;
}

View file

@@ -1 +1,84 @@
function fallbackCopyToClipboard(text) {
  return new Promise((resolve, reject) => {
    const textArea = document.createElement("textarea");
    textArea.value = text;
    textArea.style.position = "fixed";
    textArea.style.left = "-9999px";
    textArea.style.top = "-9999px";
    document.body.appendChild(textArea);
    textArea.focus();
    textArea.select();
    try {
      const successful = document.execCommand("copy");
      document.body.removeChild(textArea);
      successful ? resolve() : reject();
    } catch (err) {
      document.body.removeChild(textArea);
      reject(err);
    }
  });
}
class CopyButtonPlugin {
  constructor(options = {}) {
    // Store options on the instance ("self" here would leak to the global object)
    this.hook = options.hook;
    this.callback = options.callback;
    this.lang = options.lang || document.documentElement.lang || "en";
  }
  "after:highlightElement"({ el, text }) {
    // Capture instance fields so the click handler below can close over them
    const { hook, callback, lang } = this;
    let button = Object.assign(document.createElement("button"), {
      innerHTML: locales[lang]?.[0] || "Copy",
      className: "hljs-copy-button",
    });
    button.dataset.copied = false;
    el.parentElement.classList.add("hljs-copy-wrapper");
    el.parentElement.appendChild(button);
    el.parentElement.style.setProperty(
      "--hljs-theme-background",
      window.getComputedStyle(el).backgroundColor,
    );
    button.onclick = function () {
      let newText = text;
      if (typeof hook === "function") {
        newText = hook(text, el) || text;
      }
      const copyPromise =
        navigator.clipboard && window.isSecureContext
          ? navigator.clipboard.writeText(newText)
          : fallbackCopyToClipboard(newText);
      copyPromise
        .then(function () {
          button.innerHTML = locales[lang]?.[1] || "Copied!";
          button.dataset.copied = true;
          let alert = Object.assign(document.createElement("div"), {
            role: "status",
            className: "hljs-copy-alert",
            innerHTML: locales[lang]?.[2] || "Copied to clipboard",
          });
          el.parentElement.appendChild(alert);
          setTimeout(() => {
            button.innerHTML = locales[lang]?.[0] || "Copy";
            button.dataset.copied = false;
            el.parentElement.removeChild(alert);
            alert = null;
          }, 2e3);
        })
        .then(function () {
          if (typeof callback === "function") return callback(newText, el);
        });
    };
  }
}
if (typeof module != "undefined") {
  module.exports = CopyButtonPlugin;
}

const locales = {
  en: ["Copy", "Copied!", "Copied to clipboard"],
  es: ["Copiar", "¡Copiado!", "Copiado al portapapeles"],
  fr: ["Copier", "Copié !", "Copié dans le presse-papier"],
  de: ["Kopieren", "Kopiert!", "In die Zwischenablage kopiert"],
  ja: ["コピー", "コピーしました!", "クリップボードにコピーしました"],
  ko: ["복사", "복사됨!", "클립보드에 복사됨"],
  ru: ["Копировать", "Скопировано!", "Скопировано в буфер обмена"],
  zh: ["复制", "已复制!", "已复制到剪贴板"],
  "zh-tw": ["複製", "已複製!", "已複製到剪貼簿"],
};

View file

@@ -32,7 +32,12 @@ from modules.text_generation import (
    get_encoded_length,
    get_max_prompt_length
)
from modules.utils import (
    delete_file,
    get_available_characters,
    get_available_users,
    save_file
)
from modules.web_search import add_web_search_attachments
@@ -1647,6 +1652,150 @@ def delete_character(name, instruct=False):
        delete_file(Path(f'user_data/characters/{name}.{extension}'))
def generate_user_pfp_cache(user):
    """Generate cached profile picture for user"""
    cache_folder = Path(shared.args.disk_cache_dir)
    if not cache_folder.exists():
        cache_folder.mkdir()

    for path in [Path(f"user_data/users/{user}.{extension}") for extension in ['png', 'jpg', 'jpeg']]:
        if path.exists():
            original_img = Image.open(path)

            # Define file paths
            pfp_path = Path(f'{cache_folder}/pfp_me.png')

            # Save thumbnail
            thumb = make_thumbnail(original_img)
            thumb.save(pfp_path, format='PNG')
            logger.info(f'User profile picture cached to "{pfp_path}"')
            return str(pfp_path)

    return None
def load_user(user_name, name1, user_bio):
    """Load user profile from YAML file"""
    picture = None
    filepath = None
    for extension in ["yml", "yaml", "json"]:
        filepath = Path(f'user_data/users/{user_name}.{extension}')
        if filepath.exists():
            break

    if filepath is None or not filepath.exists():
        logger.error(f"Could not find the user \"{user_name}\" inside user_data/users. No user has been loaded.")
        raise ValueError

    with open(filepath, 'r', encoding='utf-8') as f:
        file_contents = f.read()

    extension = filepath.suffix[1:]  # Remove the leading dot
    data = json.loads(file_contents) if extension == "json" else yaml.safe_load(file_contents)

    # Clear existing user picture cache
    cache_folder = Path(shared.args.disk_cache_dir)
    pfp_path = Path(f"{cache_folder}/pfp_me.png")
    if pfp_path.exists():
        pfp_path.unlink()

    # Generate new picture cache
    picture = generate_user_pfp_cache(user_name)

    # Get user name
    if 'name' in data and data['name'] != '':
        name1 = data['name']

    # Get user bio
    if 'user_bio' in data:
        user_bio = data['user_bio']

    return name1, user_bio, picture
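`load_user` probes extensions in a fixed order, so a `.yml` profile shadows a `.json` one with the same stem. A minimal stdlib sketch of just that lookup step (the directory and names here are hypothetical, not the module's API):

```python
from pathlib import Path
import tempfile

def find_profile(users_dir, user_name):
    """Return the first existing profile file, preferring yml, then yaml, then json."""
    for extension in ["yml", "yaml", "json"]:
        candidate = Path(users_dir) / f"{user_name}.{extension}"
        if candidate.exists():
            return candidate
    return None

with tempfile.TemporaryDirectory() as d:
    (Path(d) / "You.json").write_text("{}")
    (Path(d) / "You.yml").write_text("name: You")
    print(find_profile(d, "You").suffix)  # .yml
```

Because the loop breaks on the first hit, a stray `.json` copy of a profile is silently ignored once a YAML version exists.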
def generate_user_yaml(name, user_bio):
    """Generate YAML content for user profile"""
    data = {
        'name': name,
        'user_bio': user_bio,
    }

    return yaml.dump(data, sort_keys=False, width=float("inf"))
def save_user(name, user_bio, picture, filename):
    """Save user profile to YAML file"""
    if filename == "":
        logger.error("The filename is empty, so the user will not be saved.")
        return

    # Ensure the users directory exists
    users_dir = Path('user_data/users')
    users_dir.mkdir(parents=True, exist_ok=True)

    data = generate_user_yaml(name, user_bio)
    filepath = Path(f'user_data/users/{filename}.yaml')
    save_file(filepath, data)

    path_to_img = Path(f'user_data/users/{filename}.png')
    if picture is not None:
        # Copy the image file from its source path to the users folder
        shutil.copy(picture, path_to_img)
        logger.info(f'Saved user profile picture to {path_to_img}.')
def delete_user(name):
    """Delete user profile files"""
    # Check for user data files
    for extension in ["yml", "yaml", "json"]:
        delete_file(Path(f'user_data/users/{name}.{extension}'))

    # Check for user image files
    for extension in ["png", "jpg", "jpeg"]:
        delete_file(Path(f'user_data/users/{name}.{extension}'))
def update_user_menu_after_deletion(idx):
    """Update user menu after a user is deleted"""
    users = get_available_users()
    if len(users) == 0:
        # Create a default user if none exist
        save_user('You', '', None, 'Default')
        users = get_available_users()

    idx = min(int(idx), len(users) - 1)
    idx = max(0, idx)
    return gr.update(choices=users, value=users[idx])
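`update_user_menu_after_deletion` keeps the dropdown selection in bounds after the list shrinks; the clamping step on its own (the helper name here is illustrative, not part of the module):

```python
def clamp_index(idx, n):
    """Clamp a selection index (possibly a string, as Gradio passes it) into [0, n - 1]."""
    return max(0, min(int(idx), n - 1))

print(clamp_index("5", 3))  # 2
print(clamp_index(0, 1))    # 0
```

The `min` bounds the index against the shortened list and the outer `max` guards against a negative result when the list is empty save for the freshly created default.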
def handle_user_menu_change(state):
    """Handle user menu selection change"""
    try:
        name1, user_bio, picture = load_user(state['user_menu'], state['name1'], state['user_bio'])
        return [
            name1,
            user_bio,
            picture
        ]
    except Exception as e:
        logger.error(f"Failed to load user '{state['user_menu']}': {e}")
        return [
            state['name1'],
            state['user_bio'],
            None
        ]


def handle_save_user_click(name1):
    """Handle save user button click"""
    return [
        name1,
        gr.update(visible=True)
    ]
def jinja_template_from_old_format(params, verbose=False):
    MASTER_TEMPLATE = """
{%- set ns = namespace(found=false) -%}

View file

@@ -108,91 +108,64 @@ def replace_blockquote(m):
    return m.group().replace('\n', '\n> ').replace('\\begin{blockquote}', '').replace('\\end{blockquote}', '')
# Thinking block format definitions: (start_tag, end_tag, content_start_tag)
# Use None for start_tag to match from beginning (end-only formats should be listed last)
THINKING_FORMATS = [
    ('<think>', '</think>', None),
    ('<|channel|>analysis<|message|>', '<|end|>', '<|start|>assistant<|channel|>final<|message|>'),
    ('<seed:think>', '</seed:think>', None),
    ('<|think|>', '<|end|>', '<|content|>'),  # Solar Open
    (None, '</think>', None),  # End-only variant (e.g., Qwen3-next)
]
def extract_thinking_block(string):
    """Extract thinking blocks from the beginning of a string."""
    if not string:
        return None, string

    for start_tag, end_tag, content_tag in THINKING_FORMATS:
        end_esc = html.escape(end_tag)
        content_esc = html.escape(content_tag) if content_tag else None

        if start_tag is None:
            # End-only format: require end tag, start from beginning
            end_pos = string.find(end_esc)
            if end_pos == -1:
                continue
            thought_start = 0
        else:
            # Normal format: require start tag
            start_esc = html.escape(start_tag)
            start_pos = string.find(start_esc)
            if start_pos == -1:
                continue
            thought_start = start_pos + len(start_esc)
            end_pos = string.find(end_esc, thought_start)

        if end_pos == -1:
            # End tag missing - check if content tag can serve as fallback
            if content_esc:
                content_pos = string.find(content_esc, thought_start)
                if content_pos != -1:
                    thought_end = content_pos
                    content_start = content_pos + len(content_esc)
                else:
                    thought_end = len(string)
                    content_start = len(string)
            else:
                thought_end = len(string)
                content_start = len(string)
        else:
            thought_end = end_pos
            if content_esc:
                content_pos = string.find(content_esc, end_pos)
                content_start = content_pos + len(content_esc) if content_pos != -1 else end_pos + len(end_esc)
            else:
                content_start = end_pos + len(end_esc)

        return string[thought_start:thought_end], string[content_start:]

    # Return if no format is found
    return None, string
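The formats are matched against HTML-escaped text, which is why each tag is passed through `html.escape` before searching. A simplified stdlib sketch of the plain `<think>` case (a stand-alone illustration, not the module's full function):

```python
import html

def extract_think(string):
    """Split an HTML-escaped '<think>...</think>rest' string into (thought, rest)."""
    start_tag, end_tag = html.escape("<think>"), html.escape("</think>")
    start = string.find(start_tag)
    if start == -1:
        return None, string
    thought_start = start + len(start_tag)
    end = string.find(end_tag, thought_start)
    if end == -1:
        # Unterminated block: everything after the start tag is "thinking"
        return string[thought_start:], ""
    return string[thought_start:end], string[end + len(end_tag):]

escaped = html.escape("<think>step by step</think>The answer is 4.")
print(extract_think(escaped))  # ('step by step', 'The answer is 4.')
```

Searching for `&lt;think&gt;` rather than `<think>` matters because by this point the model output has already been escaped for display.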

View file

@@ -298,10 +298,24 @@ class LlamaServer:
        if "bos_token" in response:
            self.bos_token = response["bos_token"]

    def _is_port_available(self, port):
        """Check if a port is available for use."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(('', port))
                return True
            except OSError:
                return False

    def _find_available_port(self):
        """Find an available port, preferring main port + 1."""
        preferred_port = shared.args.api_port + 1
        if self._is_port_available(preferred_port):
            return preferred_port

        # Fall back to OS-assigned random port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind(('', 0))
            return s.getsockname()[1]
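The fallback path relies on binding to port 0, which asks the OS for any free ephemeral port; a self-contained sketch of that idiom:

```python
import socket

def free_port():
    """Let the OS pick an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(('', 0))
        # getsockname() reveals which port the OS actually assigned
        return s.getsockname()[1]

port = free_port()
print(0 < port < 65536)  # True
```

Note the socket is closed before the port is used, so there is a small race window; that is acceptable here because the server binds again immediately afterwards.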
def _start_server(self):

View file

@@ -298,6 +298,7 @@ settings = {
    # Character settings
    'character': 'Assistant',
    'user': 'Default',
    'name1': 'You',
    'name2': 'AI',
    'user_bio': '',

View file

@@ -251,6 +251,7 @@ def list_interface_input_elements():
        'chat_style',
        'chat-instruct_command',
        'character_menu',
        'user_menu',
        'name2',
        'context',
        'greeting',
@@ -353,6 +354,8 @@ def save_settings(state, preset, extensions_list, show_controls, theme_state, ma
    output['preset'] = preset
    output['prompt-notebook'] = state['prompt_menu-default'] if state['show_two_notebook_columns'] else state['prompt_menu-notebook']
    output['character'] = state['character_menu']
    if 'user_menu' in state and state['user_menu']:
        output['user'] = state['user_menu']
    output['seed'] = int(output['seed'])
    output['show_controls'] = show_controls
    output['dark_theme'] = True if theme_state == 'dark' else False
@@ -457,6 +460,7 @@ def setup_auto_save():
        'chat_style',
        'chat-instruct_command',
        'character_menu',
        'user_menu',
        'name1',
        'name2',
        'context',

View file

@@ -137,6 +137,12 @@ def create_character_settings_ui():
            shared.gradio['greeting'] = gr.Textbox(value=shared.settings['greeting'], lines=5, label='Greeting', elem_classes=['add_scrollbar'], elem_id="character-greeting")

        with gr.Tab("User"):
            with gr.Row():
                shared.gradio['user_menu'] = gr.Dropdown(value=shared.settings['user'], choices=utils.get_available_users(), label='User', elem_id='user-menu', info='Select a user profile.', elem_classes='slim-dropdown')
                ui.create_refresh_button(shared.gradio['user_menu'], lambda: None, lambda: {'choices': utils.get_available_users()}, 'refresh-button', interactive=not mu)
                shared.gradio['save_user'] = gr.Button('💾', elem_classes='refresh-button', elem_id="save-user", interactive=not mu)
                shared.gradio['delete_user'] = gr.Button('🗑️', elem_classes='refresh-button', interactive=not mu)

            shared.gradio['name1'] = gr.Textbox(value=shared.settings['name1'], lines=1, label='Name')
            shared.gradio['user_bio'] = gr.Textbox(value=shared.settings['user_bio'], lines=10, label='Description', info='Here you can optionally write a description of yourself.', placeholder='{{user}}\'s personality: ...', elem_classes=['add_scrollbar'], elem_id="user-description")
@@ -372,3 +378,11 @@ def create_event_handlers():
        gradio('enable_web_search'),
        gradio('web_search_row')
    )

    # User menu event handlers
    shared.gradio['user_menu'].change(
        ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(
        chat.handle_user_menu_change, gradio('interface_state'), gradio('name1', 'user_bio', 'your_picture'), show_progress=False)

    shared.gradio['save_user'].click(chat.handle_save_user_click, gradio('name1'), gradio('save_user_filename', 'user_saver'), show_progress=False)
    shared.gradio['delete_user'].click(lambda: gr.update(visible=True), None, gradio('user_deleter'), show_progress=False)

View file

@@ -39,6 +39,19 @@ def create_ui():
                shared.gradio['delete_character_cancel'] = gr.Button('Cancel', elem_classes="small-button")
                shared.gradio['delete_character_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop', interactive=not mu)

        # User saver/deleter
        with gr.Group(visible=False, elem_classes='file-saver') as shared.gradio['user_saver']:
            shared.gradio['save_user_filename'] = gr.Textbox(lines=1, label='File name', info='The user profile will be saved to your user_data/users folder with this base filename.')
            with gr.Row():
                shared.gradio['save_user_cancel'] = gr.Button('Cancel', elem_classes="small-button")
                shared.gradio['save_user_confirm'] = gr.Button('Save', elem_classes="small-button", variant='primary', interactive=not mu)

        with gr.Group(visible=False, elem_classes='file-saver') as shared.gradio['user_deleter']:
            gr.Markdown('Confirm the user deletion?')
            with gr.Row():
                shared.gradio['delete_user_cancel'] = gr.Button('Cancel', elem_classes="small-button")
                shared.gradio['delete_user_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop', interactive=not mu)

        # Preset saver
        with gr.Group(visible=False, elem_classes='file-saver') as shared.gradio['preset_saver']:
            shared.gradio['save_preset_filename'] = gr.Textbox(lines=1, label='File name', info='The preset will be saved to your user_data/presets folder with this base filename.')
@@ -69,6 +82,12 @@ def create_event_handlers():
    shared.gradio['save_character_cancel'].click(lambda: gr.update(visible=False), None, gradio('character_saver'), show_progress=False)
    shared.gradio['delete_character_cancel'].click(lambda: gr.update(visible=False), None, gradio('character_deleter'), show_progress=False)

    # User save/delete event handlers
    shared.gradio['save_user_confirm'].click(handle_save_user_confirm_click, gradio('name1', 'user_bio', 'your_picture', 'save_user_filename'), gradio('user_menu', 'user_saver'), show_progress=False)
    shared.gradio['delete_user_confirm'].click(handle_delete_user_confirm_click, gradio('user_menu'), gradio('user_menu', 'user_deleter'), show_progress=False)
    shared.gradio['save_user_cancel'].click(lambda: gr.update(visible=False), None, gradio('user_saver'), show_progress=False)
    shared.gradio['delete_user_cancel'].click(lambda: gr.update(visible=False), None, gradio('user_deleter'), show_progress=False)
def handle_save_preset_confirm_click(filename, contents):
    try:
@@ -165,3 +184,33 @@ def handle_delete_grammar_click(grammar_file):
        "user_data/grammars/",
        gr.update(visible=True)
    ]
def handle_save_user_confirm_click(name1, user_bio, your_picture, filename):
    try:
        chat.save_user(name1, user_bio, your_picture, filename)
        available_users = utils.get_available_users()
        output = gr.update(choices=available_users, value=filename)
    except Exception:
        output = gr.update()
        traceback.print_exc()

    return [
        output,
        gr.update(visible=False)
    ]


def handle_delete_user_confirm_click(user):
    try:
        index = str(utils.get_available_users().index(user))
        chat.delete_user(user)
        output = chat.update_user_menu_after_deletion(index)
    except Exception:
        output = gr.update()
        traceback.print_exc()

    return [
        output,
        gr.update(visible=False)
    ]

View file

@@ -219,6 +219,13 @@ def get_available_characters():
    return sorted(set((k.stem for k in paths)), key=natural_keys)


def get_available_users():
    users_dir = Path('user_data/users')
    users_dir.mkdir(parents=True, exist_ok=True)
    paths = (x for x in users_dir.iterdir() if x.suffix in ('.json', '.yaml', '.yml'))
    return sorted(set((k.stem for k in paths)), key=natural_keys)


def get_available_instruction_templates():
    path = "user_data/instruction-templates"
    paths = []
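Both the character and user listings sort with `natural_keys`, so `user2` precedes `user10`. A plausible stand-alone version of such a key function (the module's actual implementation may differ):

```python
import re

def natural_keys(text):
    """Sort key that compares digit runs numerically and the rest case-insensitively."""
    # Capturing split keeps the digit runs: "user10" -> ['user', '10', '']
    return [int(tok) if tok.isdigit() else tok.lower() for tok in re.split(r'(\d+)', text)]

names = ["user10", "user2", "alpha"]
print(sorted(names, key=natural_keys))  # ['alpha', 'user2', 'user10']
```

Plain lexicographic sorting would place `user10` before `user2`, which reads wrong in a dropdown of numbered profiles.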

View file

@@ -28,7 +28,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -28,7 +28,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -26,7 +26,7 @@ sentencepiece
tensorboard
torchao==0.15.*
transformers==4.57.*
triton-windows==3.5.1.post24; platform_system == "Windows"
tqdm
wandb

View file

@@ -0,0 +1,2 @@
name: You
user_bio: ''