mirror of https://github.com/oobabooga/text-generation-webui.git (synced 2025-12-06 07:12:10 +01:00)
commit 17f9c188bd

README.md
@@ -15,11 +15,11 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 - Supports multiple local text generation backends, including [llama.cpp](https://github.com/ggerganov/llama.cpp), [Transformers](https://github.com/huggingface/transformers), [ExLlamaV3](https://github.com/turboderp-org/exllamav3), [ExLlamaV2](https://github.com/turboderp-org/exllamav2), and [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) (the latter via its own [Dockerfile](https://github.com/oobabooga/text-generation-webui/blob/main/docker/TensorRT-LLM/Dockerfile)).
 - Easy setup: Choose between **portable builds** (zero setup, just unzip and run) for GGUF models on Windows/Linux/macOS, or the one-click installer that creates a self-contained `installer_files` directory.
 - 100% offline and private, with zero telemetry, external resources, or remote update requests.
-- Automatic prompt formatting using Jinja2 templates. You don't need to ever worry about prompt formats.
 - **File attachments**: Upload text files, PDF documents, and .docx documents to talk about their contents.
 - **Web search**: Optionally search the internet with LLM-generated queries to add context to the conversation.
 - Aesthetic UI with dark and light themes.
 - `instruct` mode for instruction-following (like ChatGPT), and `chat-instruct`/`chat` modes for talking to custom characters.
+- Automatic prompt formatting using Jinja2 templates. You don't need to ever worry about prompt formats.
 - Edit messages, navigate between message versions, and branch conversations at any point.
 - Multiple sampling parameters and generation options for sophisticated text generation control.
 - Switch between different models in the UI without restarting.
@@ -57,7 +57,7 @@ To update, run the update script for your OS: `update_wizard_windows.bat`, `upda
 <details>
 <summary>
-Setup details and information about installing manually
+One-click installer details
 </summary>

 ### One-click-installer
@@ -67,13 +67,51 @@ The script uses Miniconda to set up a Conda environment in the `installer_files`
 If you ever need to install something manually in the `installer_files` environment, you can launch an interactive shell using the cmd script: `cmd_linux.sh`, `cmd_windows.bat`, or `cmd_macos.sh`.

 * There is no need to run any of those scripts (`start_`, `update_wizard_`, or `cmd_`) as admin/root.
-* To install the requirements for extensions, you can use the `extensions_reqs` script for your OS. At the end, this script will install the main requirements for the project to make sure that they take precedence in case of version conflicts.
+* To install requirements for extensions, it is recommended to use the update wizard script with the "Install/update extensions requirements" option. At the end, this script will install the main requirements for the project to make sure that they take precedence in case of version conflicts.
-* For additional instructions about AMD and WSL setup, consult [the documentation](https://github.com/oobabooga/text-generation-webui/wiki).
 * For automated installation, you can use the `GPU_CHOICE`, `LAUNCH_AFTER_INSTALL`, and `INSTALL_EXTENSIONS` environment variables. For instance: `GPU_CHOICE=A LAUNCH_AFTER_INSTALL=FALSE INSTALL_EXTENSIONS=TRUE ./start_linux.sh`.

-### Manual installation using Conda
+</details>

-Recommended if you have some experience with the command-line.
+<details>
+<summary>
+Manual portable installation with venv
+</summary>
+
+### Manual portable installation with venv
+
+Very fast setup that should work on any Python 3.9+:
+
+```bash
+# Clone repository
+git clone https://github.com/oobabooga/text-generation-webui
+cd text-generation-webui
+
+# Create virtual environment
+python -m venv venv
+
+# Activate virtual environment
+# On Windows:
+venv\Scripts\activate
+# On macOS/Linux:
+source venv/bin/activate
+
+# Install dependencies (choose appropriate file under requirements/portable for your hardware)
+pip install -r requirements/portable/requirements.txt
+
+# Launch server (basic command)
+python server.py --portable --api --auto-launch
+
+# When done working, deactivate
+deactivate
+```
+</details>
+
+<details>
+<summary>
+Manual full installation with conda or docker
+</summary>
+
+### Full installation with Conda
+
 #### 0. Install Conda
@@ -82,6 +82,7 @@ class ModelDownloader:
         links = []
         sha256 = []
+        file_sizes = []
         classifications = []
         has_pytorch = False
         has_pt = False
@@ -118,8 +119,14 @@ class ModelDownloader:
             is_tokenizer = re.match(r"(tokenizer|ice|spiece).*\.model", fname) or is_tiktoken
             is_text = re.match(r".*\.(txt|json|py|md)", fname) or is_tokenizer
             if any((is_pytorch, is_safetensors, is_pt, is_gguf, is_tokenizer, is_text)):
+                file_size = 0
                 if 'lfs' in dict[i]:
                     sha256.append([fname, dict[i]['lfs']['oid']])
+                    file_size = dict[i]['lfs'].get('size', 0)
+                elif 'size' in dict[i]:
+                    file_size = dict[i]['size']
+
+                file_sizes.append(file_size)

                 if is_text:
                     links.append(f"{base}/{model}/resolve/{branch}/{fname}")
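For reference, the size fields read in this hunk come from the per-file entries of the Hugging Face model tree API. A minimal sketch of the entry shape the loop appears to parse; the values are hypothetical and the field layout is an assumption inferred from the keys used above:

```python
# One file entry as the code above expects it (values hypothetical):
entry = {
    "path": "model-Q4_K_M.gguf",
    "size": 4_920_000_000,                             # plain size, used when no LFS info
    "lfs": {"oid": "abc123", "size": 4_920_000_000},   # LFS-tracked files carry size here
}

# Mirrors the branching in the hunk:
file_size = 0
if "lfs" in entry:
    file_size = entry["lfs"].get("size", 0)
elif "size" in entry:
    file_size = entry["size"]
```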
@@ -152,6 +159,7 @@ class ModelDownloader:
         for i in range(len(classifications) - 1, -1, -1):
             if classifications[i] in ['pytorch', 'pt', 'gguf']:
                 links.pop(i)
+                file_sizes.pop(i)

         # For GGUF, try to download only the Q4_K_M if no specific file is specified.
         if has_gguf and specific_file is None:
@@ -164,13 +172,15 @@ class ModelDownloader:
                 for i in range(len(classifications) - 1, -1, -1):
                     if 'q4_k_m' not in links[i].lower():
                         links.pop(i)
+                        file_sizes.pop(i)
             else:
                 for i in range(len(classifications) - 1, -1, -1):
                     if links[i].lower().endswith('.gguf'):
                         links.pop(i)
+                        file_sizes.pop(i)

         is_llamacpp = has_gguf and specific_file is not None
-        return links, sha256, is_lora, is_llamacpp
+        return links, sha256, is_lora, is_llamacpp, file_sizes

     def get_output_folder(self, model, branch, is_lora, is_llamacpp=False, model_dir=None):
         if model_dir:
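The paired `pop(i)` calls in these hunks exist because `links` and `file_sizes` are built index-aligned; filtering one list without the other would attach sizes to the wrong files. A minimal illustration (data hypothetical):

```python
links = ["a.gguf", "b.safetensors", "c.gguf"]
file_sizes = [100, 200, 300]

# Iterate in reverse so pending indices remain valid after each pop:
for i in range(len(links) - 1, -1, -1):
    if links[i].lower().endswith(".gguf"):
        links.pop(i)
        file_sizes.pop(i)

assert links == ["b.safetensors"]
assert file_sizes == [200]  # still aligned; popping only `links` would leave all three sizes behind
```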
@@ -396,7 +406,7 @@ if __name__ == '__main__':
         sys.exit()

     # Get the download links from Hugging Face
-    links, sha256, is_lora, is_llamacpp = downloader.get_download_links_from_huggingface(
+    links, sha256, is_lora, is_llamacpp, file_sizes = downloader.get_download_links_from_huggingface(
         model, branch, text_only=args.text_only, specific_file=specific_file, exclude_pattern=exclude_pattern
     )

@@ -15,7 +15,16 @@ def get_current_model_info():


 def list_models():
-    return {'model_names': get_available_models()[1:]}
+    return {'model_names': get_available_models()}
+
+
+def list_models_openai_format():
+    """Returns model list in OpenAI API format"""
+    model_names = get_available_models()
+    return {
+        "object": "list",
+        "data": [model_info_dict(name) for name in model_names]
+    }


 def model_info_dict(model_name: str) -> dict:
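Two things change here: `list_models()` no longer slices off the first entry of `get_available_models()`, and the new `list_models_openai_format()` wraps the same names in the OpenAI list schema. A sketch of the two payload shapes, assuming `model_info_dict()` yields an OpenAI-style entry (model names invented for illustration):

```python
# Internal endpoint payload:
legacy = {"model_names": ["Llama-3.1-8B-Instruct", "Qwen2.5-7B-Instruct"]}

# OpenAI-compatible /v1/models payload:
openai_style = {
    "object": "list",
    "data": [
        {"id": "Llama-3.1-8B-Instruct", "object": "model"},
        {"id": "Qwen2.5-7B-Instruct", "object": "model"},
    ],
}
```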
@@ -180,7 +180,7 @@ async def handle_models(request: Request):
     is_list = request.url.path.split('?')[0].split('#')[0] == '/v1/models'

     if is_list:
-        response = OAImodels.list_models()
+        response = OAImodels.list_models_openai_format()
     else:
         model_name = path[len('/v1/models/'):]
         response = OAImodels.model_info_dict(model_name)
@@ -351,3 +351,24 @@ function handleMorphdomUpdate(data) {
     }
   });
 }
+
+// Wait for Gradio to finish setting its styles, then force dark theme
+const observer = new MutationObserver((mutations) => {
+  mutations.forEach((mutation) => {
+    if (mutation.type === "attributes" &&
+        mutation.target.tagName === "GRADIO-APP" &&
+        mutation.attributeName === "style") {
+
+      // Gradio just set its styles, now force dark theme
+      document.body.classList.add("dark");
+      observer.disconnect();
+    }
+  });
+});
+
+// Start observing
+observer.observe(document.documentElement, {
+  attributes: true,
+  subtree: true,
+  attributeFilter: ["style"]
+});
@@ -1,5 +1,6 @@
 import builtins
 import io
+import re

 import requests

@@ -62,6 +63,13 @@ def my_open(*args, **kwargs):
             '\n    </head>'
         )

+        file_contents = re.sub(
+            r'@media \(prefers-color-scheme: dark\) \{\s*body \{([^}]*)\}\s*\}',
+            r'body.dark {\1}',
+            file_contents,
+            flags=re.DOTALL
+        )
+
         if len(args) > 1 and args[1] == 'rb':
            file_contents = file_contents.encode('utf-8')
         return io.BytesIO(file_contents)
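A before/after sketch of the substitution added above: it rewrites the OS-preference media query into an explicit `body.dark` rule, so dark styling follows the class the UI toggles rather than the system theme (input CSS hypothetical):

```python
import re

css = "@media (prefers-color-scheme: dark) {\n  body { background: #000; }\n}"
out = re.sub(
    r'@media \(prefers-color-scheme: dark\) \{\s*body \{([^}]*)\}\s*\}',
    r'body.dark {\1}',
    css,
    flags=re.DOTALL,
)
assert out == "body.dark { background: #000; }"
```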
@@ -77,7 +77,7 @@ def get_model_metadata(model):
             model_settings['compress_pos_emb'] = metadata[k]
         elif k.endswith('rope.scaling.factor'):
             model_settings['compress_pos_emb'] = metadata[k]
-        elif k.endswith('block_count'):
+        elif k.endswith('.block_count'):
             model_settings['gpu_layers'] = metadata[k] + 1
             model_settings['max_gpu_layers'] = metadata[k] + 1

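The added dot tightens the suffix check: GGUF metadata keys are dot-namespaced (for example `llama.block_count`), and without the separator any key that merely ends in the letters `block_count` would also match. A small illustration (the second key is hypothetical):

```python
keys = ["llama.block_count", "custom_block_count"]

[k for k in keys if k.endswith("block_count")]   # both keys match
[k for k in keys if k.endswith(".block_count")]  # only 'llama.block_count'
```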
@@ -69,9 +69,9 @@ if not shared.args.old_colors:
         border_color_primary='#c5c5d2',
         body_text_color_subdued='#484848',
         background_fill_secondary='#eaeaea',
-        background_fill_secondary_dark='var(--selected-item-color-dark)',
+        background_fill_secondary_dark='var(--selected-item-color-dark, #282930)',
         background_fill_primary='var(--neutral-50)',
-        background_fill_primary_dark='var(--darker-gray)',
+        background_fill_primary_dark='var(--darker-gray, #1C1C1D)',
         body_background_fill="white",
         block_background_fill="transparent",
         body_text_color='rgb(64, 64, 64)',
@@ -81,25 +81,25 @@ if not shared.args.old_colors:
         button_shadow_hover="none",

         # Dark Mode Colors
-        input_background_fill_dark='var(--darker-gray)',
+        input_background_fill_dark='var(--darker-gray, #1C1C1D)',
-        checkbox_background_color_dark='var(--darker-gray)',
+        checkbox_background_color_dark='var(--darker-gray, #1C1C1D)',
         block_background_fill_dark='transparent',
         block_border_color_dark='transparent',
-        input_border_color_dark='var(--border-color-dark)',
+        input_border_color_dark='var(--border-color-dark, #525252)',
-        input_border_color_focus_dark='var(--border-color-dark)',
+        input_border_color_focus_dark='var(--border-color-dark, #525252)',
-        checkbox_border_color_dark='var(--border-color-dark)',
+        checkbox_border_color_dark='var(--border-color-dark, #525252)',
-        border_color_primary_dark='var(--border-color-dark)',
+        border_color_primary_dark='var(--border-color-dark, #525252)',
-        button_secondary_border_color_dark='var(--border-color-dark)',
+        button_secondary_border_color_dark='var(--border-color-dark, #525252)',
-        body_background_fill_dark='var(--dark-gray)',
+        body_background_fill_dark='var(--dark-gray, #212125)',
         button_primary_background_fill_dark='transparent',
         button_secondary_background_fill_dark='transparent',
         checkbox_label_background_fill_dark='transparent',
         button_cancel_background_fill_dark='transparent',
-        button_secondary_background_fill_hover_dark='var(--selected-item-color-dark)',
+        button_secondary_background_fill_hover_dark='var(--selected-item-color-dark, #282930)',
-        checkbox_label_background_fill_hover_dark='var(--selected-item-color-dark)',
+        checkbox_label_background_fill_hover_dark='var(--selected-item-color-dark, #282930)',
-        table_even_background_fill_dark='var(--darker-gray)',
+        table_even_background_fill_dark='var(--darker-gray, #1C1C1D)',
-        table_odd_background_fill_dark='var(--selected-item-color-dark)',
+        table_odd_background_fill_dark='var(--selected-item-color-dark, #282930)',
-        code_background_fill_dark='var(--darker-gray)',
+        code_background_fill_dark='var(--darker-gray, #1C1C1D)',

         # Shadows and Radius
         checkbox_label_shadow='none',
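Both theme hunks make the same change: every `var(--x)` reference gains an explicit fallback, and `var(--x, fallback)` resolves to the fallback whenever the custom property is undefined, so the dark theme degrades gracefully instead of producing empty values. The equivalent lookup in Python terms (names hypothetical):

```python
css_vars = {}  # a page where --darker-gray was never defined
resolved = css_vars.get("--darker-gray", "#1C1C1D")  # falls back to the literal color
```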
@@ -1,4 +1,5 @@
 import importlib
+import math
 import queue
 import threading
 import traceback
@@ -244,7 +245,7 @@ def download_model_wrapper(repo_id, specific_file, progress=gr.Progress(), retur

     model, branch = downloader.sanitize_model_and_branch_names(repo_id, None)
     yield "Getting download links from Hugging Face..."
-    links, sha256, is_lora, is_llamacpp = downloader.get_download_links_from_huggingface(model, branch, text_only=False, specific_file=specific_file)
+    links, sha256, is_lora, is_llamacpp, file_sizes = downloader.get_download_links_from_huggingface(model, branch, text_only=False, specific_file=specific_file)

     if not links:
         yield "No files found to download for the given model/criteria."
@@ -254,17 +255,33 @@ def download_model_wrapper(repo_id, specific_file, progress=gr.Progress(), retur
     # Check for multiple GGUF files
     gguf_files = [link for link in links if link.lower().endswith('.gguf')]
     if len(gguf_files) > 1 and not specific_file:
-        output = "Multiple GGUF files found. Please copy one of the following filenames to the 'File name' field:\n\n```\n"
-        for link in gguf_files:
-            output += f"{Path(link).name}\n"
+        # Sort by size in ascending order
+        gguf_data = []
+        for i, link in enumerate(links):
+            if link.lower().endswith('.gguf'):
+                file_size = file_sizes[i]
+                gguf_data.append((file_size, link))
+
+        gguf_data.sort(key=lambda x: x[0])
+
+        output = "Multiple GGUF files found. Please copy one of the following filenames to the 'File name' field above:\n\n```\n"
+        for file_size, link in gguf_data:
+            size_str = format_file_size(file_size)
+            output += f"{size_str} - {Path(link).name}\n"
+
         output += "```"
         yield output
         return

     if return_links:
+        # Sort by size in ascending order
+        file_data = list(zip(file_sizes, links))
+        file_data.sort(key=lambda x: x[0])
+
         output = "```\n"
-        for link in links:
-            output += f"{Path(link).name}" + "\n"
+        for file_size, link in file_data:
+            size_str = format_file_size(file_size)
+            output += f"{size_str} - {Path(link).name}\n"
+
         output += "```"
         yield output
@@ -391,3 +408,19 @@ def handle_load_model_event_final(truncation_length, loader, state):
 def handle_unload_model_click():
     unload_model()
     return "Model unloaded"
+
+
+def format_file_size(size_bytes):
+    """Convert bytes to human readable format with 2 decimal places for GB and above"""
+    if size_bytes == 0:
+        return "0 B"
+
+    size_names = ["B", "KB", "MB", "GB", "TB"]
+    i = int(math.floor(math.log(size_bytes, 1024)))
+    p = math.pow(1024, i)
+    s = size_bytes / p
+
+    if i >= 3:  # GB or TB
+        return f"{s:.2f} {size_names[i]}"
+    else:
+        return f"{s:.1f} {size_names[i]}"
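A few sanity-check values for the `format_file_size` helper above, traced by hand from its definition (inputs hypothetical):

```python
format_file_size(0)              # '0 B'
format_file_size(512)            # '512.0 B'
format_file_size(4_815_162)      # '4.6 MB'
format_file_size(7_000_000_000)  # '6.52 GB' (two decimals from GB upward)
```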
@@ -9,7 +9,7 @@ gradio==4.37.*
 html2text==2025.4.15
 jinja2==3.1.6
 markdown
-numpy==1.26.*
+numpy==2.2.*
 pandas
 peft==0.15.*
 Pillow>=9.5.0

@@ -4,7 +4,7 @@ gradio==4.37.*
 html2text==2025.4.15
 jinja2==3.1.6
 markdown
-numpy==1.26.*
+numpy==2.2.*
 pydantic==2.8.2
 PyPDF2==3.0.1
 python-docx==1.1.2

(The same one-line `numpy==1.26.*` to `numpy==2.2.*` bump is applied, with otherwise identical hunks, to each of the remaining full and portable requirements files in the commit.)