Several small fixes

- Stop llama-server subprocess on model unload instead of relying on GC
- Fix tool_calls[].index being string instead of int in API responses
- Omit tool_calls key from API response when empty per OpenAI spec
- Prevent division by zero when micro_batch_size > batch_size in training
- Copy sampler_priority list before mutating in ExLlamaV3
- Normalize presence/frequency_penalty names for ExLlamaV3 sampler sorting
- Restore original chat_template after training instead of leaving it mutated
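The "copy sampler_priority list before mutating" fix guards against a classic Python aliasing bug: `state.get(...) or default_priority` can return the shared default list itself, so the later `pop`/`append` rotation for `temperature_last` would corrupt the default for every subsequent call. A minimal standalone sketch (names simplified, not the project's actual code):

```python
# Sketch of the aliasing bug: without list(), the priority variable can
# alias the shared default, and mutating it corrupts later calls.
DEFAULT_PRIORITY = ['repetition_penalty', 'temperature', 'top_k', 'top_p']

def build_priority_buggy(state):
    # Aliases DEFAULT_PRIORITY when 'sampler_priority' is unset
    priority = state.get('sampler_priority') or DEFAULT_PRIORITY
    if state.get('temperature_last'):
        # Mutates the shared default list in place!
        priority.append(priority.pop(priority.index('temperature')))
    return priority

def build_priority_fixed(state):
    # Copy first, so the shared default is never mutated
    priority = list(state.get('sampler_priority') or DEFAULT_PRIORITY)
    if state.get('temperature_last'):
        priority.append(priority.pop(priority.index('temperature')))
    return priority

fixed = build_priority_fixed({'temperature_last': True})
# fixed has 'temperature' moved to the end; DEFAULT_PRIORITY is unchanged
```

The `list(...)` call is all the fix needs: it turns a potentially shared reference into a private copy before any in-place mutation.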
oobabooga 2026-03-06 16:52:02 -03:00
parent 044566d42d
commit d03923924a
4 changed files with 16 additions and 4 deletions


@@ -339,11 +339,16 @@ class Exllamav3Model:
         # 3. Get the priority list and handle temperature_last
         default_priority = ['repetition_penalty', 'presence_frequency_penalty', 'top_k', 'top_p', 'min_p', 'temperature']
-        sampler_priority = state.get('sampler_priority') or default_priority
+        sampler_priority = list(state.get('sampler_priority') or default_priority)
         if state['temperature_last'] and 'temperature' in sampler_priority:
             sampler_priority.append(sampler_priority.pop(sampler_priority.index('temperature')))
 
+        # The preset system uses separate 'presence_penalty' and
+        # 'frequency_penalty', but ExLlamaV3 has a single combined
+        # SS_PresFreqP sampler. Normalize to the combined name.
+        sampler_priority = ['presence_frequency_penalty' if x in ('presence_penalty', 'frequency_penalty') else x for x in sampler_priority]
+
         # 4. Sort the unordered list based on the priority list
         def custom_sort_key(sampler_obj):
             class_name = sampler_obj.__class__.__name__
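The `custom_sort_key` shown in the hunk orders sampler objects by their position in `sampler_priority`, keyed on a name derived from each object's class name. A minimal sketch of the idea, with hypothetical class names and a hypothetical class-to-name mapping (the real mapping lives elsewhere in the module):

```python
# Sketch: sort sampler objects by their index in a priority list.
# Class names and the CLASS_TO_NAME mapping here are illustrative only.
class SS_RepP: pass
class SS_TopK: pass
class SS_Temperature: pass

CLASS_TO_NAME = {
    'SS_RepP': 'repetition_penalty',
    'SS_TopK': 'top_k',
    'SS_Temperature': 'temperature',
}

sampler_priority = ['repetition_penalty', 'top_k', 'temperature']

def custom_sort_key(sampler_obj):
    # Lower index in the priority list => applied earlier
    name = CLASS_TO_NAME[sampler_obj.__class__.__name__]
    return sampler_priority.index(name)

samplers = [SS_Temperature(), SS_RepP(), SS_TopK()]
ordered = sorted(samplers, key=custom_sort_key)
# ordered class names: SS_RepP, SS_TopK, SS_Temperature
```

This is why the normalization in the hunk matters: if the priority list still contained `'presence_penalty'` or `'frequency_penalty'` instead of the combined name, the lookup for the combined sampler would fail or misplace it in the sort order.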