This is not the proper way to emulate non-atomic GET transfers, as it makes it seem as if atomic PUT transfers aren't atomic. (TODO)
Incomplete GET cache line accesses still do not verify data though.
Always report available; on real hardware this is just a hint when the previous tag update hasn't been checked by the MFC yet, avoiding blocking writes and allowing the SPU to execute some code while the previous update request is processed.
The exception is MFC_TAG_UPDATE_IMMEDIATE, where it also waits for the MFC to process the request.
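A minimal sketch of this behaviour with illustrative names (the real logic lives in the MFC channel handling; MFC_TAG_UPDATE_IMMEDIATE processing is omitted):

```cpp
#include <cstdint>

// Sketch only: the MFC_WrTagUpdate channel count is reported as "available"
// unconditionally, so the SPU is never blocked on the write even if the
// previous tag update request has not been consumed by the MFC yet.
struct mfc_tag_update_sketch
{
    std::uint32_t wr_tag_update_count() const
    {
        return 1; // always report a free slot; this is only a hint
    }
};
```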
- Implement vm::reservation_trylock: optimized locking on reservation stores with no waiting. It always fails if the reservation lock bits are set (see the sketch after this list).
- Make SPU accurate GET transfers on non-TSX not modify reservation lock bits.
- Add an optimization for reservation writes of unmodified data.
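A minimal sketch of the trylock idea, assuming a 64-bit per-line reservation counter whose low bits act as lock bits (not the exact RPCS3 implementation):

```cpp
#include <atomic>
#include <cstdint>

// Assumed layout: the counter advances in steps of 128, leaving the low
// 7 bits free to act as lock bits taken by writers.
constexpr std::uint64_t rsrv_lock_mask = 127;

// Single attempt, no waiting: fail immediately if any lock bit is already set
// or if the compare-exchange loses a race.
bool reservation_trylock_sketch(std::atomic<std::uint64_t>& res)
{
    std::uint64_t old = res.load(std::memory_order_relaxed);

    if (old & rsrv_lock_mask)
    {
        return false; // another writer holds the line
    }

    return res.compare_exchange_strong(old, old + 1, std::memory_order_acquire);
}
```

On failure the caller would fall back to the regular (waiting) reservation locking path.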
* sys_spu: Fix race in sys_spu_thread_group_destroy and other minor fixes
* SPU: Wait for all threads to have error codes if exited by sys_spu_thread_exit
Applies to the last thread in the group to run.
* sys_spu: Fix sys_spu_thread_group_start
* fixup and fix sys_spu_thread_group_terminate
It's unclear why "- !group->running" was added in the first place, but it's probably no longer relevant due to other changes, and it was causing other issues such as not always waiting for the last SPU thread to set the group state to INITIALIZED.
Enables trivial synchronization between shared memory mappings.
Reduces memory usage, but potentially degrades performance.
Rename an overload of vm::passive_lock to vm::range_lock.
In a very unscientific benchmark:
spu_thread::do_dma_transfer() was taking 2.27% of my CPU before, now
0.07%, while __memmove_avx_unaligned_erms() was taking 1.47% and now
2.88%, which adds up to about 0.8% saved overall.
This race could lead to the SPU status bits indicating RUNNING while the CPU state was stopped.
Fix it by making sure the CPU state is set before the SPU status.
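A minimal sketch of the ordering fix, with hypothetical member names:

```cpp
#include <atomic>

// Publish the CPU state before the SPU status, so no observer can see
// status == RUNNING while the thread's state still says "stopped".
struct spu_start_sketch
{
    std::atomic<bool> cpu_stopped{true};
    std::atomic<unsigned> spu_status{0};

    void start()
    {
        cpu_stopped.store(false, std::memory_order_release);            // CPU state first
        spu_status.store(1 /* RUNNING */, std::memory_order_release);   // SPU status second
    }
};
```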
* Removed wrong code in sys_spu_thread_group_terminate.
* SPU Thread ID is accurate, including 5th thread id "rule".
* Fixed possible use-after-free access of spu_thread::group member.
* RawSPU ID management simplified.
This replaces the totally messed up PR #6728
Some games make heavy use of getllar() & putllc() without even changing data.
In this case, avoid unnecessary heavy locking of the PPU threads on non-TSX
hosts.
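A minimal sketch of the check, with illustrative names (not the actual code path):

```cpp
#include <cstring>
#include <cstddef>

// If the data PUTLLC wants to store is byte-identical to what GETLLAR read,
// the store is observably a no-op, so the heavy locking of PPU threads on
// non-TSX hosts can be skipped and success reported directly.
bool putllc_data_unchanged(const void* rdata_old, const void* rdata_new)
{
    constexpr std::size_t line_size = 128; // reservation granule
    return std::memcmp(rdata_old, rdata_new, line_size) == 0;
}
```

When the comparison succeeds the heavy lock can be skipped; otherwise the usual locked store path runs.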
* Ignore more warnings
These are intentional
* Signed/unsigned mismatch when comparing
* Explicitly cast values
* Intentionally discard a nodiscard value
* Change ppu_tid to u32
* Do not use POSIX function name on Windows
* Qt: Use horizontalAdvance instead of width
* Change progress variables to u32
With the option enabled, GET commands are blocked until the current PUTLLC/PUTLLUC executor on that address finishes.
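A minimal sketch of the blocking behaviour, assuming a reservation counter whose low bits act as lock bits (constant and names are illustrative):

```cpp
#include <atomic>
#include <cstdint>

constexpr std::uint64_t rsrv_lock_mask = 127; // assumed: low bits set while a writer is active

// Before servicing a GET, spin until no PUTLLC/PUTLLUC writer holds the line,
// so the transfer cannot observe a store that is still in progress.
void wait_for_pending_atomic_store(const std::atomic<std::uint64_t>& res)
{
    while (res.load(std::memory_order_acquire) & rsrv_lock_mask)
    {
        // busy-wait; a real implementation would likely pause or yield here
    }
}
```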
Additional improvements:
- Minor race fix of sys_ppu_thread_exit (wait until the writer finishes)
- Max number of ppu threads bumped to 8
The vm::spu max address was overflowing, resulting in issues, so cast to u64 where needed. Fixes #6145.
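A minimal sketch of the overflow class being fixed (names and values are illustrative):

```cpp
#include <cstdint>

// Computing an end address from a 32-bit base and size can wrap around in
// 32-bit arithmetic; widening to u64 before the addition avoids the overflow.
std::uint64_t area_end(std::uint32_t base, std::uint32_t size)
{
    // return base + size;              // wrong: the sum is computed in 32 bits and may wrap
    return std::uint64_t{base} + size;  // right: computed in 64 bits
}
```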
Use vm::get_addr instead of manually subtracting vm::base(0) from a pointer in texture cache code.
Prefer std::atomic_thread_fence over _mm_?fence(), adjust usage to be more correct.
Used sequentially consistent ordering in semaphore_release for the TSX path as well.
Improved memory ordering for sys_rsx_context_iounmap/map.
Fixed sync bugs in HLE gcm because of not using atomic instructions.
Use a release memory barrier for lwsync in PPU LLVM; according to the Xbox 360 programming guide, lwsync is a hardware release memory barrier.
Also use a release barrier where lwsync was originally used in liblv2 sys_lwmutex and cellSync.
Use an acquire barrier for the isync instruction, see https://devblogs.microsoft.com/oldnewthing/20180814-00/?p=99485
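A minimal sketch of the barrier mapping described above (the real code is emitted by the PPU LLVM backend; this only illustrates the semantics):

```cpp
#include <atomic>

// lwsync is treated as a release barrier, isync as an acquire barrier.
inline void lwsync_sketch()
{
    std::atomic_thread_fence(std::memory_order_release);
}

inline void isync_sketch()
{
    std::atomic_thread_fence(std::memory_order_acquire);
}
```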
Prefer vm::ptr<>::ptr over vm::get_addr.
Prefer vm::_ptr/base over vm::g_base_addr with offset.
Added methods atomic_t<>::bts and atomic_t<>::btr.
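The intended semantics, sketched with std::atomic (atomic_t<> is RPCS3's own type; this is only an equivalent illustration):

```cpp
#include <atomic>
#include <cstdint>

// Bit test-and-set: set the bit and return its previous value.
bool bts_sketch(std::atomic<std::uint64_t>& v, unsigned bit)
{
    return (v.fetch_or(std::uint64_t{1} << bit) >> bit) & 1;
}

// Bit test-and-reset: clear the bit and return its previous value.
bool btr_sketch(std::atomic<std::uint64_t>& v, unsigned bit)
{
    return (v.fetch_and(~(std::uint64_t{1} << bit)) >> bit) & 1;
}
```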
Removed obsolete rsx::thread::Read/WriteIO32 methods.
Removed wrong check in semaphore_release.
Added handling for PUTRx commands for RawSPU MFC proxy.
Prefer overloaded methods of v128 instead of _mm_... in VPKSHUS ppu interpreter precise.
Fixed more potential overflows that may result in wrong behaviour.
Added io/size alignment check for sys_rsx_context_iounmap.
Added rsx::constants::local_mem_base which represents RSX local memory base address.
Removed obsolete rsx::thread::main_mem_addr/ioSize/ioAddress members.
- Add vm_locking.h and vm_reservation.h and move relevant functions
and types to these headers.
- Change include order and make the vm_ptr.h, vm_var.h and vm_ref.h headers usable individually, having them include vm.h instead of the other way around
- Because using vm::ptr now requires including vm_ptr.h instead of vm.h, updated multiple #includes
- Added #includes of vm_reservation.h and vm_locking.h where vm::reservation_* and locking-related functions are used
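A small illustration of the resulting include usage (paths omitted; header names as listed above):

```cpp
// Code that only needs vm::ptr includes the dedicated header instead of vm.h:
#include "vm_ptr.h"

// Code using vm::reservation_* or the locking helpers includes:
#include "vm_reservation.h"
#include "vm_locking.h"
```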
Make use of lock bits in reservation counters.
On PPU, fall back to compare_and_swap instead of desperate retries.
On SPU, lighten write set on retry by 'locking' outside of the transaction.
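A minimal sketch of the writer-side lock-bit use (assumed counter layout; the real constants and the SPU transaction integration differ):

```cpp
#include <atomic>
#include <cstdint>

constexpr std::uint64_t rsrv_increment = 128; // version step
constexpr std::uint64_t rsrv_lock_mask = 127; // low bits used as lock bits

// Mark the line as being written; readers treat any set low bit as "locked".
void reservation_lock_sketch(std::atomic<std::uint64_t>& res)
{
    res.fetch_add(1, std::memory_order_acquire);
}

// Clear the lock bit and advance the version in one step, so waiters also
// observe that the data may have changed.
void reservation_unlock_sketch(std::atomic<std::uint64_t>& res)
{
    res.fetch_add(rsrv_increment - 1, std::memory_order_release);
}
```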
This handles the hypothetical situation where AVX is disabled system-wide.
Also refactored register use, to match Windows path with Linux path.
This reduces read set a little at the cost of stack use.
* Fix SPU LR event setting in atomic commands according to hw test
* MFC: increment timestamp for PUT cmd in non-tsx path
* MFC: fix reservation lost test on non-tsx path in regard to the lock bit
* Reservation notification moved out of writer_lock scope to reduce its lifetime
* Use passive_lock/unlock in PPU atomic instructions to reduce redundancy
* Lock only once for dma transfers (non-TSX)
* Don't use RDTSC in reservation update logic
* Remove MFC cmd args passing to process_mfc_cmd
* Reorder check_state cpu_flag::memory check for faster unlocking
* Specialization for 128-byte data copy in SPU dma transfers (see the sketch after this list)
* Implement memory range locks and isolate PPU and SPU passive lock logic
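A minimal sketch of the 128-byte copy specialization mentioned above (the real code would typically use explicit wide vector loads/stores; this only shows the idea of special-casing the full line size):

```cpp
#include <cstring>

// A fixed-size copy of exactly one 128-byte line; the constant size lets the
// compiler unroll it into a handful of wide vector moves.
inline void copy_line_128(void* dst, const void* src)
{
    std::memcpy(dst, src, 128);
}
```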