Commit graph

398 commits

Author SHA1 Message Date
Nick Renieris 1231274e0f CPUThread: Split dump() info to separate methods 2020-04-03 01:36:35 +01:00
Emmanuel Gil Peyrot 425e032a62 SPU: Copy with memcpy() instead of hand-rolled SSE2
In a very unscientific benchmark:
spu_thread::do_dma_transfer() was taking 2.27% of my CPU before and now
takes 0.07%, while __memmove_avx_unaligned_erms() went from 1.47% to
2.88%; added up, that is about 0.8% saved.
2020-03-22 15:51:03 +03:00
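A minimal sketch of the change described above, with a hypothetical helper name (the real rpcs3 code path is spu_thread::do_dma_transfer):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical DMA copy helper: the hand-rolled SSE2 16-byte load/store
// loop is replaced by std::memcpy, letting the libc pick an optimized
// routine such as __memmove_avx_unaligned_erms.
static void dma_copy(std::uint8_t* dst, const std::uint8_t* src, std::size_t size)
{
    std::memcpy(dst, src, size);
}
```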
Nekotekina 6a2571d0e1 SPU: print current chunk hash in dump 2020-03-18 18:28:46 +03:00
Eladash 9e14e835e7 Fix sys_raw_spu_destroy 2020-03-16 21:06:33 +03:00
Eladash c11074a128 RawSPU: fix race condition in RunCntl stop request 2020-02-29 21:54:54 +03:00
Nekotekina 65eeee0f4c Remove cancerous lf_value<>
Replace thread names (generic, PPU, SPU) with new shared pointers.
Devirtualize cpu_thread::get_name (used in single case).
2020-02-28 18:54:46 +03:00
Nekotekina 972e0ab31d Remove -Wno-reorder and make it an error 2020-02-21 15:20:34 +03:00
Nekotekina 92e3eaf3ff Fix signed-unsigned comparisons and mark warning as error (part 2). 2020-02-19 22:54:58 +03:00
Nekotekina 771eff273b First part of fixing sign-compare warning (inside be_t). 2020-02-19 22:54:58 +03:00
Eladash df8d0cde4a RSX/SPU: Accurate reservation access 2020-02-19 18:11:30 +00:00
Eladash 727d783959 RawSPU: protect NPC from writes/reads in running state 2020-02-18 18:09:10 +00:00
Eladash fad8b38b28 sys_spu: protect sys_spu_image members in kernel mode
Save relevant info in idm, set sys_spu_image segs and nsegs members to 0.
2020-02-18 18:09:10 +00:00
Megamouse fe75311be2 move config structs to own files and clean up some headers 2020-02-17 15:08:17 +03:00
eladash f901846acb RawSPU: execute MFC proxy cmd after reading CMDStatus
Implement MFC proxy argument sequence checking.
2020-02-06 20:43:38 +00:00
Eladash 37513b1898 SPU reservations: Do not access violate under vm::writer_lock
TODO: Throw an exception when encountering page faults on memory with page fault notifications enabled
2020-02-06 00:27:17 +00:00
Eladash f8b3c48af7 sys_spu: Implement proper SPU group flags (#7320) 2020-02-05 20:46:05 +00:00
Eladash 049e392a97 Make preferred spu threads dynamically adjustable 2020-02-05 10:06:07 +00:00
Nekotekina c0f80cfe7a Use attributes for LIKELY/UNLIKELY
Remove LIKELY/UNLIKELY macro.
2020-02-05 10:42:34 +03:00
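Illustrative usage of the standard C++20 attributes that replace the old macros (not a specific rpcs3 call site):

```cpp
// [[likely]]/[[unlikely]] replace the LIKELY()/UNLIKELY() macros.
int divide_checked(int num, int den)
{
    if (den == 0) [[unlikely]]
    {
        return 0; // rare error path
    }

    return num / den; // common path
}
```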
Nekotekina 6dfd97f0b6 Modernize SPU logging (spu_log variable) and remove log legacy
Remove legacy macro (LOG_ERROR, etc)
2020-02-01 11:52:52 +03:00
Eladash fe381b8581 SPU: Add SPU LS to debugger 2020-01-21 16:45:41 +03:00
Nekotekina ddda09607d SPU: fixup for STOP 0w0 2020-01-21 16:32:00 +03:00
Eladash 9993df9b8b RawSPU: fix race between spu start and stop
This race could lead to the SPU status bits indicating RUNNING while the cpu state was already stopped.
Fix it by making sure the cpu state is set before the SPU status.
2020-01-21 14:08:39 +03:00
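A sketch of the ordering the fix above describes, using hypothetical flag names; the point is only that the cpu state becomes visible before the SPU status reports RUNNING:

```cpp
#include <atomic>
#include <cstdint>

std::atomic<std::uint32_t> cpu_state{0};   // hypothetical cpu_flag bits
std::atomic<std::uint32_t> spu_status{0};  // hypothetical SPU_STATUS_* bits

constexpr std::uint32_t state_stop     = 1u << 0;
constexpr std::uint32_t status_running = 1u << 1;

void run_request()
{
    // Clear the stop flag first...
    cpu_state.fetch_and(~state_stop, std::memory_order_relaxed);

    // ...then publish RUNNING, so observers never see RUNNING while the
    // thread state still says stopped.
    spu_status.fetch_or(status_running, std::memory_order_release);
}
```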
Nekotekina 98a8eeaac2 SPU: properly support STOP 0x0 instruction 2020-01-20 23:40:10 +03:00
Eladash 765bd6b6c6 SPU: Optimize gpr reset for MSVC 2020-01-11 22:56:46 +03:00
Eladash b59a825e48 Minor fixup after #6894 2019-12-30 23:46:45 +03:00
Eladash 45cff1219c Allow sys_raw_spu_create_tag to be called more than once 2019-12-30 23:46:45 +03:00
Eladash 5631382623 sys_spu: Fix SPU Thread Id
* Removed wrong code in sys_spu_thread_group_terminate.
* SPU Thread ID is accurate, including 5th thread id "rule".
* Fixed possible use-after-free access of spu_thread::group member.
* RawSPU ID management simplified.
2019-12-06 19:59:29 +03:00
Nekotekina 185c067d5b C-style cast cleanup V 2019-12-03 17:23:00 +03:00
Markus Stockhausen cd6b6c8a4f Lightweight putllc() for non-TSX if no data changed
This replaces the totally messed up PR #6728

Some games make heavy use of getllar() & putllc() without even changing the data.
In this case, avoid unnecessary heavy locking of the PPU threads on non-TSX
hosts.
2019-11-19 18:10:29 +03:00
Nekotekina ccac9d4777 Remove throwing and catching cpu_flag::stop
Since there is spu_runtime::g_escape function now.
2019-11-08 19:27:11 +03:00
Eladash f41f5054f7 sys_spu Fixup after #6864 2019-10-29 23:13:16 +03:00
Eladash b99992d570 sys_spu: Fix SNR and Inbound Mailbox state reset
Also remove bugged ESTAT check at sys_spu_thread_write_spu_mb.
2019-10-29 18:34:28 +03:00
Nekotekina 9ac6ef6494 SPU: cleanup former OOM handling
Remove cpu_flag::jit_return.
It's obsolete now, and worked only in SPU ASMJIT anyway.
2019-10-26 21:24:12 +03:00
MSuih f3ed26e9db Small warnings cleanup (#6671)
* Ignore more warnings

These are intentional

* Signed/unsigned mismatch when comparing

* Explicitly cast values

* Intentionally discard a nodiscard value

* Change ppu_tid to u32

* Do not use POSIX function name on Windows

* Qt: Use horizontalAdvance instead of width

* Change progress variables to u32
2019-10-25 13:32:21 +03:00
Nekotekina b329bb604c SPU LLVM: implemented asynchronous compilation
Implemented interpreter-based pre-recompiler.
Interpreter functions are built with SPU LLVM.
2019-10-21 19:29:34 +03:00
eladash 95752607ea sys_spu: Don't reset snr config at group_start()
Also first check for EINVAL in sys_spu_thread_set_spu_cfg
2019-10-16 21:11:29 +03:00
Nekotekina 49e96b39dd [SPU, TSX] Fix reservation corruption in PUTLLC
Change reservation locking logic.
2019-10-12 15:41:24 +03:00
Eladash c2278fb879 spu: Mask SRR0 at write 2019-10-08 02:52:33 +03:00
Nekotekina feee3838eb Revert "Revert "Remove shared_cond and simplify reservation waiting""
This reverts commit b70c08a2e8.
2019-09-24 05:01:00 +03:00
Nekotekina b70c08a2e8 Revert "Remove shared_cond and simplify reservation waiting"
This reverts commit 0a96497e13.
2019-09-14 00:02:48 +03:00
Nekotekina 2fc8844315 atomic.hpp: add atomic wait mask support 2019-09-13 15:53:34 +03:00
Nekotekina 0a96497e13 Remove shared_cond and simplify reservation waiting
Use atomic wait for reservations
Cleanup some obsolete code
2019-09-10 19:25:39 +03:00
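A stand-in sketch of "atomic wait for reservations" using the C++20 std::atomic wait/notify API; rpcs3 has its own atomic-wait machinery, and the counter name here is hypothetical:

```cpp
#include <atomic>
#include <cstdint>

std::atomic<std::uint64_t> rtime{0}; // hypothetical per-line reservation counter

void wait_for_update(std::uint64_t observed)
{
    rtime.wait(observed); // blocks while rtime still equals 'observed'
}

void on_store_to_line()
{
    rtime.fetch_add(1, std::memory_order_release);
    rtime.notify_all(); // wake threads waiting on the reservation
}
```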
Eladash 0d88f037ff Add new accuracy control for PUTLLUC accuracy setting (non-TSX)
With the option enabled, GET commands are blocked until the current PUTLLC/PUTLLUC executor on that address finishes

Additional improvements:
- Minor race fix of sys_ppu_thread_exit (wait until the writer finishes)
- Max number of ppu threads bumped to 8
2019-08-17 00:42:46 +03:00
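A rough sketch of the gating behaviour described above, assuming a hypothetical lock bit in the reservation word; the real implementation differs:

```cpp
#include <atomic>
#include <cstdint>

std::atomic<std::uint64_t> res_word{0}; // hypothetical: bit 0 = writer in progress

// With the accuracy option enabled, a GET to the same 128-byte line waits
// until the current PUTLLC/PUTLLUC writer clears the lock bit.
void get_cmd_wait(bool accurate_putlluc)
{
    if (!accurate_putlluc)
        return;

    while (res_word.load(std::memory_order_acquire) & 1)
    {
        // spin until the writer on this address finishes
    }
}
```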
Eladash 49aefc0795 Prefetch MFC list elements (#5345)
* Prefetch mfc list elements to protect them from being overwritten

Also move some work out of command processing, such as setting up a few constant arguments
2019-07-24 01:13:45 +03:00
Nekotekina cb5c26f2b5 Fix SPU Interpreter regression after #6147 2019-07-15 16:34:34 +03:00
Nekotekina 22e4ef147a SPU TSX: fix "Preferred SPU Threads" 2019-07-14 17:33:20 +03:00
Eladash d7a2d42d8f MFC: Fix Tag Status report for sync/eieio/barrier commands 2019-07-10 17:35:39 +03:00
msuih 146e43b6ec Do not use negative unsigned literals 2019-07-01 04:33:23 +03:00
Eladash 43f919c04b Fixup after #6143 (#6146)
vm::spu max address was overflowing, resulting in issues, so cast to u64 where needed. Fixes #6145.
    Use vm::get_addr instead of manually subtracting vm::base(0) from the pointer in texture cache code.
    Prefer std::atomic_thread_fence over _mm_?fence(), adjust usage to be more correct.
    Used sequentially consistent ordering in semaphore_release for the TSX path as well.
    Improved memory ordering for sys_rsx_context_iounmap/map.
    Fixed sync bugs in HLE gcm caused by not using atomic instructions.
    Use a release memory barrier in lwsync for PPU LLVM; according to the Xbox 360 programming guide, lwsync is a hardware release memory barrier.
    Also use a release barrier where lwsync was originally used in liblv2 sys_lwmutex and cellSync.
    Use an acquire barrier for the isync instruction, see https://devblogs.microsoft.com/oldnewthing/20180814-00/?p=99485
2019-06-29 18:48:42 +03:00
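The barrier mapping mentioned in the commit above, expressed with standard fences; this is a sketch of the rule, not the actual LLVM emitter:

```cpp
#include <atomic>

// lwsync acts as a hardware release barrier, isync as an acquire barrier.
inline void ppu_lwsync() { std::atomic_thread_fence(std::memory_order_release); }
inline void ppu_isync()  { std::atomic_thread_fence(std::memory_order_acquire); }
```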
Eladash 1ee7b91646 Refactoring (#6143)
Prefer vm::ptr<>::ptr over vm::get_addr.
    Prefer vm::_ptr/base over vm::g_base_addr with offset.
    Added methods atomic_t<>::bts and atomic_t<>::btr.
    Removed obsolete rsx::thread::Read/WriteIO32 methods.
    Removed wrong check in semaphore_release.
    Added handling for PUTRx commands for RawSPU MFC proxy.
    Prefer overloaded methods of v128 instead of _mm_... in VPKSHUS ppu interpreter precise.
    Fixed more potential overflows that may result in wrong behaviour.
    Added io/size alignment check for sys_rsx_context_iounmap.
    Added rsx::constants::local_mem_base which represents RSX local memory base address.
    Removed obsolete rsx::thread::main_mem_addr/ioSize/ioAddress members.
2019-06-29 01:27:49 +03:00
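Illustrative equivalents of the atomic_t<>::bts/btr methods added above, shown on a plain std::atomic word; the rpcs3 atomic_t<> has its own implementation:

```cpp
#include <atomic>
#include <cstdint>

// Bit test-and-set: returns the previous value of the bit.
bool bts(std::atomic<std::uint32_t>& a, unsigned bit)
{
    return (a.fetch_or(1u << bit) >> bit) & 1;
}

// Bit test-and-reset: returns the previous value of the bit.
bool btr(std::atomic<std::uint32_t>& a, unsigned bit)
{
    return (a.fetch_and(~(1u << bit)) >> bit) & 1;
}
```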
JohnHolmesII be521ff0ab Fix warnings related to parentheses 2019-06-25 20:36:32 -07:00
kd-11 4ff77a8555 rsx: Improve balancing of the offloader thread
- Use two counters to avoid atomic operations
- Yield instead of sleeping because some games are very sensitive to timing
2019-06-25 20:50:54 +03:00
Lassi Hämäläinen 499035512b Split Emu/Memory into more logical headers
- Add vm_locking.h and vm_reservation.h and move relevant functions
  and types to these headers.
- Change include order and make vm_ptr.h, vm_var.h and vm_ref.h headers
  usable individually, having them include vm.h instead of the other way around
- Because using vm::ptr now requires including vm_ptr.h instead of
  vm.h, updated multiple #includes
- Added additional #includes of vm_reservation.h and vm_locking.h
  where vm::reservation_* and locking-related functions are used
2019-06-25 17:11:10 +03:00
Nekotekina b9b591bf02 Fix SPU Loop Detection 2019-06-20 14:46:32 +03:00
Nekotekina 5d45a3e47d Implement cpu_thread::suspend_all
Remove Accurate PUTLLC option.
Implement fallback path for SPU transactions.
2019-06-19 20:36:12 +03:00
Nekotekina 9dc0368079 Rename cond_x16 to shared_cond
Extend capacity from 16 to 32.
Remove redundant m_total counter.
2019-06-04 16:37:50 +03:00
scribam 790962425c Fix some "-Wpedantic" warnings 2019-06-01 22:59:23 +03:00
Nekotekina 71b71537a0 SPU TSX: implement Accurate PUTLLC option
Allow spurious PUTLLC failure if disabled (default).
2019-05-25 22:23:23 +03:00
Nekotekina b839cc9d5b SPU TSX: restore busy_wait in GETLLAR 2019-05-25 21:41:11 +03:00
Nekotekina 7de3c410cf SPU/PPU: update reservation logic on TSX path transactions
Make use of lock bits in reservation counters.
On PPU, fall back to compare_and_swap instead of desperate retries.
On SPU, lighten the write set on retry by 'locking' outside of the transaction.
2019-05-20 14:32:50 +03:00
Nekotekina 9abb303569 vm: expand reservation lock bit area to 7 bits
This is a minor change.
2019-05-19 17:46:55 +03:00
Nekotekina f33b81545e SPU: implement recompiler gateway function in assembly
Use GHC calling convention directly for SPU object entry points.
This may address performance degradation after #5923.
2019-05-14 22:15:04 +03:00
Nekotekina cc8c635855 SPU: PIC support preview
SPU ASMJIT not supported yet.
Giga mode not supported properly.
2019-05-14 22:15:04 +03:00
eladash 3a5f4ed757 Print SPU Group ID on the debugger 2019-04-20 20:43:58 +01:00
Nekotekina 9060177dbd SPU transactions: add SSE path if AVX is not available
This handles the hypothetical situation where AVX is disabled system-wide.
Also refactored register use, to match Windows path with Linux path.
This reduces read set a little at the cost of stack use.
2019-04-16 23:49:18 +03:00
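A sketch of the kind of runtime dispatch involved; __builtin_cpu_supports is a GCC/Clang builtin used here only for illustration and is not necessarily what rpcs3 does:

```cpp
#include <cstdio>

// Pick the SSE transaction path when AVX is unavailable (e.g. disabled
// system-wide). Note: a complete check also needs XGETBV to confirm the
// OS saves YMM state; this only queries the CPUID feature bit.
void select_spu_tx_path()
{
#if defined(__GNUC__) || defined(__clang__)
    if (__builtin_cpu_supports("avx"))
        std::puts("AVX transaction path");
    else
        std::puts("SSE fallback transaction path");
#else
    std::puts("SSE fallback transaction path");
#endif
}
```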
Nekotekina 3354f068fc PPU/SPU transactions: ease cache line interference (TSX path)
Touch memory on the same memory page, but different cache lines.
2019-04-10 13:58:12 +03:00
Nekotekina 71b88cdc82 New SPU interpreter (SPU fast)
Use LLVM to build SPU interpreter.
Simplify interpreter loop.
2019-03-27 20:33:44 +03:00
Nekotekina 4b381fbbb1 Implement spu_runtime::reset
To handle JIT: Out Of Memory error.
2019-03-23 02:43:41 +03:00
Nekotekina f143035af1 Fix sys_spu_thread_group_join wait condition
After waiting, thread group cannot be safely accessed
Following #5643
2019-03-01 00:08:19 +03:00
eladash 0861226271 Make more use of the new atomic_t<>::release 2019-02-10 00:16:57 +03:00
eladash e3ee481f01 Make sys_spu_thread_group_join return once per termination 2019-02-10 00:16:57 +03:00
Nekotekina 50922faac9 Remove SPUThread::jit_dispatcher
Use global array - save memory
Move the array to JIT memory
2019-01-29 03:32:16 +03:00
elad fc92ae4085 SPU/PPU atomics performance and LR event fixes (#5435)
* Fix SPU LR event setting in atomic commands according to hw test
* MFC: increment timestamp for PUT cmd in non-tsx path
* MFC: fix reservation lost test on non-tsx path in regard to the lock bit
* Reservation notification moved out of writer_lock scope to reduce its lifetime
* Use passive_lock/unlock in ppu atomic instructions to reduce redundancy
* Lock only once for dma transfers (non-TSX)
* Don't use RDTSC in reservation update logic
* Remove MFC cmd args passing to process_mfc_cmd
* Reorder check_state cpu_flag::memory check for faster unlocking
* Specialization for 128-byte data copy in SPU dma transfers
* Implement memory range locks and isolate PPU and SPU passive lock logic
2019-01-15 18:31:21 +03:00
eladash f19fd23227 spu: Fix support for multiple lists when one is stalled 2019-01-15 02:33:22 +03:00
Nekotekina cfdf50dcff SPU: ensure sys_spu_thread_group_join receives correct exit status
Following #5334
2019-01-13 14:45:36 +03:00
eladash 653a4ef0df Set group status INIT on last thread stopped
this fixes the group status after sys_spu_thread_exit when not joining the spu group
2018-12-25 19:59:41 +03:00
elad 90265edfcd SPU MFC: avoid copying of the lockline onto the stack in putllc/putlluc (#5392) 2018-12-04 21:09:55 +03:00
Nekotekina f442a8a84c SPU TG: add thread group stop counter
Fix possible race condition introduced by waiting on `running` value
2018-11-27 23:37:26 +03:00
Nekotekina febe4d4a10 Implement class cond_x16
Use as reservation notifier
Limited to 16 threads but allows more precise control of contention
2018-11-26 00:23:29 +03:00
Nekotekina 0044eb44e2 Cleanup after #5310 (SPU thread groups)
Move lambda into a cpu_stop()
Use running thread counter to synchronize with sys_spu_thread_group_join()
Use SPU_STATUS_STOPPED_BY_STOP exclusively for sys_spu_thread_exit() as before
Remove unnecessary waiting in sys_spu_thread_group_exit()
Rollback some minor unnecessary changes
Use shared_mutex in SPU TG
2018-11-14 12:50:24 +03:00
RipleyTom 0e0a82e536 Ensures threads are stopped in join 2018-11-13 10:19:28 +03:00
eladash 2e1aec4de8 Implement sys_spu_thread_tryreceive_event 2018-11-12 21:12:33 +03:00
Nekotekina 488928eca2 Fix SPU STOP instruction
Check thread state after STOP instruction
2018-11-05 14:35:50 +03:00
Nekotekina f06e6be2c1 Disable npc update for SPU thread groups 2018-11-05 13:14:11 +03:00
Nekotekina 1b37e775be Migration to named_thread<>
Add atomic_t<>::try_dec instead of fetch_dec_sat
Add atomic_t<>::try_inc
GDBDebugServer is broken (needs rewrite)
Removed old_thread class (former named_thread)
Removed storing/rethrowing exceptions from thread
Emu.Stop doesn't inject an exception anymore
task_stack helper class removed
thread_base simplified (no shared_from_this)
thread_ctrl::spawn simplified (creates detached thread)
Implemented overrideable thread detaching logic
Disabled cellAdec, cellDmux, cellFsAio
SPUThread renamed to spu_thread
RawSPUThread removed, spu_thread used instead
Disabled deriving from ppu_thread
Partial support for thread renaming
lv2_timer... simplified, screw it
idm/fxm: butchered support for on_stop/on_init
vm: improved allocation structure (added size)
2018-10-19 22:22:35 +03:00
Nekotekina a8a8cd88a0 Implement lf_queue<>, lf_value<>
lf_queue<>: unbounded FIFO queue with a dynamic linked list
lf_value<>: concurrently-assignable value readable without locking at the cost of memory (using a dynamic linked list)

Add atomic_t<>::compare_exchange
2018-09-27 12:16:43 +03:00
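A minimal sketch in the spirit of lf_queue<>: producers prepend nodes to an atomic head with compare_exchange, and the consumer can detach the whole list at once and reverse it to recover FIFO order. This is not the rpcs3 implementation:

```cpp
#include <atomic>
#include <utility>

template <typename T>
class lf_list_sketch
{
    struct node
    {
        T value;
        node* next;
    };

    std::atomic<node*> m_head{nullptr};

public:
    // Lock-free push from any number of producers.
    void push(T value)
    {
        node* n = new node{std::move(value), m_head.load(std::memory_order_relaxed)};

        // Retry until the head CAS publishes the new node.
        while (!m_head.compare_exchange_weak(n->next, n,
            std::memory_order_release, std::memory_order_relaxed))
        {
        }
    }

    // Detach the whole list (consumer side); caller reverses it for FIFO
    // order and is responsible for freeing the nodes.
    node* pop_all()
    {
        return m_head.exchange(nullptr, std::memory_order_acquire);
    }
};
```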
eladash 05cd3712a3 spu: Fix MMIO index checking 2018-09-17 17:24:54 +03:00
scribam d7bb59cd99 c++17: use std::size 2018-09-06 13:15:59 +03:00
Nekotekina ca5158a03e Cleanup semaphore<> (sema.h) and mutex.h (shared_mutex)
Remove semaphore_lock and writer_lock classes, replace with std::lock_guard
Change semaphore<> interface to Lockable (+ exotic try_unlock method)
2018-09-03 23:00:36 +03:00
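The cleanup above boils down to using the standard guard over any Lockable type; a trivial usage sketch with std::mutex standing in for rpcs3's own lock types:

```cpp
#include <mutex>

std::mutex g_mutex; // stand-in for shared_mutex / semaphore<>

void update_shared_state()
{
    std::lock_guard lock(g_mutex); // locks on construction, unlocks on scope exit
    // ... mutate data protected by g_mutex ...
}
```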
Nekotekina 8abe6489ed Mega-cleanup for atomic_t<> and named bit-sets bs_t<>
Remove "atomic operator" classes
Remove test, test_and_set, test_and_reset, test_and_complement global functions
Simplify atomic_t<> with constexpr if, remove some garbage
Redesign bs_t<> to use class, mark its methods constexpr
Implement atomic_bs_t<> for optimizations
Remove unused __bitwise_ops concept (should be in other header anyway)
Bitsets can now be tested via safe bool conversion
2018-09-03 21:40:36 +03:00
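An illustrative enum-backed bitset with safe bool conversion, in the spirit of the redesigned bs_t<>; flag names and operators are invented for the example, not the actual class:

```cpp
#include <cstdint>

enum class cpu_flag : std::uint32_t { stop = 1 << 0, pause = 1 << 1, memory = 1 << 2 };

struct flag_set
{
    std::uint32_t bits = 0;

    constexpr flag_set operator+(cpu_flag f) const
    {
        return {bits | static_cast<std::uint32_t>(f)};
    }

    constexpr bool test(cpu_flag f) const
    {
        return bits & static_cast<std::uint32_t>(f);
    }

    // Safe bool conversion: "if (flags) ..." asks whether any bit is set.
    constexpr explicit operator bool() const
    {
        return bits != 0;
    }
};
```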
eladash f349695a75 Rsx: rewrite address translation 2018-08-13 16:16:34 +03:00
eladash ac99fd764d mfc: clamp atomic cmd's addr 2018-07-14 01:40:40 +04:00
eladash a5d4e58ddd fix barrier type mfc transfers 2018-07-10 14:54:51 +04:00
Nekotekina 20900ebea2 SPU: rename block stats
Use Block Weight and Retreats
2018-07-06 00:33:52 +03:00
Nekotekina a0c0d8b993 SPU: simplify unimplemented event check
Move checks closer to the actual use
2018-07-06 00:33:52 +03:00
Nekotekina e4da284176 SPU: analyser v4 and fixes
Build SPU cache after PPU, fix mixing progress
SPU ASMJIT: add support for Giga mode
SPU ASMJIT: use the same spu.log location as SPU LLVM
SPU: improve spu.log disasm
SPU: improve trampolines, unify with SPU ASMJIT
SPU: decode interrupt handler address from BR/BRA at 0x0
SPU LLVM: support Mega/Giga modes
SPU LLVM: implement function chunks
SPU LLVM: use PHI nodes, value visibility across basic blocks
SPU LLVM: implement function chunk table
New simple memory manager for LLVM (bugfix)
2018-06-21 22:29:34 +03:00
eladash af62c92b7f mfc: clamp list transfer size 2018-06-17 23:20:00 +04:00
Nekotekina eb081bbcfa SPU: add 'Accurate PUTLLUC' option
Effectively rollback previous PUTLLUC accuracy commit by default
Minor changes in GETLLAR/PUTLLUC transactions
2018-06-12 02:09:22 +03:00
Nekotekina 12eee6a19e SPU ASMJIT: Implement Mega block mode (experimental)
Disable extra modes for SPU LLVM for now.
In Mega mode, SPU Analyser tries to determine complete functions.
Recompiler tries to speed up returns via 'stack mirror'.
2018-06-05 12:35:26 +03:00
Nekotekina ec6d1fb1ba SPU: optimize GETLLAR (TSX)
Add an option "Accurate GETLLAR"
2018-06-05 12:35:26 +03:00