Nekotekina
65c04e4ddd
Remove constexpr from ppu/spu decoders.
...
We don't need them at compile time (yet).
Removing constexpr can reduce compile time and complexity.
2020-12-10 15:06:01 +03:00
Nekotekina
b382d3b3e9
Remove ASSUME macro
...
It's a dangerous and sometimes blatantly misused feature.
Its optimization potential is near-zero.
2020-12-10 14:08:02 +03:00
Nekotekina
36c8654fb8
Remove HERE macro
...
Some cleanup.
Add location to some functions.
2020-12-10 12:30:22 +03:00
Nekotekina
e055d16b2c
Replace verify() with ensure() with auto src location.
...
Expression ensure(x) returns x.
Usage of the comma operator was removed.
2020-12-09 15:43:38 +03:00
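A minimal sketch of the idea (not the actual rpcs3 helper, which lives in the util headers): an ensure()-style function that returns its argument and captures the caller's location through a defaulted std::source_location.

```cpp
// Minimal sketch, assuming C++20: return the argument itself and record the
// caller's source location via a defaulted parameter.
#include <cstdio>
#include <cstdlib>
#include <source_location>
#include <utility>

template <typename T>
T&& ensure_sketch(T&& value, const std::source_location loc = std::source_location::current())
{
    if (!value) // assumes the argument is testable in a boolean context
    {
        std::fprintf(stderr, "Check failed at %s:%u\n", loc.file_name(), static_cast<unsigned>(loc.line()));
        std::abort();
    }

    return std::forward<T>(value); // ensure(x) yields x, so it can be used inside expressions
}

int main()
{
    int x = 42;
    int* p = &x;
    return *ensure_sketch(p) == 42 ? 0 : 1; // no comma operator needed to chain the value
}
```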
Nekotekina
eb66302907
atomic.hpp: replace std::atomic with atomic_t
...
A dual dependency is not a good thing.
2020-12-07 17:13:12 +03:00
Nekotekina
b16cc618b5
atomic.hpp: add some features and optimizations
...
Add atomic_t<>::observe() (relaxed load)
Add atomic_fence_XXX() (barrier functions)
Get rid of the MFENCE instruction; replace it with a no-op LOCK OR on the stack.
Remove the <atomic> dependency from stdafx.h and relevant headers.
2020-12-07 17:13:12 +03:00
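Rough illustration of the MFENCE replacement (the name is illustrative, not the real atomic.hpp API): on x86, a LOCK-prefixed no-op read-modify-write on the thread's own stack has full-barrier semantics and is usually cheaper than MFENCE.

```cpp
// Illustrative sketch only: a "lock or" of zero into a dummy stack slot acts
// as a full memory barrier on x86.
#include <atomic>

inline void full_fence_sketch()
{
#if defined(__GNUC__) && defined(__x86_64__)
    unsigned dummy = 0;
    __asm__ __volatile__("lock orl $0, %0" : "+m"(dummy) : : "memory");
#else
    std::atomic_thread_fence(std::memory_order_seq_cst); // portable fallback
#endif
}
```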
Nekotekina
77aa9e58f2
shared_ptr.hpp: add trivial conversion for shared/single types
...
These conversions don't exist in std::shared_ptr-like types.
But I don't want to bother with == operators until we have proper C++20.
Removed trivial conversion for atomic_ptr because it's heavyweight.
2020-12-07 15:33:28 +03:00
Eladash
15a12afe25
Debugger: Implement code flow tracking
2020-12-06 15:32:13 +03:00
RipleyTom
af8c661a64
Remove BOM markers
2020-12-06 15:30:12 +03:00
Nekotekina
b5d498ffda
Homebrew atomic_ptr rewritten (util/shared_ptr.hpp)
...
It's analogous to C++20's atomic std::shared_ptr.
The following things are brought into the global namespace:
single_ptr
shared_ptr
atomic_ptr
make_single
2020-11-26 20:11:26 +03:00
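For context, a usage sketch of the standard C++20 counterpart, std::atomic<std::shared_ptr<T>>. The homebrew single_ptr/shared_ptr/atomic_ptr play the same role, but their exact API may differ from this.

```cpp
// Usage sketch of the standard type this mirrors (requires C++20 library
// support for std::atomic<std::shared_ptr>).
#include <atomic>
#include <memory>

struct config
{
    int value = 0;
};

std::atomic<std::shared_ptr<config>> g_cfg{std::make_shared<config>()};

void publish(int v)
{
    // Build a new object and atomically swap it in; readers never observe a torn pointer.
    auto next = std::make_shared<config>();
    next->value = v;
    g_cfg.store(std::move(next));
}

int read_value()
{
    return g_cfg.load()->value; // atomic load of the shared_ptr itself
}
```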
Nekotekina
43952e18e2
Implement prefetch_write() and prefetch_exec() wrappers
...
Do some refactoring of prefetch_read() in util/asm.hpp as well.
Make all these functions constexpr because they are no-ops.
2020-11-24 12:31:11 +03:00
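A rough sketch of what such wrappers can look like (illustrative only; the real util/asm.hpp versions may differ), built on the compiler prefetch builtin.

```cpp
// Illustrative wrappers only: thin hints around __builtin_prefetch.
#if defined(__GNUC__)
inline void prefetch_read_sketch(const void* ptr)
{
    __builtin_prefetch(ptr, 0 /* for read */, 3 /* keep in all cache levels */);
}

inline void prefetch_write_sketch(const void* ptr)
{
    __builtin_prefetch(ptr, 1 /* for write */, 3);
}
#else
inline void prefetch_read_sketch(const void*) {}  // no-op fallback
inline void prefetch_write_sketch(const void*) {} // no-op fallback
#endif
```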
Eladash
3f028fbb83
Fix SPU LS MMIO
2020-11-24 03:44:30 +03:00
Eladash
85880ffded
SPU: Log STOP full opcode ( #9292 )
...
Co-authored-by: Ani <ani-leo@outlook.com>
2020-11-20 13:53:16 +03:00
Nekotekina
0fec99e75b
SPU: absolutely unacceptable hack for SPU LS
...
Make normal threads inaccessible in PS3 memory.
2020-11-17 15:22:04 +03:00
Nekotekina
eaf0bbc108
SPU: don't allocate SPU LS in vm::main
...
Create its own shared memory object.
Use vm::spu to allocate all SPU types.
Use vm::writer_lock for shm::map_critical.
2020-11-16 12:46:15 +03:00
Eladash
b1710bb712
SPU Debugger: Implement float registers view + General debugger fixes ( #9265 )
...
* SPU Debugger: Fix try_get_insert_mask_info
* Debugger: Always update thread state on context's data change
No longer need to click on the thread's instructions for actions to work!
2020-11-15 08:45:28 +03:00
Nekotekina
dfae7bd073
SPU: Fix some stat printing
2020-11-15 04:41:16 +03:00
Nekotekina
badb3dc2dd
atomic.cpp/threads: remove old wait callback
...
Add new wait callback which simply collects statistics.
Shift workarounds towards actual problem detection.
2020-11-14 18:16:27 +03:00
Eladash
fefab50e06
Fix vm::range_lock, improve vm::check_addr
2020-11-11 10:30:09 +03:00
Eladash
74274f6d77
Debugger: Improve SPU/PPU vector registers
2020-11-11 10:27:45 +03:00
Eladash
52fa69d93d
SPU Debugger: Improve registers panel
2020-11-10 22:51:52 +03:00
Nekotekina
cdaa8cb5c4
CPU: Improve suspend_all g_suspend_counter handling
...
Increase it in two stages, giving more chances to use it.
The second stage is when all wait flags have been seen.
2020-11-09 23:54:36 +03:00
Nekotekina
083397a555
vm: lock memory under "sudo" addr
...
Remove memory touching from transactions.
2020-11-09 23:54:36 +03:00
Nekotekina
bc61835d97
CPU: use unsigned (u8) priority in suspend_all
2020-11-09 22:57:36 +03:00
Nekotekina
8bc9868c1f
SPU/vm: Improve vm::range_lock a bit
...
Use some prefetching
Use optimistic locking
2020-11-08 17:23:17 +03:00
Nekotekina
3507cd0a37
SPU: improve spu_thread::reservation_check
...
Use optimistic locking and an optimistic loop (expecting 1 iteration).
2020-11-08 16:43:15 +03:00
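The general shape of such an optimistic loop (a generic illustration, not the actual reservation_check code): read without locking first, then retry only if the value changed underneath.

```cpp
// Generic optimistic CAS loop that usually finishes in one pass.
#include <atomic>

inline void increment_optimistic(std::atomic<unsigned>& counter)
{
    unsigned old = counter.load(std::memory_order_relaxed);

    // Expected to succeed on the first iteration; retries only under contention.
    while (!counter.compare_exchange_weak(old, old + 1, std::memory_order_release, std::memory_order_relaxed))
    {
        // 'old' is refreshed by compare_exchange_weak on failure.
    }
}
```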
Nekotekina
1c99a2e7fb
vm: add map_self() method to utils::shm
...
Add complementary unmap_self() method.
Move VirtualMemory to util/vm.hpp
Minor associated include cleanup.
Move asm.h to util/asm.hpp
2020-11-08 16:43:15 +03:00
Nekotekina
b68bdafadc
vm: refactor vm::range_lock again
...
Move the bits to the highest positions, in RWX order.
Use only one reserved value (W = locked).
Assume lock size 128 for range_locked.
Add a new "Size" template argument that replaces the normal argument.
2020-11-08 16:43:15 +03:00
Eladash
bacfa9be19
Debugger fixups ( #9226 )
...
Fix a logic error in the callstack handling code: always set 'first' to false after the first iteration.
Add an explicit check for zero return addresses. The current code validity checks may not catch it properly when it sits on an interrupt handler entry point (which may contain valid code).
Do not allow 0x3FFF0 to be a back chain address because it needs space for the LR save area; only 0x3FFE0 and below satisfy this criterion.
2020-11-08 16:42:20 +03:00
Eladash
516da4ecdd
Debugger: Improve SPU/PPU callstack handling
2020-11-08 09:17:13 +03:00
Eladash
3cb5fd8ebc
Debugger: Implement SPU callstack, fix PPU callstack
2020-11-07 20:45:57 +03:00
Eladash
6dcd482dd0
SPU reservations: Do not illegally dereference reservation data
2020-11-07 14:03:09 +03:00
Nekotekina
34fa010601
Improve cond_var notifiers
...
But nobody uses it anyway, so clean up includes.
2020-11-06 00:10:16 +03:00
Nekotekina
5248240e10
atomic.cpp: improvements.
...
Reduced static memory amount for waitable atomics.
Allow notifier to skip notifications if wait/notify masks don't overlap.
Improve raw_notify to wake up the thread by its id, add thread_id arg.
Add optional mask argument to notify_one() and notify_all().
2020-11-05 05:51:43 +03:00
Nekotekina
06ecc2ae68
vm::range_lock cleanup and minor optimization
...
Removed unused arg.
Linearized some branches.
2020-11-01 23:29:33 +03:00
Nekotekina
46d3066c62
Optimize vm::range_lock
...
Only test address on `range_locked`
Don't check current transaction
Remove vm::clear_range_locks completely
2020-11-01 16:46:06 +03:00
Nekotekina
8d12816001
TSX: fix transaction limit settings
2020-11-01 14:58:04 +03:00
Nekotekina
fe03b55046
TSX: tiny optimization of transaction functions
...
Because the new memory manager puts them in the first 2 GiB.
2020-11-01 14:44:59 +03:00
Nekotekina
78c986b5dd
Improve vm::range_lock
...
Not sure how it ever worked.
Clean up redundant vm::clear_range_lock usage.
2020-10-31 23:53:14 +03:00
Nekotekina
86fc842c89
TSX: new fallback method (time-based)
...
Basically, using timestamp counter.
Rewritten vm::reservation_op with the same principle.
Rewritten another transaction helper.
Add two new settings for configuring fallbacks.
Two limits are specified in nanoseconds (first and second).
Fix PUTLLC reload logic (prevent reusing garbage).
2020-10-31 15:34:14 +03:00
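A hedged sketch of a time-based fallback loop (the function name and parameters are illustrative; the real limits come from the two new nanosecond settings): keep retrying the transaction until a timestamp-counter budget runs out, then take the fallback path.

```cpp
// Illustrative sketch; needs RTM support (-mrtm) and a GCC/Clang x86 toolchain.
#include <immintrin.h>
#include <x86intrin.h>

inline bool run_transactional(void (*work)(), unsigned long long tsc_budget)
{
    const unsigned long long start = __rdtsc();

    while (__rdtsc() - start < tsc_budget)
    {
        if (_xbegin() == _XBEGIN_STARTED)
        {
            work();
            _xend();
            return true; // committed transactionally
        }
        // Aborted: retry while still inside the time budget.
    }

    return false; // caller falls back to the non-transactional path
}
```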
Nekotekina
150e18539c
Allow cpu_thread& arg passed to the syscalls
...
Minor cleanup. cpu_mem(), cpu_unmem() removed.
2020-10-30 17:03:32 +03:00
Nekotekina
3419d15878
vm: add extern clear_range_locks function
...
Allows waiting for range locks to clear for a specified range.
vm::range_lock now monitors specified reservation lock as well.
2020-10-30 07:58:16 +03:00
Nekotekina
0da24f21d6
CPU: improve cpu_thread::suspend_all for cache efficiency (TSX)
...
Add prefetch hint list parameter.
Workloads may be executed by another thread on another CPU core.
It means they may benefit from directly prefetching the data as hinted.
Also implement mov_rdata_nt, for "streaming" data from such workloads.
2020-10-30 05:22:09 +03:00
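A sketch of the mov_rdata_nt idea (illustrative, not the real helper): copy a 128-byte block with non-temporal stores so the data streams to memory without polluting the executing core's cache.

```cpp
// Illustrative only: 128-byte non-temporal copy with SSE2 intrinsics.
#include <emmintrin.h>
#include <cstddef>

inline void copy_128_nt_sketch(void* dst, const void* src)
{
    const __m128i* s = static_cast<const __m128i*>(src);
    __m128i* d = static_cast<__m128i*>(dst);

    for (std::size_t i = 0; i < 8; i++) // 8 x 16 bytes = 128 bytes
    {
        _mm_stream_si128(d + i, _mm_load_si128(s + i)); // both pointers must be 16-byte aligned
    }

    _mm_sfence(); // order the non-temporal stores before subsequent publication
}
```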
Nekotekina
e794109a67
perf_meter.hpp: fix double rdtsc bug, add restart() method
2020-10-30 03:19:13 +03:00
Nekotekina
006c783aba
SPU: make do_dma_transfer() static with _this arg
...
Instead of this, nullptr will be passed from another thread.
MMIO via MMIO is disabled, if it's even possible.
2020-10-30 02:58:39 +03:00
Nekotekina
425fce5070
SPU: load previous data on PUTLLC failure
...
Since it will most likely execute GETLLAR to load it again.
Only implemented for TSX at the moment.
2020-10-30 02:58:39 +03:00
Nekotekina
8ce0819b42
SPU: add stx/ftx counters
...
Just count pure transaction successes and failures.
2020-10-29 18:57:57 +03:00
Nekotekina
688a456642
TSX tweaks
...
Allow doing more in first-chance transactions.
Give PUTLLC +1 priority (minor change).
2020-10-29 18:57:57 +03:00
Nekotekina
280958ee74
Revert "TSX: adjust transaction logic"
...
This reverts commit ff550b5c3c .
2020-10-28 21:59:12 +03:00
Nekotekina
ff550b5c3c
TSX: adjust transaction logic
...
Allow more in first-chance transactions.
Allow abandonment of PUTLLC as in original path.
Make PUTLLUC unconditionally shared-locked.
Give PUTLLC +1 priority (minor change).
2020-10-28 14:00:09 +03:00
Nekotekina
d6daa0d05b
Fix cpu_flag::temp, make sure it removes cpu_flag::wait
2020-10-28 14:00:09 +03:00
Nekotekina
c491b73f3a
SPU: improve accurate DMA
...
Remove vm::reservation_lock from it.
Use lock bits to prevent memory clobbering in GETLLAR.
Improve u128 for MSVC since it's used for bitlocking.
Improve 128 bit atomics for the same reason.
Improve vm::reservation_op and friends.
2020-10-28 03:47:41 +03:00
Nekotekina
c50233cc92
atomics.cpp: add support for waiting on 128-bit atomics
...
Complementarily.
Also refactored to make the waiting mask a non-template arg.
2020-10-28 03:47:41 +03:00
Nekotekina
4966f6de73
vm: improve range_lock and shareable cache (Non-TSX)
...
Allocate "personal" range lock variable for each spu_thread.
Switch from reservation_lock to range lock for all stores.
Detect actual memory mirrors in shareable cache setup logic.
2020-10-27 17:56:19 +03:00
Nekotekina
f1e66085cd
Fixup for cpu_flag::temp
...
A wrong check_state() result was triggering an assertion.
2020-10-26 01:18:26 +03:00
Nekotekina
130a0ef20e
Implement cpu_flag::temp flag
...
Accompanies the wait flag, indicating that it was set under limited conditions.
Such conditions don't allow the thread to terminate after its removal.
2020-10-25 21:48:20 +03:00
kd-11
18ca3ed449
rsx: Block-level reservation access
2020-10-25 20:21:04 +03:00
Eladash
4ea7628204
SPU: Fix LS capture entry point
2020-10-25 16:39:40 +03:00
Nekotekina
2b52b4a749
SPU: use normal notify() thread function
...
Using raw_notify() everywhere was overkill.
2020-10-24 14:16:32 +03:00
Eladash
49610f52f5
SPU: Save LS capture executable in one segment
2020-10-24 14:13:19 +03:00
Eladash
b56bc7e087
SPU: cleanup channels logging
2020-10-23 13:13:04 +03:00
Nekotekina
dc8252bb9f
Remove XABORT in PPU/SPU transactions.
...
It's expensive for an unknown reason; a simple XEND is usually much cheaper.
Add some minor improvements. Use g_sudo_addr.
2020-10-20 09:10:21 +03:00
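An illustration of the change (hedged; not the actual emitted transaction code): when a check fails inside a transaction, committing with XEND and reporting failure through the return value can be much cheaper than raising XABORT.

```cpp
// Illustrative sketch; needs RTM support (-mrtm).
#include <immintrin.h>

inline bool tx_store_if_match(unsigned* slot, unsigned expected, unsigned desired)
{
    if (_xbegin() == _XBEGIN_STARTED)
    {
        if (*slot != expected)
        {
            _xend();      // committing here is usually much cheaper than _xabort(...)
            return false; // the mismatch is reported through the return value
        }

        *slot = desired;
        _xend();
        return true;
    }

    return false; // transaction did not start or was aborted; caller retries or falls back
}
```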
Nekotekina
72d1ac22aa
SPU: report too many PUTLLC attempts (TSX)
...
Mirrored to PPU STCX code and PUTLLUC (STORE128).
2020-10-19 19:41:28 +03:00
Nekotekina
8ce5392390
TSX: add prefetchw instruction in transaction code
2020-10-19 19:41:28 +03:00
Nekotekina
311682b341
SPU: fix GETLLAR regression
...
Misplaced mov_rdata
2020-10-19 19:41:28 +03:00
Nekotekina
120849c734
Implement perf stat counter for PPU/SPU reservation ops
...
Adds Emu/perf_meter.hpp header file.
Uses RDTSC for speed.
Prints stats at exit.
2020-10-19 19:41:28 +03:00
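A simplified version of the idea (names are hypothetical, not the real Emu/perf_meter.hpp interface): sample RDTSC around the operation, accumulate the deltas, and print totals at exit.

```cpp
// Illustrative sketch; __rdtsc comes from the GCC/Clang x86intrin.h header.
#include <atomic>
#include <cstdio>
#include <x86intrin.h>

struct perf_counter_sketch
{
    std::atomic<unsigned long long> total{0};
    std::atomic<unsigned long long> samples{0};

    void add(unsigned long long start_tsc)
    {
        total += __rdtsc() - start_tsc;
        samples += 1;
    }

    ~perf_counter_sketch()
    {
        const unsigned long long n = samples.load();
        std::printf("samples=%llu avg_tsc=%llu\n", n, n ? total.load() / n : 0ull);
    }
};

perf_counter_sketch g_putllc_perf; // destructor prints the stats at program exit

inline void measured_op()
{
    const unsigned long long start = __rdtsc();
    // ... the reservation operation being measured would go here ...
    g_putllc_perf.add(start);
}
```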
Nekotekina
adf50b7c4b
Implement cpu_thread::if_suspended
...
Use it for opportunistic guaranteed GETLLAR execution (TSX-FA).
2020-10-18 20:10:48 +03:00
Nekotekina
f5c575961f
Implement priorities for cpu_thread::suspend_all tasks
...
Give PUTLLUC increased priority.
2020-10-18 20:10:48 +03:00
Eladash
402e8b12a6
SPU: Touch unmapped memory in reservation mismatch
2020-10-18 11:42:54 +03:00
Nekotekina
d0057c92e4
Fix spu_putlluc_tx (insignificant)
2020-10-17 21:27:19 +03:00
Nekotekina
583ed61712
SPU: return some give-up behaviour for PUTLLC (TSX)
...
Despite using the concept of a "shared" lock, allow only the first to proceed.
This is similar to how conditional stores for PPU are implemented.
2020-10-16 12:14:42 +03:00
Nekotekina
facde63460
PPU: fix ppu_stcx_accurate_tx
...
Don't destroy xmm6/xmm7 state on exit.
Improve addr arg handling (simplify).
2020-10-15 19:24:00 +03:00
Nekotekina
494953997e
PPU/SPU: give up on conditional stores if locking fails
...
Restores Non-TSX behaviour partially.
2020-10-15 17:18:49 +03:00
Nekotekina
1b89ad00e7
SPU: restore some LR event setting logic after #9048
2020-10-15 17:18:49 +03:00
Nekotekina
3bddba0c7a
SPU: fix spu_getllar_tx
...
Was not executing.
2020-10-14 02:53:29 +03:00
Nekotekina
97cd641da9
TSX: reimplement spu_getllar_tx
...
Only used as a backup method of reading reservation data.
Increase long GETLLAR reporting threshold.
2020-10-13 21:10:04 +03:00
Nekotekina
91db4b724c
SPU: fix PUTLLC (TSX-FA)
...
Some forgotten checks may affect performance.
2020-10-13 17:46:03 +03:00
Nekotekina
dcff8c2637
Fix remaining vm::reservation_lock usages (for now)
...
Optimization can be restored later.
2020-10-13 12:04:59 +03:00
Nekotekina
dc39a9b84f
SPU: Report 'GETLLAR took too long'
...
Also move similar code in PPU.
2020-10-13 00:12:11 +03:00
Nekotekina
a806be8bc4
SPU: Implement S1/S2 (SNR) events ( closes #8789 )
...
Add TSX path in push_snr()
Add lock bits in ch_events
2020-10-12 21:41:57 +03:00
Eladash
95c1443e30
SPU: Validate reservation in GET commands (Accurate DMA) ( #9062 )
2020-10-12 15:20:06 +03:00
Nekotekina
5bd5a382c0
PPU: fix LDARX/LWARX in accurate mode ( closes #9058 )
...
Fixup after #9048
Use SSE intrinsics in mov_rdata.
2020-10-11 19:52:10 +03:00
Nekotekina
050c3e1d6b
Rewrite cpu_thread::suspend_all
...
Now it's a higher-order function.
Make only one thread do the hard work of thread pausing.
2020-10-10 13:58:48 +03:00
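A conceptual sketch of that higher-order shape (names and stubs are assumptions, not the real cpu_thread API): the caller hands suspend_all the work to run while all CPU threads are paused, instead of driving pause/resume itself.

```cpp
// Conceptual sketch under stated assumptions; the pausing machinery is stubbed out.
#include <utility>

inline void pause_all_cpu_threads_stub() { /* placeholder for the pausing mechanism */ }
inline void resume_all_cpu_threads_stub() { /* placeholder */ }

template <typename F>
auto suspend_all_sketch(F&& op)
{
    pause_all_cpu_threads_stub();        // in the real scheme one winner thread does this once
    auto result = std::forward<F>(op)(); // the caller's workload runs while everything is paused
    resume_all_cpu_threads_stub();
    return result;
}

int main()
{
    const int patched = suspend_all_sketch([] { return 42; });
    return patched == 42 ? 0 : 1;
}
```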
Nekotekina
346a1d4433
vm: rewrite reservation bits
...
Implement classic unique/shared locking concept.
Implement vm::reservation_light_op.
2020-10-10 13:58:48 +03:00
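A conceptual sketch of unique/shared locking packed into one atomic word (the bit layout is illustrative, not the actual vm reservation encoding).

```cpp
// Illustrative layout: bit 63 = unique (writer) lock, lower bits = shared count.
#include <atomic>
#include <cstdint>

constexpr std::uint64_t c_unique_bit = 1ull << 63;

inline bool try_lock_shared(std::atomic<std::uint64_t>& res)
{
    std::uint64_t old = res.load(std::memory_order_relaxed);

    // Fail if a unique holder is present; otherwise bump the shared count.
    return !(old & c_unique_bit) &&
        res.compare_exchange_strong(old, old + 1, std::memory_order_acquire);
}

inline void unlock_shared(std::atomic<std::uint64_t>& res)
{
    res.fetch_sub(1, std::memory_order_release);
}

inline bool try_lock_unique(std::atomic<std::uint64_t>& res)
{
    std::uint64_t expected = 0; // only lockable when no holders are recorded at all
    return res.compare_exchange_strong(expected, c_unique_bit, std::memory_order_acquire);
}

inline void unlock_unique(std::atomic<std::uint64_t>& res)
{
    res.fetch_and(~c_unique_bit, std::memory_order_release);
}
```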
Eladash
865464f607
SPU Local Storage capture
2020-10-08 19:05:14 +03:00
Eladash
493e57837b
SPU: Fix extremely rare bug of GETLLAR ( #9011 )
2020-10-03 09:31:28 +01:00
Eladash
56cebd99c2
SPU: Simplify logging of MFC commands ( #9004 )
2020-10-01 19:52:39 +03:00
Eladash
f4ca6f02a1
PPU: Implement support for 128-byte reservations coherency
2020-09-28 22:34:42 +03:00
Eladash
09cddc84be
SPU/PPU: Implement Atomic Cache Line Stores
2020-09-27 20:09:21 +03:00
Eladash
bfa78870cb
SPU: Fix unregistered channels in RCHCNT
...
Shouldn't throw an exception on real hardware.
2020-09-22 19:47:47 +03:00
Eladash
ad37259ccc
SPU: Implement many missing channel counts
2020-09-22 19:47:47 +03:00
Eladash
c436ef0c6f
SPU: Implement channels 70, 71, add naming for channel 69 ( #8932 )
2020-09-19 13:08:35 +01:00
eladash
36ac68b436
SPU: Implement events channel count, minor interrupts fixes
2020-09-18 21:57:24 +03:00
Eladash
3206378ae6
sys_spu: Fix overexecution of cpu_return()
2020-09-12 22:11:40 +03:00
Eladash
7ce790f369
SPU: Use ASM for AVX2 compilation instead of intrinsics
2020-09-12 18:49:49 +03:00
Eladash
9ff0b460a2
SPU: Make PUTLLUC LR event accurate
2020-09-11 09:02:18 +02:00
Eladash
4f0125a0e9
SPU: Remove "Accurate PUTLLUC" setting (always accurate)
2020-09-11 09:02:18 +02:00
Eladash
1e4655aef6
SPU: Remove STOP 0x0 hack ( #8873 )
2020-09-09 11:36:04 +01:00
Eladash
5060c779da
SPU: Use unaligned instructions in mov_rdata_avx (MSVC) ( #8851 )
2020-09-07 21:32:44 +01:00
Eladash
abc715bc5c
SPU: Make PUT transfers use SEQ-CST ordering on Accurate DMA ( #8844 )
2020-09-06 12:09:14 +01:00
Eladash
2688081656
SPU: Use unaligned AVX instructions for cmp_data_avx
2020-09-06 07:51:12 +03:00
Eladash
e4abd3dc5a
SPU: Do not ignore pending PUT transfers just because GET may not be cache-line atomic
...
This is not the proper way to emulate non-atomic GET transfers, as it makes it seem as if atomic PUT transfers aren't atomic. (TODO)
Incomplete GET cache-line accesses still do not verify data, though.
2020-09-05 22:23:55 +03:00
Eladash
c7a185d4e7
SPU: Fix not acquiring reservation locks on DMA with more than one cache line (Accurate DMA)
2020-09-05 22:23:55 +03:00
Eladash
4ffc58a8ce
SPU: Cleanup for Accurate PUTLLUC
...
Should no longer affect GET commands because Accurate DMA is available for this functionality.
2020-09-04 10:20:44 +02:00
Eladash
c5c9ea1b21
SPU: Make GET's full and aligned cache line accesses atomic with Accurate DMA
2020-09-04 10:20:44 +02:00
Eladash
73d23eb6e6
SPU: Implement Accurate DMA ( #8822 )
2020-09-02 23:58:29 +02:00
Nekotekina
ebc4a0188a
Restore some code
2020-08-28 01:54:39 +03:00
Eladash
47b545282e
SPU: Fix events ACK, minor optimizations ( #8771 )
2020-08-27 21:36:54 +01:00
Eladash
841b8fad38
SPU: Fix timer events
2020-08-24 01:57:32 +03:00
Eladash
0f8ca1f7c5
SPU: Implement RSX accurate reservations on TSX ( #8721 )
2020-08-13 00:00:37 +01:00
Eladash
e52dd9dc6f
SPU: Implement SYS_SPU_THREAD_OPTION_DEC_SYNC_TB_ENABLE ( #8657 )
2020-07-30 14:01:25 +01:00
Eladash
82068cf802
SPU: Fix spu_thread::cpu_stop() missed executions ( #8656 )
2020-07-30 10:07:18 +01:00
Eladash
a029a94c73
SPU: Use waitable atomics for SPU channels interface
2020-07-23 13:45:58 +03:00
Eladash
f8d2d8ca11
SPU/Windows: Fix LS memory mirrors
...
This is a workaround, but it is needed because of how utils::shm works on the Windows path.
2020-07-19 17:58:49 +03:00
Eladash
c37bc3c55c
SPU: Make spu_thread::offset private
2020-07-19 17:58:49 +03:00
Eladash
af1ceb1151
SPU LLVM: LS Memory Mirrors (Optimize loads/stores)
2020-07-18 02:01:33 +03:00
Eladash
235d12aa6b
SPU MFC: Never clear tag status in WrTagUpdate
2020-07-10 02:52:02 +03:00
Eladash
5d1fc546a8
SPU MFC: Fix MFC_WrTagUpdate channel count
...
Always report available; on real hardware this is just a hint that the previous tag update hasn't been checked by the MFC yet, avoiding blocking writes and allowing the SPU to execute some code while the MFC processes the previous update request.
The exception is MFC_TAG_UPDATE_IMMEDIATE, where the SPU also waits for the MFC to process it.
2020-07-10 02:52:02 +03:00
Eladash
84470c34db
SPU: Disable PUTLLC NOP transfers detection on TSX path
2020-07-09 03:17:35 +01:00
Eladash
f8dbfa1d1e
SPU: Implement GETLLAR polling detection
2020-07-09 03:17:35 +01:00
Eladash
dc25a3fa2a
PPU debugger: Show stack address of each function
2020-07-06 18:58:16 +02:00
Eladash
2c93fecd8b
SPU: Use named constants for MFC tag updates
2020-06-27 20:42:41 +01:00
Eladash
f29589e5cf
SPU debugger: Add atomic status and tag update channels information
2020-06-27 07:04:37 +01:00
Eladash
731d4330fe
v128: A few optimizations ( #8432 )
2020-06-15 17:24:04 +03:00
Eladash
5777a1d426
SPU: Implement EBUSY error on non-empty mailbox (sys_spu_thread_send_event/sys_event_flag_set_bit)
...
Write into inbound mailbox under mutex.
2020-06-15 17:08:57 +03:00
Eladash
c15b5f1eca
SPU: Move check_state() outside of mutex scope
...
It can result in a deadlock in some cases; cpu flags are checked after this function anyway.
2020-06-15 17:08:57 +03:00
sampletext32
437f374bae
Fix some checks
2020-06-04 19:48:08 +03:00
Nekotekina
abf9a08ee3
Fix warnings
2020-05-27 18:41:17 +03:00
Eladash
7c3166a0c6
SPU MFC: Fix MFC_WrListStallAck on interpreters
2020-05-20 22:55:30 +03:00
Eladash
4405f46aec
SPU MFC: Fix SN interrupts
2020-05-20 22:55:30 +03:00
Eladash
81684919f5
SPU MFC: Implement MFC_SDCRZ_CMD
2020-05-20 22:55:30 +03:00
Eladash
a2653532ef
SPU reservations (TSX): Remove wait flag in PUTQLLUC
2020-05-17 14:20:21 +03:00
Eladash
61f43d78df
SPU: Minor cleanup of exception in stop_and_signal
2020-05-14 16:58:50 +01:00
Eladash
54dd9f4eae
sys_spu: Fix sys_spu_thread_group_terminate vs sys_spu_thread_group_exit race on values
2020-05-14 16:58:50 +01:00
Eladash
9266507e4c
SPU: Implement spu_channel_(4_)t::try_read
2020-05-13 19:36:44 +03:00
Eladash
7ff25588f4
sys_spu: Minor cleanup of group termination process
2020-05-13 19:36:44 +03:00
Eladash
525453794f
SPU/PPU reservations: Optimizations part 1
...
- Implement vm::reservation_trylock: optimized locking for reservation stores with no waiting. Always fail if reservation lock bits are set.
- Make SPU accurate GET transfers on non-TSX not modify reservation lock bits.
- Add some optimization regarding writes of unmodified reservation data.
2020-05-13 11:10:13 +01:00
Eladash
f95b81574f
sys_spu: Fix race in sys_spu_thread_group_destroy and other minor fixes ( #8182 )
...
* sys_spu: Fix race in sys_spu_thread_group_destroy and other minor fixes
* SPU: Wait for all threads to have error codes if exited by sys_spu_thread_exit
On last thread in group to run.
* sys_spu: Fix sys_spu_thread_group_start
* Fixup and fix sys_spu_thread_group_terminate
I don't know why "- !group->running" was put there in the first place, but it's probably no longer relevant due to other changes and was causing other issues, such as not always waiting for the last SPU thread to set the group state to INITIALIZED.
2020-05-11 21:24:04 +03:00
Eladash
09797c3584
sys_spu: Improve sys_spu_thread_get_exit_status
2020-05-10 03:46:11 +01:00
Eladash
1bd6cb2105
SPU/PPU debugger: use ':' instead of '='
2020-05-05 13:46:26 +03:00
Eladash
edde748519
sys_event_queue: Fix forced event queue destruction
...
Add missing last existence check at sys_spu_thread_(try)receive_event and lv2_event_queue::send.
2020-05-04 01:10:19 +03:00
Eladash
0e6937a359
SPU GETLLAR: Don't use loop detection for TSX
2020-05-02 14:57:38 +03:00
Eladash
2b75df22d9
sys_event_queue: Fix ports disconnection after queue destruction
2020-04-30 18:58:42 +03:00
Nekotekina
19acf260d8
SPU DMA: Fix PUTLLUC (TSX)
...
Prevent edge case of missing store.
2020-04-29 15:40:41 +03:00
Eladash
954e3f6e6c
Fixup for cpu_flag::pause state check after #8114
2020-04-29 05:56:47 +03:00
Eladash
a505d87565
Partial revert of 3be687cd18
2020-04-28 20:20:19 +03:00
Nekotekina
790fd9ce14
SPU DMA: implement cmp_rdata_avx
...
Use a technique similar to mov_rdata_avx, with inline assembly.
2020-04-28 17:58:26 +03:00
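A hedged sketch of a 128-byte compare using AVX intrinsics (the real cmp_rdata_avx uses inline assembly, but the data flow is similar).

```cpp
// Illustrative sketch; compile with AVX2 enabled.
#include <immintrin.h>

inline bool cmp_128_avx_sketch(const void* a, const void* b)
{
    const __m256i* x = static_cast<const __m256i*>(a);
    const __m256i* y = static_cast<const __m256i*>(b);

    // XOR the four 32-byte chunks pairwise and OR the differences together.
    const __m256i d0 = _mm256_xor_si256(_mm256_loadu_si256(x + 0), _mm256_loadu_si256(y + 0));
    const __m256i d1 = _mm256_xor_si256(_mm256_loadu_si256(x + 1), _mm256_loadu_si256(y + 1));
    const __m256i d2 = _mm256_xor_si256(_mm256_loadu_si256(x + 2), _mm256_loadu_si256(y + 2));
    const __m256i d3 = _mm256_xor_si256(_mm256_loadu_si256(x + 3), _mm256_loadu_si256(y + 3));
    const __m256i acc = _mm256_or_si256(_mm256_or_si256(d0, d1), _mm256_or_si256(d2, d3));

    // Equal iff the accumulated difference is all zeroes.
    return _mm256_testz_si256(acc, acc) != 0;
}
```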
Eladash
7da8ba5c15
Wait for SPU event to be received in sys_spu_thread_group_exit
...
Also use an atomic check for group->run_state outside the mutex;
explicitly forbid group termination while we are waiting for an event to be received.
2020-04-28 14:58:17 +03:00
Eladash
a9c18964b6
Add missing cpu state check in sys_spu_thread_receive_event
2020-04-28 14:58:17 +03:00
Eladash
9506676223
SPU debugger: dump Stall Stat and SRR0
2020-04-28 14:27:40 +03:00