* Update asmjit dependency (aarch64 branch)
* Disable USE_DISCORD_RPC by default
* Dump some JIT objects in rpcs3 cache dir
* Add SIGILL handler for all platforms
* Fix resetting zeroing denormals in thread pool
* Refactor most v128:: utils into global gv_** functions
* Refactor PPU interpreter (incomplete), remove "precise"
* - Instruction specializations with multiple accuracy flags (see the sketch after this list)
* - Adjust calling convention for speed
* - Removed precise/fast setting, replaced with static
* - Started refactoring interpreters so they can be built at runtime (JIT)
* (I got tired of poor compiler optimizations)
* - Expose some accuracy settings (SAT, NJ, VNAN, FPCC)
* - Add exec_bytes PPU thread variable (akin to cycle count)
* PPU LLVM: fix VCTUXS+VCTSXS instruction NaN results
* SPU interpreter: remove "precise" for now (extremely non-portable)
* - As with PPU, settings changed to static/dynamic for interpreters.
* - Precise options will be implemented later
* Fix termination after fatal error dialog
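Sketch for the accuracy-flag instruction specializations mentioned above (the flag set, state struct, and handler below are hypothetical illustrations, not the actual code):

```cpp
#include <cstdint>

enum ppu_acc_flags : unsigned
{
    use_sat = 1u << 0, // track saturation (SAT); NJ, VNAN, FPCC would follow
};

struct ppu_state { bool sat = false; };

// One template per handler: each accuracy-flag combination becomes its own
// specialization, so disabled features cost nothing at run time.
template <unsigned Flags>
void add_signed_sat(ppu_state& ppu, std::int32_t& d, std::int32_t a, std::int32_t b)
{
    const std::int64_t sum = std::int64_t{a} + b;
    d = static_cast<std::int32_t>(sum);

    if constexpr (Flags & use_sat)
    {
        // compiled only into specializations that request SAT accuracy
        if (sum != d)
        {
            d = sum < 0 ? INT32_MIN : INT32_MAX;
            ppu.sat = true;
        }
    }
}
```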
- Try to use REP MOVSB when the size of the transfer is above a certain threshold
- This threshold is determined by the ERMS and FSRM CPUID flags
- The threshold values are (roughly) taken from glibc
- A threshold of 0xFFFFFFFF indicates that the CPU has neither flag
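A minimal sketch of the flag detection (the helper name and exact cutoff values are illustrative):

```cpp
#if defined(_MSC_VER)
#include <intrin.h>
#else
#include <cpuid.h>
#endif
#include <cstdint>

// ERMS is CPUID.(EAX=7,ECX=0):EBX bit 9; FSRM is EDX bit 4 on the same leaf.
inline std::uint32_t rep_movsb_threshold()
{
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
#if defined(_MSC_VER)
    int regs[4];
    __cpuidex(regs, 7, 0);
    ebx = static_cast<unsigned>(regs[1]);
    edx = static_cast<unsigned>(regs[3]);
#else
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0xFFFFFFFF; // leaf 7 not supported
#endif

    if (edx & (1u << 4)) return 1024;  // FSRM: profitable even for short copies
    if (ebx & (1u << 9)) return 2048;  // ERMS only: use a larger cutoff
    return 0xFFFFFFFF;                 // neither flag: never take the REP MOVSB path
}
```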
Rewrote the following global utility constants (sketched below):
`umax` returns the maximum number, restricted to unsigned types.
`smax` returns the maximum signed number, restricted to integral types.
`smin` returns the minimum signed number, restricted to signed types.
`amin` returns smin or zero, less restricted.
`amax` returns smax or umax, less restricted.
Fix operators == and <=> for synthesized rel-ops.
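A minimal sketch of how one of these constants can be built (simplified; the real definitions also provide the comparison operators mentioned above):

```cpp
#include <limits>
#include <type_traits>

struct umax_helper
{
    // Converts only to unsigned types, yielding their maximum value.
    template <typename T, typename = std::enable_if_t<std::is_unsigned_v<T>>>
    constexpr operator T() const noexcept
    {
        return std::numeric_limits<T>::max();
    }
};

constexpr umax_helper umax{};

// Usage: the same constant adapts to the destination type.
constexpr unsigned char      a = umax; // 0xFF
constexpr unsigned long long b = umax; // 0xFFFF'FFFF'FFFF'FFFF
static_assert(a == 0xFF && b == ~0ull);
```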
cmp_rdata and mov_rdata use __attribute__((always_inline)) without inline,
which is not supported on current g++ (see RPCS3#1546).
Moreover, __attribute__((always_inline)) is a no-op when used without inline,
so just remove it.
A proper fix is to move the two functions into a header file as static
(with FORCE_INLINE) so they can be correctly inlined by the compiler.
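The proposed fix, sketched (body simplified; the real functions copy 128-byte reservation data with SIMD):

```cpp
#include <cstring>

#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#else
// always_inline must accompany the inline keyword to satisfy g++.
#define FORCE_INLINE __attribute__((always_inline)) inline
#endif

// Defined as static in a header so every translation unit can inline it.
static FORCE_INLINE void mov_rdata(void* dst, const void* src)
{
    std::memcpy(dst, src, 128);
}
```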
* Use atomic waitables instead of global thread wait as often as possible. (see the sketch after this list)
* Add ::is_stopped() and ::is_paused(), constexpr cpu-flag test functions which can be used in atomic loops and with atomic wait.
* Fix notification bug of sys_spu_thread_group_exit/terminate. (old bug, exacerbated by #9117)
* Restore function time statistics at Emu.Stop(). (instead of the current "X syscall failed with 0x00000000 : 0")
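A minimal sketch of the intended wait pattern, using std::atomic in place of the project's atomic_t (flag names simplified):

```cpp
#include <atomic>

enum cpu_flag : unsigned { cf_stop = 1u << 0, cf_exit = 1u << 1, cf_pause = 1u << 2 };

// constexpr-style tests over a sampled state value, usable inside atomic loops.
constexpr bool is_stopped(unsigned s) { return s & (cf_stop | cf_exit); }
constexpr bool is_paused(unsigned s)  { return s & cf_pause; }

void wait_for_work(std::atomic<unsigned>& state, std::atomic<unsigned>& work)
{
    while (work.load() == 0)
    {
        if (is_stopped(state.load()))
            return;   // react promptly instead of blocking in a global wait

        work.wait(0); // sleep on this atomic until notified or changed
                      // (the real pattern also notifies `work` on state changes)
    }
}
```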
Add atomic_t<>::observe() (relaxed load)
Add atomic_fence_XXX() (barrier functions)
Get rid of the MFENCE instruction; replace it with a no-op LOCK OR on the stack.
Remove the <atomic> dependency from stdafx.h and relevant headers.
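A sketch of the MFENCE replacement (the function name is assumed): a dummy locked read-modify-write on the stack acts as a full barrier on x86 and is typically cheaper than MFENCE.

```cpp
#if defined(_MSC_VER)
#include <intrin.h>
#endif

// x86-64-only sketch; a locked no-op OR serializes like MFENCE.
inline void fence_seq_cst()
{
#if defined(_MSC_VER)
    long dummy = 0;
    _InterlockedOr(&dummy, 0); // lock or dword ptr [dummy], 0
#else
    __asm__ __volatile__("lock orl $0, (%%rsp)" ::: "cc", "memory");
#endif
}
```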
These conversions don't exist in std::shared_ptr-like types.
But I don't want to bother with == operators until we have proper C++20.
Removed trivial conversion for atomic_ptr because it's heavyweight.
* SPU Debugger: Fix try_get_insert_mask_info
* Debugger: Always update thread state on context's data change
It is no longer necessary to click on a thread's instructions for actions to work!
Move the bits to the highest positions, in RWX order.
Use only one reserved value (W = locked).
Assume a lock size of 128 for range_locked.
Add a new "Size" template argument that replaces the normal argument.
Fix a logic error in the callstack handling code: always set `first` to false after the first iteration.
Add an explicit check for zero return addresses. The current validity checks may not catch this case properly when the address sits on an interrupt handler entry point (which may contain valid code).
Do not allow 0x3FFF0 to be a back chain address because it needs space for the LR save area; only 0x3FFE0 and below satisfy this criterion.
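A minimal sketch of the two checks combined (the helper name is an assumption):

```cpp
#include <cstdint>

// Reject zero return addresses (they can sit on an interrupt handler entry
// point containing valid-looking code), and reject 0x3FFF0 as a back-chain
// address: it leaves no room for the LR save area, so only 0x3FFE0 and
// below are acceptable.
constexpr bool is_valid_frame(std::uint64_t back_chain, std::uint64_t ret_addr)
{
    return ret_addr != 0 && back_chain != 0 && back_chain <= 0x3FFE0;
}
```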
Reduced static memory amount for waitable atomics.
Allow notifier to skip notifications if wait/notify masks don't overlap.
Improve raw_notify to wake up a thread by its ID; add a thread_id argument.
Add an optional mask argument to notify_one() and notify_all().
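A sketch of the overlap rule (types simplified; the mask parameter mirrors the new notify_one()/notify_all() argument):

```cpp
#include <cstdint>

// A waiter registers the bits it cares about; a notifier passes the bits it
// changed. When the masks don't overlap, the notification can be skipped.
constexpr bool should_wake(std::uint64_t wait_mask, std::uint64_t notify_mask)
{
    return (wait_mask & notify_mask) != 0;
}

static_assert(!should_wake(1u << 7, 1u << 4)); // disjoint: skip the wake-up
static_assert(should_wake(0xff, 1u << 4));     // overlapping: wake the waiter
```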
Basically, it uses the timestamp counter (see the sketch below).
Rewrote vm::reservation_op on the same principle.
Rewrote another transaction helper.
Add two new settings for configuring fallbacks.
Two limits are specified in nanoseconds (first and second).
Fix PUTLLC reload logic (prevent reusing garbage).
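A sketch of the timestamp-counter conversion behind the two limits (the helper name and tsc_freq_hz parameter are assumptions):

```cpp
#include <cstdint>

// Convert a nanosecond limit into timestamp-counter ticks once, so the
// transaction loop can compare plain rdtsc deltas against it.
// First limit: stop optimistically retrying the transaction.
// Second limit: give up and take the heavy fallback path.
constexpr std::uint64_t ns_to_tsc(std::uint64_t ns, std::uint64_t tsc_freq_hz)
{
    // Fine for small limits; a production version would guard against overflow.
    return ns * tsc_freq_hz / 1'000'000'000ull;
}
```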
Add a prefetch hint list parameter.
Workloads may be executed by another thread on another CPU core.
This means they may benefit from directly prefetching the hinted data.
Also implement mov_rdata_nt for "streaming" data from such workloads.
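A sketch of both pieces on x86 (names assumed; the real mov_rdata_nt operates on 128-byte aligned reservation data):

```cpp
#include <immintrin.h>

// Prefetch a hinted line so the consumer on another core finds it warm.
inline void prefetch_hint(const void* ptr)
{
    _mm_prefetch(static_cast<const char*>(ptr), _MM_HINT_T0);
}

// Copy 128 bytes with non-temporal ("streaming") stores, keeping the data
// out of the producing core's cache. dst is assumed 16-byte aligned.
inline void mov_rdata_nt(void* dst, const void* src)
{
    for (int i = 0; i < 128; i += 16)
    {
        const __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(static_cast<const char*>(src) + i));
        _mm_stream_si128(reinterpret_cast<__m128i*>(static_cast<char*>(dst) + i), v);
    }

    _mm_sfence(); // make the streaming stores globally visible in order
}
```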