* Update asmjit dependency (aarch64 branch)
* Disable USE_DISCORD_RPC by default
* Dump some JIT objects in rpcs3 cache dir
* Add SIGILL handler for all platforms
* Fix resetting zeroing denormals in thread pool
* Refactor most v128:: utils into global gv_** functions
* Refactor PPU interpreter (incomplete), remove "precise"
* - Instruction specializations with multiple accuracy flags
* - Adjust calling convention for speed
* - Removed precise/fast setting, replaced with static
* - Started refactoring the interpreters to be built at runtime via JIT (I got tired of poor compiler optimizations)
* - Expose some accuracy settings (SAT, NJ, VNAN, FPCC); a sketch follows this list
* - Add exec_bytes PPU thread variable (akin to cycle count)
* PPU LLVM: fix VCTUXS+VCTSXS instruction NaN results
* SPU interpreter: remove "precise" for now (extremely non-portable)
* - As with PPU, settings changed to static/dynamic for interpreters.
* - Precise options will be implemented later
* Fix termination after fatal error dialog
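To illustrate the "multiple accuracy flags" idea from the PPU items above, here is a minimal sketch; the enum, flag names, and helper functions are assumptions made for illustration, not the actual rpcs3 identifiers.

```cpp
// Hypothetical sketch only: names are invented, not rpcs3 code.
#include <cstdint>

// Accuracy aspects an interpreter specialization may honor.
enum class ppu_acc_flags : std::uint32_t
{
    none = 0,
    sat  = 1u << 0, // saturation bit accuracy
    nj   = 1u << 1, // non-Java mode (denormal flushing) accuracy
    vnan = 1u << 2, // vector NaN accuracy
    fpcc = 1u << 3, // FPSCR condition code accuracy
};

constexpr ppu_acc_flags operator|(ppu_acc_flags a, ppu_acc_flags b)
{
    return ppu_acc_flags(std::uint32_t(a) | std::uint32_t(b));
}

constexpr bool test_flag(ppu_acc_flags set, ppu_acc_flags f)
{
    return (std::uint32_t(set) & std::uint32_t(f)) != 0;
}

int main()
{
    // A specialization would be picked at runtime from the enabled flags,
    // so only the requested accuracy costs are paid per instruction.
    const ppu_acc_flags enabled = ppu_acc_flags::sat | ppu_acc_flags::vnan;
    return test_flag(enabled, ppu_acc_flags::fpcc) ? 1 : 0; // fpcc disabled
}
```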
Rewrote the following global utility constants:
`umax` returns the maximum number, restricted to unsigned types.
`smax` returns the maximum signed number, restricted to integral types.
`smin` returns the minimum signed number, restricted to signed types.
`amin` returns `smin` or zero, less restricted.
`amax` returns `smax` or `umax`, less restricted.
Fix operators == and <=> for synthesized rel-ops.
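As a rough idea of how such a constant can be expressed, here is a simplified sketch, assuming a tag type with a conversion operator constrained to the allowed types; this is illustrative only, not the actual rpcs3 definition (which also carries the comparison operators mentioned above).

```cpp
// Illustrative sketch of the umax idea, not the actual rpcs3 code.
#include <cstdint>
#include <limits>
#include <type_traits>

struct umax_t
{
    // Converts to the maximum value of any unsigned integral type.
    template <typename T, typename = std::enable_if_t<std::is_unsigned_v<T>>>
    constexpr operator T() const
    {
        return std::numeric_limits<T>::max();
    }
};

inline constexpr umax_t umax{};

int main()
{
    std::uint32_t a = umax; // 0xFFFF'FFFF
    std::uint64_t b = umax; // 0xFFFF'FFFF'FFFF'FFFF
    return (a == std::numeric_limits<std::uint32_t>::max() &&
            b == std::numeric_limits<std::uint64_t>::max()) ? 0 : 1;
}
```

The same token thus initializes any unsigned type with that type's own maximum, which is what the descriptions above boil down to.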
Add atomic_t<>::observe() (relaxed load)
Add atomic_fence_XXX() (barrier functions)
Get rid of the MFENCE instruction, replacing it with a no-op LOCK OR on the stack (sketched below).
Remove the <atomic> dependency from stdafx.h and relevant headers.
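For context: an "observe"-style read corresponds to a relaxed load, and the MFENCE replacement relies on a locked read-modify-write of a dummy stack slot acting as a full barrier on x86. A minimal sketch of both points follows; the function names are invented for illustration and do not match the rpcs3 API exactly.

```cpp
#include <atomic>

// Relaxed load: the standard-library equivalent of an "observe"-style read.
inline unsigned observe_relaxed(const std::atomic<unsigned>& value)
{
    return value.load(std::memory_order_relaxed);
}

// Full barrier without MFENCE: on x86-64 a locked RMW on a dummy stack
// slot orders preceding loads and stores and is typically cheaper.
inline void full_fence()
{
#if defined(__x86_64__) && (defined(__GNUC__) || defined(__clang__))
    unsigned dummy = 0;
    __asm__ __volatile__("lock orl $0, %0" : "+m"(dummy) : : "memory");
#else
    // Portable fallback for other targets/compilers.
    std::atomic_thread_fence(std::memory_order_seq_cst);
#endif
}

int main()
{
    std::atomic<unsigned> v{42};
    full_fence();
    return observe_relaxed(v) == 42 ? 0 : 1;
}
```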
Notification with a "phantom value" could be useful in theory, as an alternative way of sending a signal from the notifier, but the current implementation is just useless.
Also fixed slow_mutex (not used anywhere).
Static initialization is all-zeros anyway, but constexpr was killing my IntelliSense and probably also affected compile time.
Also make some internal structures hidden ("static").
Billions of events can be reduced to thousands, saving a few seconds of CPU time.
Only 4 slots are available (arbitrarily), and only 1 is usually used.
Other slots are used only for waiting on multiple atomics.
Allocation takes ~80 ns instead of ~40 ns, and the same applies to deallocation.
Loops don't exist here, only a 4-level semaphore tree.
The worst case only happens under concurrency, not from looping.
This optimization is not really necessary at the current state of RPCS3; it is more of a test of C++ compilers and the MSVC u128 implementation.
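As a purely conceptual, single-threaded analogy for the fixed-depth point (invented names, a smaller fan-out, and none of the lock-free machinery of the real semaphore tree):

```cpp
// Analogy only: a 4-level bitmap tree with fan-out 8 (4096 leaf slots).
// A bit stays set while the corresponding subtree still has a free slot,
// so acquiring a slot is exactly four bit scans, never a loop over slots.
#include <array>
#include <bit>
#include <cstdint>

struct tree4
{
    std::uint8_t l0;
    std::array<std::uint8_t, 8> l1;
    std::array<std::uint8_t, 64> l2;
    std::array<std::uint8_t, 512> l3;

    tree4() : l0(0xff)
    {
        l1.fill(0xff);
        l2.fill(0xff);
        l3.fill(0xff);
    }

    // Returns a slot index in [0, 4096), or -1 if full.
    int acquire()
    {
        if (!l0)
            return -1;

        const unsigned a = std::countr_zero(l0);
        const unsigned b = std::countr_zero(l1[a]);
        const unsigned c = std::countr_zero(l2[a * 8 + b]);
        const unsigned d = std::countr_zero(l3[(a * 8 + b) * 8 + c]);

        // Mark the leaf used and clear any "has free" bits that ran out.
        l3[(a * 8 + b) * 8 + c] &= ~(1u << d);
        if (!l3[(a * 8 + b) * 8 + c]) l2[a * 8 + b] &= ~(1u << c);
        if (!l2[a * 8 + b]) l1[a] &= ~(1u << b);
        if (!l1[a]) l0 &= ~(1u << a);

        return int(((a * 8 + b) * 8 + c) * 8 + d);
    }
};

int main()
{
    tree4 t;
    return t.acquire() == 0 && t.acquire() == 1 ? 0 : 1;
}
```

The point is only that the cost is bounded by the tree depth; contention, not iteration, is what makes the worst case expensive.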
Remove unnecessary optimization from cond_alloc().
The optimistic case was absolutely dominating anyway, although the whole function is a dirty hack.
Now scanning through all threads is faster.