Replaces `std::shared_ptr` with `stx::atomic_ptr` and `stx::shared_ptr`.
Notes to programmers:
* This PR kills the use of `dynamic_cast`, `std::dynamic_pointer_cast` and `std::weak_ptr` on IDM objects. A possible replacement is to save the object ID on the base object, then resolve the saved ID to the destination type with `idm::check`/`idm::get_unlocked`; the result may be null. A null-pointer check is how you tell a type mismatch (as with dynamic_cast) or object destruction (as with weak_ptr locking). See the sketch after these notes.
* Double-inheritance on IDM objects should be used with care: `stx::shared_ptr` does not support constant-evaluated pointer offsetting to the parent/child type.
* `idm::check/get_unlocked` can now be used anywhere.
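For the dynamic_cast/weak_ptr replacement above, a minimal sketch assuming RPCS3's `Emu/IdManager.h`; the object types, the `derived_id` member and the ID range are hypothetical, only `idm::get_unlocked` is the getter named above:

```cpp
#include "Emu/IdManager.h" // assumed location of idm::

// Hypothetical IDM object, for illustration only.
struct my_derived_obj
{
    static const u32 id_base = 0x1000; // placeholder ID range
};

struct my_base_obj
{
    // Instead of holding a std::weak_ptr or relying on dynamic_cast,
    // save the ID of the related object at creation time.
    u32 derived_id = 0;

    void use_derived()
    {
        // Resolve the saved ID to the destination type on demand.
        if (auto obj = idm::get_unlocked<my_derived_obj>(derived_id))
        {
            // Valid: same information a successful dynamic_cast or
            // weak_ptr::lock() would have provided.
        }
        else
        {
            // Null means type mismatch (as dynamic_cast) or the object
            // was destroyed (as weak_ptr locking).
        }
    }
};
```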
Misc fixes:
* Fixes some segfaults in RPCN's interaction with IDM.
* Fix deadlocks in the access violation handler due to lock recursion.
* Fixes a race condition in process exit-spawn when reading memory containers.
* Fix a bug that can theoretically prevent RPCS3 from booting: change the `id_manager::typeinfo` comparison to compare members instead of using `memcmp`, which can fail spuriously on padding bytes (see the sketch after this list).
* Ensure all IDM types inheriting from a base have either `id_base` or `id_type` defined locally; this allows getters such as `idm::get_unlocked<lv2_socket, lv2_socket_raw>()`, which were broken before. (requires save-state invalidation)
* Removes broken operator[] overload of `stx::shared_ptr` and `stx::single_ptr` for non-array types.
* Make vm::unmap atomic: squash the memory unmapping process inside this function while still using the same VM mutex ownership.
* Make vm::unmap not fail due to random vm::get calls; the shared_ptr reference count is no longer used as a condition.
* Fix sys_mmapper_free_address spuriously failing with EBUSY due to random vm::get calls.
* Fix sys_vm_unmap race condition due to non-atomic vm::unmap.
* Add an optional verification block ptr arg to vm::unmap, used by patches.
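To illustrate the `id_manager::typeinfo` fix above: `memcmp` over a whole struct also compares padding bytes, whose values are indeterminate, so equality can fail spuriously even when every member matches. A minimal sketch with an illustrative struct (not the real `typeinfo` layout):

```cpp
#include <cstdint>
#include <cstring>

// The u16 member leaves padding bytes at the end of the struct.
struct typeinfo_example
{
    std::uint32_t base;
    std::uint16_t step;

    // Member-wise comparison ignores padding and is always well-defined.
    bool operator==(const typeinfo_example& rhs) const
    {
        return base == rhs.base && step == rhs.step;
    }
};

bool unreliable_equal(const typeinfo_example& a, const typeinfo_example& b)
{
    // Padding bytes are compared too, so this can spuriously return false.
    return std::memcmp(&a, &b, sizeof(typeinfo_example)) == 0;
}
```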
Rewrote the following global utility constants:
* `umax` returns max number, restricted to unsigned.
* `smax` returns max signed number, restricted to integrals.
* `smin` returns min signed number, restricted to signed.
* `amin` returns smin or zero, less restricted.
* `amax` returns smax or umax, less restricted.
Fix operators == and <=> for synthesized rel-ops.
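A minimal sketch of the idea behind a constant like `umax` (the real definitions live in the project's utility headers and differ in detail): it compares equal to the maximum value of the other operand's type, restricted to unsigned types, and the remaining relational operators are synthesized from `==`/`<=>`:

```cpp
#include <cstdint>
#include <limits>
#include <type_traits>

// Illustrative stand-in for `umax`, not the project's definition.
struct umax_sketch_t
{
    template <typename T>
        requires std::is_unsigned_v<T>
    constexpr bool operator==(const T& rhs) const
    {
        return rhs == std::numeric_limits<T>::max();
    }
};

inline constexpr umax_sketch_t umax_sketch{};

// C++20 rewritten comparisons provide != and the reversed operand
// order for free from operator==.
static_assert(umax_sketch == std::uint32_t{0xffffffff});
static_assert(std::uint64_t{0} != umax_sketch);
```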
* Fix typo in lv2_obj::create.
* Always save ipc_key as 0 for non-shared object creations, regardless of the value set by the creation attribute.
* Show IPC key of shared memory (sys_mmapper) memory objects in kernel explorer.
* Remove the custom event queue IPC management in favour of the universal LV2 approach.
* Move ipc_manager to FXO.
* Fix an ipc_manager internal storage memory leak: deallocate the entry when the IPC object is destroyed.
* Rewrite lv2_obj::create to be simpler (removes a lot of duplicated code).
* Always execute lv2_obj::create under both the IPC and IDM mutexes at once (not in non-atomic single steps). This fixes a potential case where concurrency can cause IDM to contain two or more different objects with the same IPC key under SYS_SYNC_NOT_CARE (instead of the same object). A sketch of the idea follows below.
* Do not rely on the smart pointer reference count to tell whether the object exists. Use a similar approach to event queues, which makes error checking accurate.
* Optimize lv2_event_port by using std::shared_ptr for queue which wasn't allowed before.
* Use atomic waitables instead of global thread wait as often as possible.
* Add ::is_stopped() and ::is_paused(), which can be used in atomic loops and with atomic wait. (constexpr cpu flags test functions)
* Fix notification bug of sys_spu_thread_group_exit/terminate. (old bug, enhanced by #9117)
* Restore function time statistics at Emu.Stop(). (instead of the current "X syscall failed with 0x00000000 : 0")
* Fix a corner case where an SPU thread has the same ID as a PPU thread.
* Fix a potential deadlock on Emu.Stop() while sending event in EBUSY loop.
* Thread-specific notifications.
Enables trivial synchronization between shared memory.
Reduces memory usage, but potentially degrades performance.
Rename an overload of vm::passive_lock to vm::range_lock.
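For the lv2_obj::create change above (both mutexes at once), a minimal sketch with hypothetical maps and mutexes standing in for the IPC and ID managers; the point is that the IPC-key lookup and the registration happen inside one combined critical section, so two concurrent creators with the same key cannot both miss the lookup and register distinct objects:

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>

// Hypothetical globals, for illustration only.
std::mutex g_ipc_mutex;  // protects the IPC key map
std::mutex g_idm_mutex;  // protects the ID map
std::map<std::uint64_t, std::shared_ptr<int>> g_ipc_map;
std::map<std::uint32_t, std::shared_ptr<int>> g_id_map;

std::shared_ptr<int> create_shared(std::uint64_t ipc_key, std::uint32_t id)
{
    // Lock both mutexes atomically (deadlock-free), not in separate,
    // non-atomic single steps.
    std::scoped_lock lock(g_ipc_mutex, g_idm_mutex);

    // Lookup and insertion form one critical section: a second creator
    // with the same key finds the existing object instead of inserting
    // a duplicate under the same IPC key.
    if (auto it = g_ipc_map.find(ipc_key); it != g_ipc_map.end())
    {
        g_id_map.emplace(id, it->second);
        return it->second;
    }

    auto obj = std::make_shared<int>(0);
    g_ipc_map.emplace(ipc_key, obj);
    g_id_map.emplace(id, obj);
    return obj;
}
```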
- Fix a typo in OpenAL
- Fix typo in cellHttp.h
- Remove unused variables in catch blocks
- Use 64-bit shifts
- Use use_count() with shared pointers; unique() is deprecated and getting removed (see the sketch after this list)
- Explicitly cast boolean to int
- Fix signed/unsigned issues with loop variables
- Fix missing return statement (the code path is unreachable, but compiler wants a return)
- Fix `*/` outside of a comment
- Fix duplicate layout name
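On the use_count item above: `std::shared_ptr::unique()` was deprecated in C++17 and removed in C++20, so the check is spelled with `use_count()` instead. A trivial sketch:

```cpp
#include <memory>

void release_if_sole_owner(const std::shared_ptr<int>& ptr)
{
    // Deprecated/removed form:
    // if (ptr.unique()) { ... }

    // Equivalent replacement (note: like unique(), this is only a hint
    // if other threads can copy the pointer concurrently):
    if (ptr.use_count() == 1)
    {
        // sole owner
    }
}
```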
- Add vm_locking.h and vm_reservation.h and move relevant functions and types to these headers.
- Change the include order and make the vm_ptr.h, vm_var.h and vm_ref.h headers usable individually, with them including vm.h instead of the other way around.
- Because usage of vm::ptr now requires including vm_ptr.h instead of vm.h, updated multiple #includes (see the sketch after this list).
- Added additional #includes of vm_reservation.h and vm_locking.h where vm::reservation_* and locking-related functions are used.
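A small sketch of the resulting include pattern (the Emu/Memory/ paths are assumed; only the header names come from the list above):

```cpp
// Before: vm.h was pulled in even when only vm::ptr was needed.
// #include "Emu/Memory/vm.h"

// After: the dedicated headers are usable individually and include
// vm.h themselves where necessary.
#include "Emu/Memory/vm_ptr.h"            // where vm::ptr is used
// #include "Emu/Memory/vm_reservation.h" // where vm::reservation_* is used
// #include "Emu/Memory/vm_locking.h"     // where locking helpers are used
```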
* Implement both RawSPU and threaded SPU page fault recovery
* Guard page_fault_notification_entries access with a mutex
* Add missing lock in sys_ppu_thread_recover_page_fault/get_page_fault_context
* Fix the EINVAL check in sys_ppu_thread_recover_page_fault; previously, when the event was not found, begin() was erased and CELL_OK was returned.
* Fixed the page fault recovery waiting logic:
- Do not rely on a single thread_ctrl notification (unsafe).
- Avoided a race where ::awake(ppu) can be called before ::sleep(ppu), therefore nop-ing out the notification (see the sketch after this list).
* Avoid inconsistencies with vm flags on page fault cause detection
* Fix the sys_mmapper_enable_page_fault_notification EBUSY check: from RE, it is allowed to register the same queue twice (on a different area), but not to enable page fault notifications twice.
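A minimal sketch of the awake-before-sleep race noted above, using a plain std::atomic flag instead of the actual ppu_thread/thread_ctrl machinery: the waiter re-checks the condition in a loop, so a notification that arrives before it goes to sleep is not lost:

```cpp
#include <atomic>

std::atomic<bool> recovered{false};

// Waiter (e.g. the faulting thread): do not rely on a single
// notification; re-check the condition around the wait.
void wait_for_recovery()
{
    while (!recovered.load())
    {
        // C++20 atomic wait: returns when the value differs from the
        // argument, so a flag already set to true never blocks here.
        recovered.wait(false);
    }
}

// Notifier (e.g. the recovery syscall): publish the state first, then
// notify, so "awake before sleep" cannot nop out the wakeup.
void signal_recovery()
{
    recovered.store(true);
    recovered.notify_one();
}
```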
TODO: From hw testing, it seems like sys_memory_get_page_attribute and sys_rsx_context_iomap check the page size a little differently:
get_page_attribute() always goes by the area flags, while sys_rsx_context_iomap checks pages by the mapping granularity.
This means that if the area page size is 64k but the shared memory is mapped with SYS_MEMORY_GRANULARITY_1M,
it can be mapped for rsxio, but the page attribute will indicate a 64k page size.
rsxio memory is verified to need 1m pages.
* Fix 0 vm page flags to behave like 1m flags, follows c8a681e60
* Check that the address exists and is valid for rsx io allocations (must be allocated on 1m pages).