- Avoid re-locking memory if there is no reason to do so (no draws issued)
- Actively bound regions should always get written to the backing cache
- Forcefully read memory during download if writes to the target have occurred since the last sync event
- Unroll main compute queue loop
- Do NOT run GPU cores on mappable memory! Host-visible memory is typically uncached or write-combined and often sits across the bus, so GPU-side access to it is dreadfully slow (see the sketch after this list)
- Enable dynamic SSBO indexing (affects AMD)
- Make loop unrolling and loop length variable depending on the hardware, and find the optimum
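A minimal Vulkan sketch of the mappable-memory point above, using the standard memory-type selection pattern; `find_memory_type` and `pick_compute_memory` are illustrative names, not the emulator's API:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Illustrative helper: select a memory type index that has all requested flags.
static uint32_t find_memory_type(const VkPhysicalDeviceMemoryProperties& props,
                                 uint32_t type_bits, VkMemoryPropertyFlags wanted)
{
    for (uint32_t i = 0; i < props.memoryTypeCount; ++i)
    {
        if ((type_bits & (1u << i)) &&
            (props.memoryTypes[i].propertyFlags & wanted) == wanted)
        {
            return i;
        }
    }
    return UINT32_MAX; // caller must handle "no suitable type"
}

// Compute work buffers get pure device-local memory; host-visible (mappable)
// memory is what the bullet above warns against for GPU-side access.
static uint32_t pick_compute_memory(const VkPhysicalDeviceMemoryProperties& props,
                                    const VkMemoryRequirements& reqs)
{
    return find_memory_type(props, reqs.memoryTypeBits,
                            VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);
}
```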
Build SPU cache after the PPU cache, fix mixed progress reporting
SPU ASMJIT: add support for Giga mode
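For context, Mega/Giga here (and in the SPU LLVM entries below) refer to the SPU block size setting. The enum below mirrors the user-facing options; the per-mode comments are my informal reading, not authoritative descriptions:

```cpp
// SPU block size modes as exposed in the settings (descriptions assumed).
enum class spu_block_size_type
{
    safe, // end blocks conservatively at every potential branch target
    mega, // merge blocks aggressively within a function
    giga, // follow control flow even further, across function boundaries
};
```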
SPU ASMJIT: use the same spu.log location as SPU LLVM
SPU: improve spu.log disasm
SPU: improve trampolines, unify with SPU ASMJIT
SPU: decode interrupt handler address from BR/BRA at 0x0
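A rough sketch of that decode, assuming the SPU RI16 instruction layout (9-bit opcode, 16-bit immediate, 7-bit RT); the function name is made up:

```cpp
#include <cstdint>

// Decode the interrupt handler entry from the first instruction word in
// local storage, if it is a BR (opcode 0x64) or BRA (opcode 0x60). Since
// PC is 0 here, both resolve to (I16 << 2) wrapped to 256 KiB of LS.
uint32_t decode_interrupt_entry(uint32_t instr0)
{
    const uint32_t op  = instr0 >> 23;            // 9-bit primary opcode
    const uint32_t i16 = (instr0 >> 7) & 0xffff;  // 16-bit word offset/address

    if (op == 0x64 || op == 0x60) // br (relative from 0) or bra (absolute)
    {
        return (i16 << 2) & 0x3fffc;
    }
    return 0; // no recognizable branch; keep the default entry at 0
}
```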
SPU LLVM: support Mega/Giga modes
SPU LLVM: implement function chunks
SPU LLVM: use PHI nodes for value visibility across basic blocks
SPU LLVM: implement function chunk table
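A conceptual sketch of a chunk table, under assumed names (not the actual implementation): one slot per possible instruction address, so compiled chunks can branch to each other without re-entering the dispatcher:

```cpp
#include <array>
#include <cstdint>

// Signature of a compiled chunk; the real thread-context argument is omitted.
using spu_chunk = void (*)();

// One entry per 4-byte instruction slot in 256 KiB of local storage.
static std::array<spu_chunk, 0x40000 / 4> chunk_table{};

// Resolve the chunk starting at ls_addr, or null if not compiled yet, in
// which case the caller would invoke the compiler/dispatcher (not shown).
spu_chunk resolve_chunk(uint32_t ls_addr)
{
    return chunk_table[(ls_addr & 0x3fffc) / 4];
}
```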
New simple memory manager for LLVM (bugfix)
- Region pitch of 64 (disabled) can be used to indicate packed contents - do not assume it is the actual pitch! (see the sketch after this list)
- Also fixes interaction of AA factors with lockable_region size
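A one-function sketch of that rule; the sentinel handling is the point, the names are assumptions:

```cpp
#include <cstdint>

// A stored pitch of 64 is the "pitch disabled" sentinel: the region is
// tightly packed, so the real pitch is derived from width and bytes-per-pixel.
uint32_t effective_pitch(uint32_t stored_pitch, uint32_t width, uint32_t bpp)
{
    if (stored_pitch == 64)
    {
        return width * bpp; // packed contents, not an actual 64-byte pitch
    }
    return stored_pitch;
}
```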
- Use named structs for overlay command config and vertex data instead of std::pair (see the sketch after this list).
- Make a couple of compiled_resource constructors explicitly named functions.
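Illustrative only (the member names are invented): why a named struct beats std::pair here:

```cpp
// Before: call sites read config.first / config.second, which say nothing.
// std::pair<int, int> config;

// After: small named structs make both the config and the vertex layout
// self-documenting at every use site.
struct command_config
{
    int texture_ref; // which overlay texture this command samples
    int blend_mode;  // how the element is composited
};

struct vertex_data
{
    float x, y; // screen-space position
    float u, v; // texture coordinates
};
```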
- Used to transfer D32S8 data where it makes sense to use this variant
- On NVIDIA cards, it is very slow to move aspects from D24S8, probably because the format is emulated.
For this reason, the unsafe variant is used for both D16 and D24S8 to avoid the heavy performance loss (see the sketch after this list)
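A sketch of the resulting selection logic, with the enum and function invented for illustration:

```cpp
// NVIDIA emulates D24S8, so per-aspect copies out of it are very slow; the
// unsafe (faster, less precise) variant therefore covers D16 and D24S8,
// while native D32S8 keeps the precise path.
enum class depth_format { d16, d24s8, d32s8 };

bool use_unsafe_transfer(depth_format fmt)
{
    switch (fmt)
    {
    case depth_format::d16:
    case depth_format::d24s8:
        return true;  // avoid the heavy aspect-copy path
    case depth_format::d32s8:
        return false; // 32-bit depth transfers are fine as-is
    }
    return false;
}
```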
- RADV does not keep a mapping pointer around for subsequent remaps and falls back to heavyweight amdgpu paths every time.
Explicitly manage the pointer in the ring buffer structure to fix this (see the sketch below)
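A minimal sketch of the fix, using only standard Vulkan calls; the struct is heavily simplified and the real ring buffer carries much more state:

```cpp
#include <vulkan/vulkan.h>

struct ring_buffer
{
    VkDevice device = VK_NULL_HANDLE;
    VkDeviceMemory memory = VK_NULL_HANDLE;
    void* mapped = nullptr; // persistent mapping, created once and kept

    // Map the whole allocation on first use and hand out offsets from it.
    // Repeated vkMapMemory/vkUnmapMemory cycles are what push RADV onto the
    // slow amdgpu paths, so no unmap happens during normal operation.
    void* map(VkDeviceSize offset)
    {
        if (!mapped)
        {
            vkMapMemory(device, memory, 0, VK_WHOLE_SIZE, 0, &mapped);
        }
        return static_cast<char*>(mapped) + offset;
    }

    void destroy()
    {
        if (mapped)
        {
            vkUnmapMemory(device, memory); // only unmapped at teardown
            mapped = nullptr;
        }
    }
};
```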
- Compute is now used to assist in some parts of blit operations, since Vulkan does not perform implicit format conversions during transfers the way OpenGL does (see the sketch below)
- TODO: Integrate this into all types of GPU memory conversion operations instead of downloading to CPU then converting
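A hedged sketch of how such a compute pass can slot in before the transfer, using only core Vulkan calls; pipeline and shader setup are omitted and all names are placeholders:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// Run a format-conversion compute pass over a buffer, then make its writes
// visible to the following transfer/blit.
void dispatch_conversion(VkCommandBuffer cmd, VkPipeline pipeline,
                         VkPipelineLayout layout, VkDescriptorSet set,
                         uint32_t word_count)
{
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout,
                            0, 1, &set, 0, nullptr);

    // One invocation per 32-bit word; a local size of 256 is assumed here
    // and must match the shader.
    vkCmdDispatch(cmd, (word_count + 255) / 256, 1, 1);

    // Compute writes -> transfer reads dependency.
    VkMemoryBarrier barrier{ VK_STRUCTURE_TYPE_MEMORY_BARRIER, nullptr,
                             VK_ACCESS_SHADER_WRITE_BIT,
                             VK_ACCESS_TRANSFER_READ_BIT };
    vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
                         1, &barrier, 0, nullptr, 0, nullptr);
}
```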