Commit Graph

2844 Commits

Author SHA1 Message Date
Yang Liu
6142db8647 backend/rv64: Implement GetCFlagFromNZCV 2024-03-02 19:38:46 +00:00
Yang Liu
483dcba9b6 backend/rv64: Implement basic LogicalShiftRight32 2024-03-02 19:38:46 +00:00
Yang Liu
02d8a7ff10 backend/rv64: Stub all IR instruction implementations 2024-03-02 19:38:46 +00:00
Yang Liu
208acb3026 backend/rv64: Implement A32SetCpsrNZCV 2024-03-02 19:38:46 +00:00
Yang Liu
09c6f22da9 backend/rv64: Implement GetNZCVFromOp 2024-03-02 19:38:46 +00:00
Yang Liu
f6e02048f5 backend/rv64: Implement basic Sub32 2024-03-02 19:38:46 +00:00
Yang Liu
b485553ed8 backend/rv64: Implement Identity 2024-03-02 19:38:46 +00:00
Yang Liu
1de237bf24 backend/rv64: Initial implementation of terminals 2024-03-02 19:38:46 +00:00
Yang Liu
672d43fbb7 backend/rv64: Add StackLayout to stack 2024-03-02 19:38:46 +00:00
Yang Liu
3ff8b9d346 backend/rv64: Implement UpdateAllUses 2024-03-02 19:38:46 +00:00
Yang Liu
cc2a6fd6fb backend/rv64: Implement AssertNoMoreUses and some minor tweaks 2024-03-02 19:38:46 +00:00
Yang Liu
b7cca7c53d backend/rv64: Use biscuit LI() 2024-03-02 19:38:46 +00:00
Yang Liu
f856ac9f33 backend/rv64: Add minimal toy implementation enough to execute LSLS 2024-03-02 19:38:46 +00:00
Yang Liu
62ff78d527 backend/rv64: Initial implementation of register allocator 2024-03-02 19:38:46 +00:00
Yang Liu
c47dacb1de backend/rv64: Adjust how relocations are stored 2024-03-02 19:38:46 +00:00
Yang Liu
c90c4d48d2 backend/rv64: Rework pointer types 2024-03-02 19:38:46 +00:00
Yang Liu
d743fe8a2a backend/rv64: Add dummy code generation 2024-03-02 19:38:46 +00:00
Yang Liu
4324b262aa backend/rv64: Add biscuit as the assembler 2024-03-02 19:38:46 +00:00
Yang Liu
a4b9b431b0 backend/rv64: Add initial RISC-V framework
The RISC-V target is now compilable.
2024-03-02 19:38:46 +00:00
Ash
732a657694 Change Config to make fastmem_pointer of zero valid.
This changes Dynarmic::A32/A64::Config to store fastmem_pointer in
a std::optional<uintptr_t>, allowing the user to pass a zero base
address for the guest memory, which can be used to effectively
implement a shared address space between the host and the guest.
2024-03-02 16:31:20 +00:00
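The change described above can be sketched as follows. This is a minimal illustrative model, not dynarmic's actual `Config` definition: the point is that an *unset* optional disables fastmem, while any *contained* value, including zero, is a valid guest-memory base.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Hypothetical sketch of the config field described above: an unset optional
// disables fastmem entirely, while any contained value -- including zero --
// is a valid base address for the guest memory region.
struct Config {
    std::optional<std::uintptr_t> fastmem_pointer;
};

// Fastmem can be used whenever a base address is present.
bool FastmemEnabled(const Config& cfg) {
    return cfg.fastmem_pointer.has_value();
}

// Translates a guest address to a host address under the configured base.
// With a zero base, guest and host addresses coincide -- the shared address
// space mentioned in the commit message.
std::uintptr_t HostAddress(const Config& cfg, std::uintptr_t guest_addr) {
    return *cfg.fastmem_pointer + guest_addr;
}
```

With a raw `uintptr_t` field, zero had to double as the "disabled" sentinel; moving to `std::optional` separates "no fastmem" from "fastmem based at address 0".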
zmt00
f884bc0dfc emit_x64_vector: Implement AVX2 AVShift64 2024-02-24 17:08:27 +00:00
zmt00
879142d424 emit_x64_vector: Refactor AVX2 AVShift32, LVShift{32,64} 2024-02-24 17:08:27 +00:00
zmt00
2c0dc88715 emit_x64_vector: Implement AVX2 UnsignedRoundingShiftLeft{32,64} 2024-02-20 14:16:15 +00:00
zmt00
4f08226e0e emit_x64_vector: Refactor pre-SSE4.1 min/max instruction replacements 2024-02-17 13:17:01 +00:00
zmt00
0adc972cd9 emit_x64_vector: Optimize VectorSignedSaturatedAbs 2024-02-13 18:46:42 +00:00
Merry
69dc836977 backend/arm64: A64: Implement DumpDisassembly 2024-02-13 02:21:22 +00:00
Merry
4ae4750b5a emit_arm64_a64: Take into account currently loaded FPSR
Previously, when the guest asked for the current FPSR, we simply returned the last stored FPSR.
This was incorrect behaviour: it failed to take into account the current state of the host FPSR.

Here we take this into account. This bug was discovered via #795.
2024-02-13 02:19:55 +00:00
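The idea behind the fix can be sketched like this. This is an illustrative model only (the function name and the host-read plumbing are hypothetical, not dynarmic's code); the real bit positions of the FPSR cumulative flags (IOC, DZC, OFC, UFC, IXC, IDC, QC) are taken from the Arm architecture.

```cpp
#include <cassert>
#include <cstdint>

// FPSR cumulative exception flags: IOC(0), DZC(1), OFC(2), UFC(3), IXC(4),
// IDC(7) and QC(27).  These are sticky bits.
constexpr std::uint32_t kCumulativeFlagsMask = 0x0800009Fu;

// Sketch of the corrected behaviour: the guest-visible FPSR is the last
// value the JIT stored, merged with whatever sticky flags have accumulated
// in the host FPSR since that store -- OR'd in, not overwritten.
std::uint32_t CurrentGuestFpsr(std::uint32_t stored_fpsr, std::uint32_t host_fpsr) {
    return stored_fpsr | (host_fpsr & kCumulativeFlagsMask);
}
```

The bug was returning `stored_fpsr` alone, dropping any flags the host had raised since the last store.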
Merry
ee84ec0bb9 backend/x64: Reduce races on invalidation requests in interface
This situation occurs when RequestCacheInvalidation is called from
multiple threads. This results in unusual issues around memory
allocation which arise from concurrent access to invalid_cache_ranges.

There are several reasons for this:
1. No locking around the invalidation queue.
2. is_executing is not thread-safe.

So here we reduce every cache clear or invalidation request to raising a
CacheInvalidation halt, and perform the actual invalidation immediately
before or immediately after Run() instead.
2024-02-10 19:31:07 +00:00
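The pattern described in this commit can be sketched as below. This is a hypothetical simplification (class and member names are illustrative, not dynarmic's interface): cross-thread callers only enqueue a range under a lock and raise a halt reason, and the execution thread drains the queue around `Run()`, so `invalid_cache_ranges` is never touched concurrently.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

// A guest address range pending invalidation.
struct Range { std::uint64_t start, length; };

class Jit {
public:
    // Safe to call from any thread: enqueue under the lock and raise a
    // CacheInvalidation halt; no cache state is touched here.
    void InvalidateRange(Range r) {
        std::lock_guard lock{mutex_};
        pending_.push_back(r);
        halt_requested_ = true;
    }

    // Execution thread only.
    void Run() {
        DrainInvalidations();  // immediately before executing
        // ... execute translated code until the next halt ...
        DrainInvalidations();  // immediately after a CacheInvalidation halt
    }

    std::size_t drained = 0;  // exposed for illustration only

private:
    void DrainInvalidations() {
        std::vector<Range> batch;
        {
            std::lock_guard lock{mutex_};
            batch.swap(pending_);
            halt_requested_ = false;
        }
        // Stand-in for actually invalidating the affected code blocks,
        // which now happens on exactly one thread.
        drained += batch.size();
    }

    std::mutex mutex_;
    std::vector<Range> pending_;
    bool halt_requested_ = false;
};
```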
Wunkolo
18717d216c emit_x64_vector: AVX512+GNFI implementation of EmitVectorLogicalVShift8 2024-02-10 11:38:17 +00:00
zmt00
0785a6d027 ir: Implement FPMulSub 2024-02-10 11:31:54 +00:00
Wunkolo
eb5eb9cdf7 emit_x64_vector: GNFI implementation of EmitVectorCountLeadingZeros8 2024-02-06 18:15:34 +00:00
Merry
75235ffedb emit_x64_data_processing: Exclude edge case from lea path in EmitSub
-0xffff'ffff'8000'0000 = 0x0000'0000'8000'0000 which is not a representable displacement
2024-01-31 01:41:25 +00:00
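The edge case is a plain range check: `sub dst, imm` can be lowered to `lea dst, [src - imm]`, but the x64 disp32 field is a sign-extended 32-bit value. The immediate 0xffff'ffff'8000'0000 is exactly INT32_MIN, so it fits; its negation 0x0000'0000'8000'0000 exceeds INT32_MAX and cannot be encoded. A small check (the helper name is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// An x64 lea displacement is a 32-bit value sign-extended to 64 bits, so a
// candidate displacement is encodable only within the int32 range.
bool FitsInDisp32(std::int64_t value) {
    return value >= INT32_MIN && value <= INT32_MAX;
}
```

The lea path for `sub` negates the immediate, which is why an immediate that itself fits can still be unencodable.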
Merry
24bf921ff9 constant_propagation_pass: x + 0 == x 2024-01-30 23:10:23 +00:00
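The identity this pass applies can be shown with a toy fold (types and names here are illustrative, not dynarmic's IR): an Add whose either operand is the constant 0 is replaced by the other operand.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Toy IR value: an id plus an optional known-constant payload.
struct Value {
    int id;
    std::optional<std::uint64_t> constant;
};

// Returns the value an Add node folds to, if either operand is zero.
std::optional<Value> FoldAdd(const Value& lhs, const Value& rhs) {
    if (rhs.constant == 0) return lhs;  // x + 0 == x
    if (lhs.constant == 0) return rhs;  // 0 + x == x
    return std::nullopt;                // nothing to fold
}
```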
Merry
ca2cc2c4ba emit_x64_data_processing: Emit lea where possible in EmitAdd and EmitSub 2024-01-30 22:59:41 +00:00
Merry
30f1a3c628 Avoid emplace. 2024-01-30 17:32:50 +00:00
Merry
85177518d7 emit_x64_vector: Improve AVX512 implementation of EmitVectorTableLookup128 2024-01-30 00:29:12 +00:00
Merry
0f20181a45 emit_x64_vector: Fix AVX-512 implementation of EmitVectorTableLookup64 2024-01-30 00:29:12 +00:00
Merry
2ee3eacd01 emit_x64_crc32: Correct use of x64 crc32 instruction
A CRC32 r32, r/m64 variant does not exist, but CRC32 r64, r/m64 does what we want.
2024-01-29 22:42:17 +00:00
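For context on what the instruction computes: x64 `crc32` implements CRC-32C (Castagnoli, reflected polynomial 0x82F63B78). A bitwise software model of the one-byte step (`crc32 r32, r/m8`) is sketched below; the wider forms, including the r64 one used here, feed more bytes through the same 32-bit accumulator, and the upper 32 bits of a 64-bit destination are simply zeroed, which is why the r64, r/m64 form suffices.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// One-byte CRC-32C step, matching the hardware crc32 byte form: xor in the
// byte, then shift out eight bits against the reflected polynomial.
std::uint32_t Crc32cByte(std::uint32_t crc, std::uint8_t byte) {
    crc ^= byte;
    for (int i = 0; i < 8; ++i)
        crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
    return crc;
}

// Standard CRC-32C checksum: initial value and final xor of 0xFFFFFFFF.
std::uint32_t Crc32c(const char* data, std::size_t len) {
    std::uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i)
        crc = Crc32cByte(crc, static_cast<std::uint8_t>(data[i]));
    return crc ^ 0xFFFFFFFFu;
}
```

The well-known CRC-32C check value for the input "123456789" is 0xE3069283, which this model reproduces.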
zmt00
314ab7a462 emit_x64_vector: Implement PairedMinMax{Lower}8 2024-01-28 18:56:42 +00:00
Merry
ac9003fb78 externals: Update oaknut to 2.0.1
Merge commit 'a37f3673f8ca59a0c7046616247db1c6bc00e131'
2024-01-28 17:02:58 +00:00
Merry
bbc058c76b backend/arm64: Update for oaknut 2.0.0.
Also respect DYNARMIC_ENABLE_NO_EXECUTE_SUPPORT.
2024-01-28 16:19:33 +00:00
Merry
05f38d1989 A32: Implement VCVT{A,N,P,M} (ASIMD) 2024-01-28 11:21:08 +00:00
Merry
c9fcb695a4 A32: Correct function naming convention for VRINT{N,X,A,Z,M,P} (ASIMD) 2024-01-28 11:10:58 +00:00
Merry
c67f38b57e backend/arm64: FPVectorRoundInt{32,64}: FPCR comparisons should be made with fpcr_controlled when under scope of MaybeStandardFPSCRValue 2024-01-28 10:55:59 +00:00
Merry
f8e38809e9 A32: Implement VRINT{N,X,A,Z,M,P} (ASIMD) 2024-01-28 10:19:15 +00:00
Steveice10
8398d7ef7e arm64: Fix compiling under MSYS2 CLANGARM64. 2024-01-27 08:54:07 +00:00
Wunkolo
00c6c00e86 Refactor Xmm{B}Const to {,B}Const 2024-01-23 19:24:56 +00:00
Wunkolo
917335ae8a block_of_code: Add XmmBConst
This is a redo of https://github.com/merryhime/dynarmic/pull/690 with a
much smaller footprint, introducing the new pattern while avoiding the
initial bugs
(5d9b720189)

**B**roadcasts a value as an **Xmm**-sized **Const**ant. Intended to
eventually encourage more hits within the constant-pool between vector
and non-vector code.
2024-01-23 19:24:56 +00:00
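The broadcast behaviour described above can be modelled as below. This is an illustrative sketch, not dynarmic's emitter code: replicate a scalar of any size that evenly divides 16 bytes across an Xmm-sized constant, so scalar and vector code asking for the "same" value can hit the same constant-pool entry.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Replicate `value` across a 16-byte (Xmm-sized) constant.  A uint8_t fills
// all 16 lanes, a uint32_t fills 4 lanes, a uint64_t fills 2, and so on.
template <typename T>
std::array<std::uint8_t, 16> BConst(T value) {
    static_assert(16 % sizeof(T) == 0, "element size must evenly divide 16 bytes");
    std::array<std::uint8_t, 16> out{};
    for (std::size_t i = 0; i < out.size(); i += sizeof(T))
        std::memcpy(out.data() + i, &value, sizeof(T));
    return out;
}
```

Because broadcasting a repeated byte pattern at any width yields the same 16 bytes, more constants become shareable between vector and non-vector code.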
Wunkolo
b02292bec7 block_of_code: Rename MConst to XmmConst
`MConst` is refactored into `XmmConst` to clearly communicate the
addressable space of the newly allocated 16-byte memory constant.
2024-01-23 19:24:56 +00:00
zmt00
ba9009abd8 emit_x64_vector: Optimize VectorSignedAbsoluteDifference 2024-01-23 18:28:19 +00:00