path: root/tcg
2014-03-27  tcg-arm: Avoid ldrd/strd for user-only emulation  (Richard Henderson, 1 file, -4/+17)
The arm ldrd/strd insns must cause alignment traps, whereas at least for armv7 ldr/str must handle unaligned operations. While this is hardly the only problem facing user-only emu, this solves one problem for i386 on armv7 emulation. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reported-by: Huw Davies <huw@codeweavers.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
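A minimal illustration of the workaround described above (a hedged sketch, not QEMU's emitter code, which writes host instructions directly; the helper name and the little-endian assumption are mine): when the doubleword insn would trap on an unaligned address, the 64-bit access can instead be done as two 32-bit accesses, which ARMv7 ldr/str are required to handle.

    #include <stdint.h>
    #include <string.h>

    /* Hedged sketch: read a possibly unaligned 64-bit value as two 32-bit
     * loads.  With optimization, each memcpy typically becomes a single ldr,
     * which ARMv7 handles even when unaligned, unlike ldrd.  Assumes a
     * little-endian host, as in the i386-on-armv7 case mentioned above. */
    static uint64_t load_u64_split(const void *p)
    {
        uint32_t lo, hi;
        memcpy(&lo, p, sizeof(lo));                    /* low word:  one ldr */
        memcpy(&hi, (const char *)p + 4, sizeof(hi));  /* high word: one ldr */
        return ((uint64_t)hi << 32) | lo;
    }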
2014-03-17  tcg-sparc: Convert to new ldst opcodes  (Richard Henderson, 2 files, -100/+53)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Convert to new ldst helpers  (Richard Henderson, 1 file, -59/+131)
All of the helpers with the explicit big/little endian option require the return address as a parameter. Acquire this via a trampoline. Move the load of areg0 into the trampoline. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Tidy tcg_out_tlb_load interface  (Richard Henderson, 1 file, -40/+30)
Pass address registers explicitly, rather than as indices of args[]. It's two argument registers either way. Use more TCGReg as appropriate. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Use TCGMemOp within qemu_ldst routines  (Richard Henderson, 1 file, -51/+65)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Improve tcg_out_movi  (Richard Henderson, 1 file, -21/+31)
If bits 31:13 are zero, reduce the insn count by one. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Don't handle constant arguments to ext32 ops  (Richard Henderson, 1 file, -12/+4)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Don't handle remainder  (Richard Henderson, 2 files, -23/+2)
The generic fallback is exactly what we implemented. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Use intptr_t as appropriate  (Richard Henderson, 1 file, -11/+9)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Tidy call+jump patterns  (Richard Henderson, 1 file, -19/+19)
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Fix tlb read  (Richard Henderson, 1 file, -21/+15)
We were computing the full address into %o0 and then not using it. Adjust some of the computation to rely less on having to pull immediate values into registers. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-17  tcg-sparc: Fix ld64 for 32-bit mode  (Richard Henderson, 1 file, -0/+1)
Since we're not using an annulled branch, we need to put a nop in the delay slot. Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-14  tcg-aarch64: Introduce tcg_out_insn_3405  (Richard Henderson, 1 file, -21/+27)
Cleaning up the implementation of tcg_out_movi at the same time. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support div, rem  (Richard Henderson, 2 files, -13/+45)
Clean up multiply at the same time. For remainder, generic code will produce mul+sub, whereas we can implement with msub. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
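For reference, the remainder identity being exploited here, as a hedged C sketch (the function name is mine): the quotient/multiply-subtract pair is what AArch64's sdiv plus msub compute, so the generic mul+sub fallback collapses into one instruction.

    #include <stdint.h>

    /* rem = a - (a / b) * b.  On AArch64 this maps to sdiv followed by
     * msub, instead of the generic mul + sub sequence. */
    static int64_t srem_via_msub(int64_t a, int64_t b)
    {
        int64_t q = a / b;      /* sdiv q, a, b     */
        return a - q * b;       /* msub r, q, b, a  */
    }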
2014-03-14  tcg-aarch64: Support muluh, mulsh  (Richard Henderson, 2 files, -2/+14)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support add2, sub2  (Richard Henderson, 2 files, -4/+80)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support deposit  (Richard Henderson, 2 files, -21/+49)
Also tidy the implementation of ubfm, sbfm, extr in order to share code. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
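As a reminder of what the new opcode computes, here is a hedged C sketch of deposit's semantics (the helper name is illustrative, not QEMU's): a field of len bits from one operand is inserted into the other at a given bit position.

    #include <stdint.h>

    /* Hedged sketch of deposit: insert the low `len` bits of `val` into
     * `base` at bit position `pos`, leaving all other bits of `base`
     * unchanged. */
    static uint64_t deposit_sketch(uint64_t base, unsigned pos, unsigned len, uint64_t val)
    {
        uint64_t mask = len < 64 ? (1ULL << len) - 1 : ~0ULL;
        return (base & ~(mask << pos)) | ((val & mask) << pos);
    }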
2014-03-14  tcg-aarch64: Use tcg_out_insn for setcond  (Richard Henderson, 1 file, -9/+3)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support movcond  (Richard Henderson, 2 files, -2/+36)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Support andc, orc, eqv, not, neg  (Richard Henderson, 2 files, -10/+67)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Handle constant operands to and, or, xor  (Richard Henderson, 1 file, -49/+107)
Handle a simplified set of logical immediates for the moment. The way gcc and binutils do it, with 52k worth of tables, and a binary search depth of log2(5334) = 13, seems slow for the most common cases. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
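A hedged sketch of the kind of "simplified set" check implied above (not the routine QEMU uses): accepting only constants that are a single contiguous run of set bits already covers many masks produced by guest code, without the full pattern/rotation search that gcc and binutils perform. Any such run is encodable as an AArch64 logical immediate; all-zeros and all-ones never are.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hedged sketch: accept a 64-bit constant only if it is exactly one
     * contiguous run of set bits.  Uses the GCC/Clang ctz builtin. */
    static bool is_simple_logical_imm(uint64_t v)
    {
        if (v == 0 || ~v == 0) {
            return false;                 /* never encodable */
        }
        v >>= __builtin_ctzll(v);         /* drop trailing zeros */
        return (v & (v + 1)) == 0;        /* what remains is a low-order run */
    }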
2014-03-14  tcg-aarch64: Handle constant operands to add, sub, and compare  (Richard Henderson, 1 file, -22/+78)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Implement mov with tcg_out_insn  (Richard Henderson, 1 file, -15/+9)
Avoid the magic numbers in the current implementation. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Introduce tcg_out_insn_3401  (Richard Henderson, 1 file, -46/+26)
This merges the implementation of tcg_out_addi and tcg_out_subi. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Convert shift insns to tcg_out_insn  (Richard Henderson, 1 file, -31/+21)
Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-14  tcg-aarch64: Introduce tcg_out_insn  (Richard Henderson, 1 file, -36/+58)
Converting the add/sub (3.5.2) and logical shifted (3.5.10) instruction groups to the new scheme. Signed-off-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Tested-by: Claudio Fontana <claudio.fontana@huawei.com>
2014-03-08  tcg-aarch64: Remove nop from qemu_st slow path  (Richard Henderson, 1 file, -7/+0)
Commit 023261ef851b22a04f6c5d76da870051031757a6 failed to remove a nop that's no longer required. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Simplify tcg_out_ldst_9 encoding  (Richard Henderson, 1 file, -12/+2)
At first glance the code appears to be using 1's complement encoding, a la AArch32. Except that the constant is "off", creating a complicated split-field 2's complement encoding. Much clearer to just use a normal mask and shift. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
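The "normal mask and shift" point can be shown with a hedged one-liner (helper name mine; field position taken from the AArch64 load/store "unscaled immediate" layout): the signed 9-bit offset is simply masked and placed in its bit field, and the mask itself produces the right two's-complement bits for negative offsets.

    #include <stdint.h>

    /* Hedged sketch: place a signed 9-bit offset into bits [20:12] of an
     * AArch64 load/store (unscaled immediate) encoding.  No split-field
     * trickery is needed. */
    static uint32_t encode_imm9(int32_t offset)
    {
        return ((uint32_t)offset & 0x1ff) << 12;
    }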
2014-03-08  tcg-aarch64: Use intptr_t appropriately  (Richard Henderson, 1 file, -28/+21)
As opposed to tcg_target_long. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Remove the shift_imm parameter from tcg_out_cmp  (Richard Henderson, 1 file, -6/+5)
It was unused. Let's not overcomplicate things before we need them. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Hoist common argument loads in tcg_out_op  (Richard Henderson, 1 file, -45/+50)
This reduces the code size of the function significantly. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Don't handle mov/movi in tcg_out_op  (Richard Henderson, 1 file, -13/+7)
Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Set ext based on TCG_OPF_64BIT  (Richard Henderson, 1 file, -21/+7)
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Change all ext variables to TCGType  (Richard Henderson, 1 file, -27/+37)
We assert that the values for _I32 and _I64 are 0 and 1 respectively. This will make a couple of functions declared by tcg.c cleaner. Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-08  tcg-aarch64: Remove redundant CPU_TLB_ENTRY_BITS check  (Richard Henderson, 1 file, -6/+0)
Removed from other targets in 56bbc2f967ce185fa1c5c39e1aeb5b68b26242e9. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Claudio Fontana <claudio.fontana@huawei.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-03-02  tcg: Fix typo in comment (dependancies -> dependencies)  (Stefan Weil, 1 file, -1/+1)
Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2014-02-21  tcg/i386: Fix build for systems without working cpuid.h (MacOSX, Win32)  (Peter Maydell, 1 file, -1/+3)
Win32 doesn't have a cpuid.h, and MacOSX may have one but without the __cpuid() function we use, which means that commit 9d2eec20 broke the build for those platforms. Fix this by tightening up our configure cpuid.h check to test that the functions we need are present, and adding some missing #ifdef guards in tcg/i386/tcg-target.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net>
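A hedged sketch of the sort of configure probe described here (not necessarily the exact test QEMU added): compile a program that actually calls the cpuid.h helpers, so a header that exists but lacks them fails the check instead of breaking the build later.

    /* Hedged configure-style probe: succeeds only if <cpuid.h> provides
     * the helpers we actually use, not merely if the header exists. */
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        unsigned int max = __get_cpuid_max(0, 0);
        __cpuid(1, eax, ebx, ecx, edx);
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        return (int)(max + eax + ebx + ecx + edx);
    }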
2014-02-17  tcg/i386: Use SHLX/SHRX/SARX instructions  (Richard Henderson, 1 file, -11/+50)
These three-operand shift instructions do not require the shift count to be placed into ECX. This reduces the number of mov insns required, with the mere addition of a new register constraint. Don't attempt to get rid of the matching constraint, as that's impossible to manipulate with just a new constraint. In addition, constant shifts still need the matching constraint. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
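To illustrate the point about the count register (a hedged inline-assembly sketch, assuming a BMI2-capable CPU and assembler; this is not QEMU's generated code): with legacy shl the variable count must sit in CL, whereas shlx takes the count from any general register, so no mov into ECX is needed.

    #include <stdint.h>

    /* Hedged sketch: three-operand BMI2 shift.  The count operand (%2)
     * may live in any general-purpose register. */
    static uint64_t shl_bmi2(uint64_t value, uint64_t count)
    {
        uint64_t result;
        __asm__("shlxq %2, %1, %0" : "=r"(result) : "r"(value), "r"(count));
        return result;
    }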
2014-02-17  tcg/i386: Use ANDN instruction  (Richard Henderson, 2 files, -13/+45)
Note that the optimizer cannot simplify ANDC X,Y,C to AND X,Y,~C so we must handle constants in the implementation of andc. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
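The identity involved, as a hedged sketch (helper name mine): andc is AND with the complement of the second operand, so when that operand is a constant the backend itself must fold the complement, since, as noted above, the optimizer will not rewrite it into a plain and.

    #include <stdint.h>

    /* andc(x, y) == x & ~y.  With y a compile-time constant the complement
     * folds away into a plain AND with ~y. */
    static uint64_t andc64(uint64_t x, uint64_t y)
    {
        return x & ~y;
    }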
2014-02-17  tcg/i386: Add tcg_out_vex_modrm  (Richard Henderson, 1 file, -3/+38)
Prepare for emitting BMI insns which require VEX encoding. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/i386: Move TCG_CT_CONST_* to tcg-target.c  (Richard Henderson, 2 files, -3/+4)
These are not needed by users of tcg-target.h. No need to recompile when we adjust them. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: Add more identity simplifications  (Richard Henderson, 1 file, -15/+24)
Recognize 0 operand to andc, and -1 operands to and, orc, eqv. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
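Spelled out as hedged C one-liners (helper names mine), these are the identities being recognized; each call collapses to returning x unchanged, so the op can be folded to a move.

    #include <stdint.h>

    /* Identity operands that reduce each op to a plain move of x: */
    static uint64_t andc_zero(uint64_t x) { return x & ~UINT64_C(0);  } /* andc x, 0  -> x */
    static uint64_t and_m1(uint64_t x)    { return x & UINT64_MAX;    } /* and  x, -1 -> x */
    static uint64_t orc_m1(uint64_t x)    { return x | ~UINT64_MAX;   } /* orc  x, -1 -> x */
    static uint64_t eqv_m1(uint64_t x)    { return ~(x ^ UINT64_MAX); } /* eqv  x, -1 -> x */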
2014-02-17  tcg/optimize: Optimize ANDC X,Y,Y to MOV X,0  (Richard Henderson, 1 file, -0/+1)
Like we already do for SUB and XOR. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: Simplify some logical ops to NOT  (Richard Henderson, 1 file, -0/+57)
Given, of course, an appropriate constant. These could be generated from the "canonical" operation for inversion on the guest, or via other optimizations. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
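For concreteness, a hedged sketch of constant patterns of this kind (helper names mine; the commit text above does not enumerate exactly which patterns it matches): each of these reduces to a plain bitwise NOT of x.

    #include <stdint.h>

    /* Constant operands that turn each op into ~x: */
    static uint64_t xor_m1(uint64_t x)  { return x ^ UINT64_MAX;    } /* xor  x, -1 -> not x */
    static uint64_t eqv_0(uint64_t x)   { return ~(x ^ 0);          } /* eqv  x, 0  -> not x */
    static uint64_t nand_m1(uint64_t x) { return ~(x & UINT64_MAX); } /* nand x, -1 -> not x */
    static uint64_t nor_0(uint64_t x)   { return ~(x | 0);          } /* nor  x, 0  -> not x */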
2014-02-17  tcg/optimize: Handle known-zeros masks for ANDC  (Richard Henderson, 1 file, -0/+11)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: add known-zero bits computation for load ops  (Aurelien Jarno, 1 file, -1/+25)
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: improve known-zero bits for 32-bit ops  (Aurelien Jarno, 1 file, -0/+6)
The shl_i32 op might set some of the unused high 32 bits of the mask. Fix that by clearing the unused high 32 bits for all 32-bit ops except load/store, which operate on tl values. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
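A hedged sketch of the fix's effect, using the convention that the mask holds the bits that may possibly be set in the result (names are illustrative and may not match the optimizer's internal representation exactly): a 32-bit shift computed in 64-bit arithmetic can leak ones above bit 31, so the mask is truncated afterwards.

    #include <stdint.h>

    /* Possible-bits mask after a 32-bit left shift by a known constant.
     * Computing the shift in 64 bits can set bits above bit 31, so clamp
     * the result back to 32 bits, as the fix above does for 32-bit ops. */
    static uint64_t mask_after_shl_i32(uint64_t possible_bits, unsigned shift)
    {
        return (uint32_t)(possible_bits << shift);
    }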
2014-02-17  tcg/optimize: fix known-zero bits optimization  (Aurelien Jarno, 1 file, -1/+7)
Known-zero bits optimization is a great idea that helps to generate more optimized code. However, the current implementation only works in very few cases, as the computed mask is not saved. Fix this so that it really works. Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-02-17  tcg/optimize: fix known-zero bits for right shift ops  (Aurelien Jarno, 1 file, -5/+14)
The 32-bit versions of the sar and shr ops should not propagate known-zero bits from the unused high 32 bits. For sar it could even lead to wrong code being generated. Cc: qemu-stable@nongnu.org Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Richard Henderson <rth@twiddle.net>
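A hedged sketch of the corrected propagation, using the same possible-bits convention as the sketch above (names mine): for the 32-bit variants the operand mask is truncated to 32 bits first, and sar then replicates bit 31 instead of pulling anything in from the upper half.

    #include <stdint.h>

    /* Possible-bits masks after 32-bit right shifts by a constant c.
     * Truncate to 32 bits first so nothing leaks in from bits 63..32. */
    static uint64_t mask_after_shr_i32(uint64_t possible, unsigned c)
    {
        return (uint32_t)possible >> c;                       /* logical shift */
    }

    static uint64_t mask_after_sar_i32(uint64_t possible, unsigned c)
    {
        /* Arithmetic shift replicates bit 31 of the truncated mask. */
        return (uint32_t)((int32_t)(uint32_t)possible >> c);
    }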
2014-02-17  tcg-arm: The shift count of op_rotl_i32 is in args[2] not args[1]  (Huw Davies, 1 file, -1/+1)
It's this that should be subtracted from 0x20 when converting to a right rotate. Cc: qemu-stable@nongnu.org Signed-off-by: Huw Davies <huw@codeweavers.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
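The underlying identity, as a hedged sketch (helper names mine): a left rotate by n is a right rotate by 32 - n, which is why subtracting the wrong argument from 0x20 produced incorrect rotations.

    #include <stdint.h>

    /* rotl(x, n) == rotr(x, 32 - n) for 32-bit values (counts mod 32). */
    static uint32_t rotr32(uint32_t x, unsigned n)
    {
        n &= 31;
        return n ? (x >> n) | (x << (32 - n)) : x;
    }

    static uint32_t rotl32(uint32_t x, unsigned n)
    {
        return rotr32(x, 32 - (n & 31));
    }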