path: root/target/arm/translate.c
Commit history (most recent first): date, commit message, author, files changed, lines changed
2018-05-09  translator: merge max_insns into DisasContextBase  (Emilio G. Cota, 1 file changed, -6/+3)
While at it, use int for both num_insns and max_insns to make sure we have same-type comparisons. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Michael Clark <mjc@sifive.com> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
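A minimal sketch of the consolidated per-TB state (illustrative only; the real struct is QEMU's DisasContextBase in include/exec/translator.h and has more fields):

    /* Keeping both counters as plain int makes comparisons such as
     * "num_insns >= max_insns" same-typed, which is the point of the
     * cleanup above. */
    struct translation_budget_sketch {
        int num_insns;   /* instructions translated into this TB so far */
        int max_insns;   /* translation budget for this TB */
    };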
2018-05-09  target/arm: avoid integer overflow in next_page PC check  (Emilio G. Cota, 1 file changed, -6/+5)
If the PC is in the last page of the address space, next_page_start overflows to 0. Fix it. Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Cc: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
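A standalone illustration of the wrap-around described above (the variable names and the 4 KiB page size are illustrative, not the actual QEMU code): forming the absolute address of the next page overflows when the PC sits in the last page, whereas counting the bytes remaining in the current page cannot.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t pc = 0xfffffffc;          /* PC in the last page */
        uint32_t page_mask = 0xfffff000u;  /* 4 KiB pages */

        /* Naive "start of next page" wraps around to 0 ... */
        uint32_t next_page_start = (pc & page_mask) + 0x1000;
        /* ... while the bytes left in the current page cannot overflow. */
        uint32_t bytes_left = -(pc | page_mask);

        printf("next_page_start=0x%08" PRIx32 " bytes_left=%" PRIu32 "\n",
               next_page_start, bytes_left);   /* prints 0x00000000 and 4 */
        return 0;
    }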
2018-05-04  target/arm: Implement v8M VLLDM and VLSTM  (Peter Maydell, 1 file changed, -1/+16)
For v8M the instructions VLLDM and VLSTM support lazy saving and restoring of the secure floating-point registers. Even if the floating point extension is not implemented, these instructions must act as NOPs in Secure state, so they can be used as part of the secure-to-nonsecure call sequence. Fixes: https://bugs.launchpad.net/qemu/+bug/1768295 Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180503105730.5958-1-peter.maydell@linaro.org
2018-04-26  target/arm: Allow EL change hooks to do IO  (Aaron Lindsay, 1 file changed, -0/+12)
During code generation, surround CPSR writes and exception returns which call the EL change hooks with gen_io_start/end. The immediate need is for the PMU to access the clock and icount during EL change to support mode filtering. Signed-off-by: Aaron Lindsay <alindsay@codeaurora.org> Message-id: 1523997485-1905-9-git-send-email-alindsay@codeaurora.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
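A fragment-level sketch of the bracketing pattern described above (the guard condition and the specific helper call are assumptions for illustration; gen_io_start()/gen_io_end() are the existing icount hooks):

    /* Let the helper that runs the EL change hooks perform "I/O"
     * (clock/icount accesses) by bracketing it when icount is in use. */
    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
        gen_io_start();
    }
    gen_helper_cpsr_write_eret(cpu_env, tmp);   /* triggers the hooks */
    if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
        gen_io_end();
    }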
2018-04-10  target-arm: Check undefined opcodes for SWP in A32 decoder  (Onur Sahin, 1 file changed, -2/+7)
Make sure we are not treating architecturally Undefined instructions as a SWP, by verifying the opcodes as per section A8.8.229 of ARMv7-A specification. Bits [21:20] must be zero for this to be a SWP or SWPB. We also choose to UNDEF for the architecturally UNPREDICTABLE case of bits [11:8] not being zero. Signed-off-by: Onur Sahin <onursahin08@gmail.com> [PMM: tweaked commit message] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
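A self-contained sketch of the decode check (not the actual QEMU decoder; the bit positions are the ones named above):

    #include <stdbool.h>
    #include <stdint.h>

    /* True if an A32 instruction word passes the SWP/SWPB checks:
     * bits [21:20] must be zero, and we also treat nonzero bits [11:8]
     * (architecturally UNPREDICTABLE) as UNDEF. */
    static bool swp_encoding_valid(uint32_t insn)
    {
        uint32_t op  = (insn >> 20) & 0x3;  /* bits [21:20] */
        uint32_t sbz = (insn >> 8) & 0xf;   /* bits [11:8]  */
        return op == 0 && sbz == 0;
    }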
2018-03-23  target/arm: Honour MDCR_EL2.TDE when routing exceptions due to BKPT/BRK  (Peter Maydell, 1 file changed, -5/+14)
The MDCR_EL2.TDE bit allows the exception level targeted by debug exceptions to be set to EL2 for code executing at EL0. We handle this in the arm_debug_target_el() function, but this is only used for hardware breakpoint and watchpoint exceptions, not for the exception generated when the guest executes an AArch32 BKPT or AArch64 BRK instruction. We don't have enough information for a translate-time equivalent of arm_debug_target_el(), so instead make BKPT and BRK call a special purpose helper which can do the routing, rather than the generic exception_with_syndrome helper. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180320134114.30418-2-peter.maydell@linaro.org
2018-03-02  target/arm: Decode t32 simd 3reg and 2reg_scalar extension  (Richard Henderson, 1 file changed, -1/+13)
Happily, the bits are in the same places compared to a32. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180228193125.20577-16-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02  target/arm: Decode aa32 armv8.3 2-reg-index  (Richard Henderson, 1 file changed, -0/+61)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180228193125.20577-15-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02  target/arm: Decode aa32 armv8.3 3-same  (Richard Henderson, 1 file changed, -0/+68)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180228193125.20577-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02  target/arm: Decode aa32 armv8.1 two reg and a scalar  (Richard Henderson, 1 file changed, -2/+40)
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180228193125.20577-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-02  target/arm: Decode aa32 armv8.1 three same  (Richard Henderson, 1 file changed, -19/+67)
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180228193125.20577-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-03-01  target/arm/helper: pass explicit fpst to set_rmode  (Alex Bennée, 1 file changed, -6/+6)
As the rounding mode is now split between FP16 and the rest of floating point we need to be explicit when tweaking it. Instead of passing the CPU env we now pass the appropriate fpst pointer directly. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180227143852.11175-6-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-02-09  target/arm/translate.c: Fix missing 'break' for TT insns  (Peter Maydell, 1 file changed, -0/+1)
The code where we added the TT instruction was accidentally missing a 'break', which meant that after generating the code to execute the TT we would fall through to 'goto illegal_op' and generate code to take an UNDEF insn. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180206103941.13985-1-peter.maydell@linaro.org
2018-02-09  target/arm: Expand vector registers for SVE  (Richard Henderson, 1 file changed, -4/+3)
Change vfp.regs as a uint64_t to vfp.zregs as an ARMVectorReg. The previous patches have made the change in representation relatively painless. Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20180123035349.24538-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25  target/arm: Add aa{32,64}_vfp_{dreg,qreg} helpers  (Richard Henderson, 1 file changed, -7/+9)
Helpers that return a pointer into env->vfp.regs so that we isolate the logic of how to index the regs array for different cpu modes. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180119045438.28582-7-richard.henderson@linaro.org Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
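A standalone sketch of the idea (the struct layout and names are illustrative, not QEMU's CPUARMState): concentrate the "how do I index the register file for this CPU mode" logic in one inline helper so callers never poke at the backing array directly.

    #include <stdint.h>

    typedef struct {
        uint64_t regs[64];   /* 32 Q registers stored as 64 doubleword halves */
    } VFPRegsSketch;

    static inline uint64_t *vfp_dreg(VFPRegsSketch *vfp, unsigned regno)
    {
        return &vfp->regs[regno];       /* one D register */
    }

    static inline uint64_t *vfp_qreg(VFPRegsSketch *vfp, unsigned regno)
    {
        return &vfp->regs[regno * 2];   /* a Q register aliases two Ds */
    }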
2018-01-25  target/arm: Change the type of vfp.regs  (Richard Henderson, 1 file changed, -1/+1)
All direct users of this field want an integral value. Drop all of the extra casting between uint64_t and float64. Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180119045438.28582-6-richard.henderson@linaro.org Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25  target/arm: Use pointers in neon tbl helper  (Richard Henderson, 1 file changed, -4/+4)
Rather than passing a regno to the helper, pass pointers to the vector register directly. This eliminates the need to pass in the environment pointer and reduces the number of places that directly access env->vfp.regs[]. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180119045438.28582-5-richard.henderson@linaro.org Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
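The shape of the interface change, simplified (the prototypes are illustrative with _old/_new suffixes, not the exact QEMU helper signatures; the same pattern applies to the zip/uzp and crypto helper entries below):

    #include <stdint.h>

    typedef struct CPUARMState CPUARMState;   /* opaque in this sketch */

    /* Before: the helper gets register numbers plus env and has to
     * index env->vfp.regs[] itself. */
    uint64_t helper_neon_tbl_old(CPUARMState *env, uint32_t rn, uint32_t rm);

    /* After: the translator resolves the registers to host pointers,
     * so the helper never touches env at all. */
    uint64_t helper_neon_tbl_new(void *vn, void *vm);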
2018-01-25  target/arm: Use pointers in neon zip/uzp helpers  (Richard Henderson, 1 file changed, -20/+22)
Rather than passing regnos to the helpers, pass pointers to the vector registers directly. This eliminates the need to pass in the environment pointer and reduces the number of places that directly access env->vfp.regs[]. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20180119045438.28582-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-25  target/arm: Use pointers in crypto helpers  (Richard Henderson, 1 file changed, -30/+38)
Rather than passing regnos to the helpers, pass pointers to the vector registers directly. This eliminates the need to pass in the environment pointer and reduces the number of places that directly access env->vfp.regs[]. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20180119045438.28582-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2018-01-11  target/arm: Make disas_thumb2_insn() generate its own UNDEF exceptions  (Peter Maydell, 1 file changed, -13/+10)
Refactor disas_thumb2_insn() so that it generates the code for raising an UNDEF exception for invalid insns, rather than returning a flag which the caller must check to see if it needs to generate the UNDEF code. This brings the function in to line with the behaviour of disas_thumb_insn() and disas_arm_insn(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1513080506-17703-1-git-send-email-peter.maydell@linaro.org
2017-12-29  tcg: Dynamically allocate TCGOps  (Richard Henderson, 1 file changed, -1/+1)
With no fixed array allocation, we can't overflow a buffer. This will be important as optimizations related to host vectors may expand the number of ops used. Use QTAILQ to link the ops together. Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-29  tcg: Remove TCGV_UNUSED* and TCGV_IS_UNUSED*  (Richard Henderson, 1 file changed, -15/+14)
These are now trivial sets and tests against NULL. Unwrap. Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-12-13  target/arm: Implement TT instruction  (Peter Maydell, 1 file changed, -1/+28)
Implement the TT instruction which queries the security state and access permissions of a memory location. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1512153879-5291-8-git-send-email-peter.maydell@linaro.org
2017-12-13  target/arm: Split M profile MNegPri mmu index into user and priv  (Peter Maydell, 1 file changed, -2/+6)
For M profile, we currently have an mmu index MNegPri for "requested execution priority negative". This fails to distinguish "requested execution priority negative, privileged" from "requested execution priority negative, usermode", but the two can return different results for MPU lookups. Fix this by splitting MNegPri into MNegPriPriv and MNegPriUser, and similarly for the Secure equivalent MSNegPri. This takes us from 6 M profile MMU modes to 8, which means we need to bump NB_MMU_MODES; this is OK since the point where we are forced to reduce TLB sizes is 9 MMU modes. (It would in theory be possible to stick with 6 MMU indexes: {mpu-disabled,user,privileged} x {secure,nonsecure} since in the MPU-disabled case the result of an MPU lookup is always the same for both user and privileged code. However we would then need to rework the TB flags handling to put user/priv into the TB flags separately from the mmuidx. Adding an extra couple of mmu indexes is simpler.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1512153879-5291-5-git-send-email-peter.maydell@linaro.org
2017-12-11  target/arm: Generate UNDEF for 32-bit Thumb2 insns  (Peter Maydell, 1 file changed, -1/+4)
The refactoring of commit 296e5a0a6c3935 has a nasty bug: it accidentally dropped the generation of code to raise the UNDEF exception when disas_thumb2_insn() returns nonzero. This means that 32-bit Thumb2 instruction patterns that ought to UNDEF just act like nops instead. This is likely to break any number of things, including the kernel's "disable the FPU and use the UNDEF exception to identify when to turn it back on again" trick. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1513006964-3371-1-git-send-email-peter.maydell@linaro.org Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2017-11-07  translate.c: Fix usermode big-endian AArch32 LDREXD and STREXD  (Peter Maydell, 1 file changed, -5/+34)
For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the lowest address is always Rt and the one at addr+4 is Rt2, even if the CPU is big-endian. Our implementation does these with a single 64-bit store, so if we're big-endian then we need to put the two 32-bit halves together in the opposite order to little-endian, so that they end up in the right places. We were trying to do this with the gen_aa32_frob64() function, but that is not correct for the usermode emulator, because in the usermode case there is a distinction between "load a 64 bit value" (which does a BE 64-bit access and doesn't need swapping) and "load two 32 bit values as one 64 bit access" (where we still need to do the swapping, like system mode BE32). Fixes: https://bugs.launchpad.net/qemu/+bug/1725267 Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1509622400-13351-1-git-send-email-peter.maydell@linaro.org
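A standalone sketch of the byte-lane rule described above (illustrative, not the QEMU implementation): when the pair is emitted as one 64-bit access, the doubleword has to be packed so that Rt still lands at the lower address.

    #include <stdint.h>

    static uint64_t pack_rt_rt2(uint32_t rt, uint32_t rt2, int big_endian)
    {
        /* A little-endian store puts the low half at the low address,
         * a big-endian store puts the high half there; either way Rt
         * must end up at addr and Rt2 at addr+4. */
        return big_endian ? ((uint64_t)rt << 32) | rt2
                          : ((uint64_t)rt2 << 32) | rt;
    }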
2017-10-31  fix WFI/WFE length in syndrome register  (Stefano Stabellini, 1 file changed, -1/+9)
WFI/E are often, but not always, 4 bytes long. When they are, we need to set ARM_EL_IL_SHIFT in the syndrome register. Pass the instruction length to HELPER(wfi), use it to decrement pc appropriately and to pass an is_16bit flag to syn_wfx, which sets ARM_EL_IL_SHIFT if needed. Set dc->insn in both arm_tr_translate_insn and thumb_tr_translate_insn. Signed-off-by: Stefano Stabellini <sstabellini@kernel.org> Message-id: alpine.DEB.2.10.1710241055160.574@sstabellini-ThinkPad-X260 [PMM: move setting of dc->insn for Thumb so it is correct for 32 bit insns] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
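A standalone sketch of the syndrome construction (the function name is hypothetical; the field positions match the ARM ESR layout, with the EC field in bits [31:26] and the IL bit at bit 25 as referenced above):

    #include <stdbool.h>
    #include <stdint.h>

    #define ARM_EL_EC_SHIFT 26   /* exception class field */
    #define ARM_EL_IL_SHIFT 25   /* instruction length bit */

    static uint32_t syn_wfx_sketch(uint32_t ec_wfx, bool is_16bit)
    {
        uint32_t syn = ec_wfx << ARM_EL_EC_SHIFT;
        if (!is_16bit) {
            syn |= 1u << ARM_EL_IL_SHIFT;   /* 32-bit WFI/WFE encoding */
        }
        return syn;
    }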
2017-10-27  Merge remote-tracking branch 'remotes/rth/tags/pull-dis-20171026' into staging  (Peter Maydell, 1 file changed, -2/+1)
Capstone disassembler
# gpg: Signature made Thu 26 Oct 2017 10:57:27 BST
# gpg: using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-dis-20171026:
  disas: Add capstone as submodule
  disas: Remove monitor_disas_is_physical
  ppc: Support Capstone in disas_set_info
  arm: Support Capstone in disas_set_info
  i386: Support Capstone in disas_set_info
  disas: Support the Capstone disassembler library
  disas: Remove unused flags arguments
  target/arm: Don't set INSN_ARM_BE32 for CONFIG_USER_ONLY
  target/arm: Move BE32 disassembler fixup
  target/ppc: Convert to disas_set_info hook
  target/i386: Convert to disas_set_info hook
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
#   target/i386/cpu.c
#   target/ppc/translate_init.c
2017-10-25  disas: Remove unused flags arguments  (Richard Henderson, 1 file changed, -2/+1)
Now that every target is using the disas_set_info hook, the flags argument is unused. Remove it. Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24  tcg: Initialize cpu_env generically  (Richard Henderson, 1 file changed, -4/+0)
This is identical for each target. So, move the initialization to common code. Move the variable itself out of tcg_ctx and name it cpu_env to minimize changes within targets. This also means we can remove tcg_global_reg_new_{ptr,i32,i64}, since there are no longer global-register temps created by targets. Reviewed-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24  tcg: define tcg_init_ctx and make tcg_ctx a pointer  (Emilio G. Cota, 1 file changed, -1/+1)
Groundwork for supporting multiple TCG contexts. The core of this patch is this change to tcg/tcg.h:
    > -extern TCGContext tcg_ctx;
    > +extern TCGContext tcg_init_ctx;
    > +extern TCGContext *tcg_ctx;
Note that for now we set *tcg_ctx to whatever TCGContext is passed to tcg_context_init -- in this case &tcg_init_ctx. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24  target/arm: check CF_PARALLEL instead of parallel_cpus  (Emilio G. Cota, 1 file changed, -2/+7)
Thereby decoupling the resulting translated code from the current state of the system. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-10-24  tcg: convert tb->cflags reads to tb_cflags(tb)  (Emilio G. Cota, 1 file changed, -3/+3)
Convert all existing readers of tb->cflags to tb_cflags, so that we use atomic_read and therefore avoid undefined behaviour in C11. Note that the remaining setters/getters of the field are protected by tb_lock, and therefore do not need conversion. Luckily all readers access the field via 'tb->cflags' (so no foo.cflags, bar->cflags in the code base), which makes the conversion easily scriptable:
    FILES=$(git grep 'tb->cflags' target include/exec/gen-icount.h \
            accel/tcg/translator.c | cut -f1 -d':' | sort | uniq)
    perl -pi -e 's/([^.>])tb->cflags/$1tb_cflags(tb)/g' $FILES
    perl -pi -e 's/([a-z->.]*)(->|\.)tb->cflags/tb_cflags($1$2tb)/g' $FILES
Then manually fixed the few errors that checkpatch reported. Compile-tested for all targets. Suggested-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
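A conceptual standalone version of the wrapper (QEMU's real tb_cflags() uses its own atomic_read() macro and TranslationBlock type; this sketch only shows the intent of turning the read into an atomic one):

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct {
        _Atomic uint32_t cflags;
        /* ... other TB fields ... */
    } TBSketch;

    static inline uint32_t tb_cflags_sketch(TBSketch *tb)
    {
        /* A relaxed atomic load is enough to avoid a C11 data race. */
        return atomic_load_explicit(&tb->cflags, memory_order_relaxed);
    }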
2017-10-12  target/arm: Implement SG instruction corner cases  (Peter Maydell, 1 file changed, -1/+22)
The common situation of the SG instruction is that it is executed from S&NSC memory by a CPU in NS state. That case is handled by v7m_handle_execute_nsc(). However the instruction also has defined behaviour in a couple of other cases:
 * SG instruction in NS memory (behaves as a NOP)
 * SG in S memory but CPU already secure (clears IT bits and does nothing else)
 * SG instruction in v8M without Security Extension (NOP)
These can be implemented in translate.c. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
2017-10-12  target/arm: Support some Thumb insns being always unconditional  (Peter Maydell, 1 file changed, -1/+47)
A few Thumb instructions are always unconditional even inside an IT block (as opposed to being UNPREDICTABLE if used inside an IT block): BKPT, the v8M SG instruction, and the A profile HLT (debug halt) instruction. This means we need to suppress the jump-over-instruction-on-condfail code generation (though the IT state still advances as usual and subsequent insns in the IT block may be conditional). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
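A standalone sketch of the "always unconditional" test (the real QEMU helper may differ in name and signature; the encodings in the comments are the architectural ones for the three instructions named above, and this sketch assumes a 32-bit insn is passed with its first halfword in the high half):

    #include <stdbool.h>
    #include <stdint.h>

    static bool thumb_insn_always_unconditional(uint32_t insn)
    {
        bool is_16bit = insn <= 0xffffu;   /* 16-bit insns in the low half */

        if (is_16bit && (insn & 0xff00u) == 0xbe00u) {
            return true;                   /* BKPT #imm8 */
        }
        if (is_16bit && (insn & 0xffc0u) == 0xba80u) {
            return true;                   /* HLT #imm6 (A profile) */
        }
        if (insn == 0xe97fe97fu) {
            return true;                   /* SG (32-bit, v8M) */
        }
        return false;
    }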
2017-10-12  target-arm: Simplify insn_crosses_page()  (Peter Maydell, 1 file changed, -21/+6)
Recent changes have left insn_crosses_page() more complicated than it needed to be:
 * it's only called from thumb_tr_translate_insn() so we know for certain that we're looking at a Thumb insn
 * the caller's check for dc->pc >= dc->next_page_start - 3 means that dc->pc can't possibly be 4 aligned, so there's no need to check that (the check was partly there to ensure that we didn't treat an ARM insn as Thumb, I think)
 * we now have thumb_insn_is_16bit() which lets us do a precise check of the length of the next insn, rather than opencoding an inaccurate check
Simplify it down to just loading the first half of the insn and calling thumb_insn_is_16bit() on it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
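A sketch consistent with the description above (treat it as a paraphrase of the simplified function, not the verbatim source):

    /* Only a 32-bit Thumb insn can cross into the following page, and the
     * caller has already established we are within 3 bytes of the page
     * boundary, so checking the first halfword is sufficient. */
    static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
    {
        uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);

        return !thumb_insn_is_16bit(s, insn);
    }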
2017-10-12  target/arm: Pull Thumb insn word loads up to top level  (Peter Maydell, 1 file changed, -70/+108)
Refactor the Thumb decode to do the loads of the instruction words at the top level rather than only loading the second half of a 32-bit Thumb insn in the middle of the decode. This is simple apart from the awkward case of Thumb1, where the BL/BLX prefix and suffix instructions live in what in Thumb2 is the 32-bit insn space. To handle these we decode enough to identify whether we're looking at a prefix/suffix that we handle as a 16 bit insn, or a prefix that we're going to merge with the following suffix to consider as a 32 bit insn. The translation of the 16 bit cases then moves from disas_thumb2_insn() to disas_thumb_insn(). The refactoring has the benefit that we don't need to pass the CPUARMState* down into the decoder code any more, but the major reason for doing this is that some Thumb instructions must be always unconditional regardless of the IT state bits, so we need to know the whole insn before we emit the "skip this insn if the IT bits and cond state tell us to" code. (The always unconditional insns are BKPT, HLT and SG; the last of these is 32 bits.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
2017-10-12  target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1  (Peter Maydell, 1 file changed, -2/+1)
The code which implements the Thumb1 split BL/BLX instructions is guarded by a check on "not M or THUMB2". All we really need to check here is "not THUMB2" (and we assume that elsewhere too, eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns). This doesn't change behaviour because all M profile cores have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2. (v6M implements a very restricted subset of Thumb2, but we can cross that bridge when we get to it with appropriate feature bits.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
2017-10-12  target/arm: Implement secure function return  (Peter Maydell, 1 file changed, -2/+12)
Secure function return happens when a non-secure function has been called using BLXNS and so has a particular magic LR value (either 0xfefffffe or 0xfeffffff). The function return via BX behaves specially when the new PC value is this magic value, in the same way that exception returns are handled. Adjust our BX excret guards so that they recognize the function return magic number as well, and perform the function-return unstacking in do_v7m_exception_exit(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
2017-10-12  target/arm: Implement BLXNS  (Peter Maydell, 1 file changed, -2/+15)
Implement the BLXNS instruction, which allows secure code to call non-secure code. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
2017-10-12  target/arm: Add M profile secure MMU index values to get_a32_user_mem_index()  (Peter Maydell, 1 file changed, -0/+4)
Add the M profile secure MMU index values to the switch in get_a32_user_mem_index() so that LDRT/STRT work correctly rather than asserting at translate time. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org
2017-10-10  tcg: remove addr argument from lookup_tb_ptr  (Emilio G. Cota, 1 file changed, -4/+1)
It is unlikely that we will ever want to call this helper passing an argument other than the current PC. So just remove the argument, and use the pc we already get from cpu_get_tb_cpu_state. This change paves the way to having a common "tb_lookup" function. Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
2017-09-07  Merge remote-tracking branch 'remotes/pmaydell/tags/pull-target-arm-20170907' into staging  (Peter Maydell, 1 file changed, -4/+50)
target-arm:
 * cleanups converting to DEFINE_PROP_LINK
 * allwinner-a10: mark as not user-creatable
 * initial patches working towards ARMv8M support
 * implement generating aborts on memory transaction failures
 * make BXJ behave correctly (ie not UNDEF) on ARMv6-and-later
# gpg: Signature made Thu 07 Sep 2017 14:26:07 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170907: (31 commits)
  target/arm: Add Jazelle feature
  target/arm: Implement new do_transaction_failed hook
  hw/arm: Set ignore_memory_transaction_failures for most ARM boards
  boards.h: Define new flag ignore_memory_transaction_failures
  target/arm: Implement BXNS, and banked stack pointers
  target/arm: Move regime_is_secure() to target/arm/internals.h
  target/arm: Make CFSR register banked for v8M
  target/arm: Make MMFAR banked for v8M
  target/arm: Make CCR register banked for v8M
  target/arm: Make MPU_CTRL register banked for v8M
  target/arm: Make MPU_RNR register banked for v8M
  target/arm: Make MPU_RBAR, MPU_RLAR banked for v8M
  target/arm: Make MPU_MAIR0, MPU_MAIR1 registers banked for v8M
  target/arm: Make VTOR register banked for v8M
  nvic: Add NS alias SCS region
  target/arm: Make CONTROL register banked for v8M
  target/arm: Make FAULTMASK register banked for v8M
  target/arm: Make PRIMASK register banked for v8M
  target/arm: Make BASEPRI register banked for v8M
  target/arm: Add MMU indexes for secure v8M
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# Conflicts:
#   target/arm/translate.c
2017-09-07  target/arm: Add Jazelle feature  (Portia Stephens, 1 file changed, -1/+1)
This adds a feature bit indicating support of the (trivial) Jazelle implementation if ARM_FEATURE_V6 is set or if the processor is arm926 or arm1026. This fixes the issue that any BXJ instruction will result in an illegal_op. BXJ instructions will now check if the architecture supports ARM_FEATURE_JAZELLE. Signed-off-by: Portia Stephens <portia.stephens@xilinx.com> Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> Message-id: 20170905211232.11092-1-portia.stephens@xilinx.com [PMM: edited commit message and comment text a bit] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-09-07  target/arm: Implement BXNS, and banked stack pointers  (Peter Maydell, 1 file changed, -1/+41)
Implement the BXNS v8M instruction, which is like BX but will do a jump-and-switch-to-NonSecure if the branch target address has bit 0 clear. This is the first piece of code which implements "switch to the other security state", so the commit also includes the code to switch the stack pointers around, which is the only complicated part of switching security state. BLXNS is more complicated than just "BXNS but set the link register", so we leave it for a separate commit. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1503414539-28762-21-git-send-email-peter.maydell@linaro.org
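A trivial standalone sketch of the branch-target test described above (the function name is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    static bool bxns_switches_to_nonsecure(uint32_t dest)
    {
        /* BXNS jumps-and-switches to Non-secure only when bit 0 of the
         * target address is clear; bit 0 set keeps the current Secure
         * state. */
        return (dest & 1u) == 0;
    }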
2017-09-07  target/arm: Make CONTROL register banked for v8M  (Peter Maydell, 1 file changed, -1/+1)
Make the CONTROL register banked if v8M security extensions are enabled. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1503414539-28762-10-git-send-email-peter.maydell@linaro.org
2017-09-07  target/arm: Add state field, feature bit and migration for v8M secure state  (Peter Maydell, 1 file changed, -1/+7)
As the first step in implementing ARM v8M's security extension:
 * add a new feature bit ARM_FEATURE_M_SECURITY
 * add the CPU state field that indicates whether the CPU is currently in the secure state
 * add a migration subsection for this new state (we will add the Secure copies of banked register state to this subsection in later patches)
 * add a #define for the one new-in-v8M exception type
 * make the CPU debug log print S/NS status
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 1503414539-28762-4-git-send-email-peter.maydell@linaro.org
2017-09-06  target/arm: Perform per-insn cross-page check only for Thumb  (Richard Henderson, 1 file changed, -25/+33)
ARM is a fixed-length ISA and we can compute the page crossing condition exactly once during init_disas_context. Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06  target/arm: Split out thumb_tr_translate_insn  (Richard Henderson, 1 file changed, -41/+80)
We need not check for ARM vs Thumb state in order to dispatch disassembly of every instruction. Tested-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-09-06  target/arm: Move ss check to init_disas_context  (Richard Henderson, 1 file changed, -5/+8)
We can check for single-step just once. Reviewed-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Lluís Vilanova <vilanova@ac.upc.edu> Signed-off-by: Richard Henderson <rth@twiddle.net>