path: root/target/arm
Age | Commit message | Author | Files/Lines
2017-02-28 | armv7m: Raise correct kind of UsageFault for attempts to execute ARM code | Peter Maydell | 3 files, -2/+11
M profile doesn't implement ARM, and the architecturally required behaviour for attempts to execute with the Thumb bit clear is to generate a UsageFault with the CFSR INVSTATE bit set. We were incorrectly implementing this as generating an UNDEFINSTR UsageFault; fix this. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
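For reference, both UsageFault status bits involved live in the UFSR half of the v7-M CFSR; a minimal sketch of the distinction, with the masks written out by hand rather than taken from any particular header:

```c
/* v7-M CFSR UsageFault status bits (the UFSR occupies CFSR[31:16]).
 * Plain masks for illustration; QEMU's own headers use FIELD() macros. */
#define CFSR_UNDEFINSTR (1U << 16)  /* undefined instruction executed */
#define CFSR_INVSTATE   (1U << 17)  /* invalid EPSR.T state, e.g. trying to execute ARM code */
```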
2017-02-28 | armv7m: Check exception return consistency | Peter Maydell | 2 files, -12/+112
Implement the exception return consistency checks described in the v7M pseudocode ExceptionReturn(). Inspired by a patch from Michael Davidsaver's series, but this is a reimplementation from scratch based on the ARM ARM pseudocode. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
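As a rough illustration of what these consistency checks look at: the low-order bits of EXC_RETURN must be one of the architecturally valid patterns (the non-FP v7-M values are shown; this is a sketch, not the QEMU code, and the fault helper is hypothetical):

```c
/* Sketch: validate the low nibble of EXC_RETURN on exception return. */
switch (excret & 0xf) {
case 0x1:  /* return to Handler mode, use MSP */
case 0x9:  /* return to Thread mode, use MSP */
case 0xd:  /* return to Thread mode, use PSP */
    break;
default:
    /* Illegal EXC_RETURN: take a UsageFault with CFSR.INVPC set. */
    take_usagefault_invpc();  /* hypothetical helper */
    break;
}
```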
2017-02-28 | armv7m: Extract "exception taken" code into functions | Peter Maydell | 1 file, -50/+68
Extract the code from the tail end of arm_v7m_do_interrupt() which enters the exception handler into a pair of utility functions v7m_exception_taken() and v7m_push_stack(), which correspond roughly to the pseudocode PushStack() and ExceptionTaken(). This also requires us to move the arm_v7m_load_vector() utility routine up so we can call it. Handling illegal exception returns has some cases where we want to take a UsageFault either on an existing stack frame or with a new stack frame but with a specific LR value, so we want to be able to call these without having to go via arm_v7m_cpu_do_interrupt(). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 | armv7m: Simpler and faster exception start | Michael Davidsaver | 1 file, -6/+9
All the places in armv7m_cpu_do_interrupt() which pend an exception in the NVIC are doing so for synchronous exceptions. We know that we will always take some exception in this case, so we can just acknowledge it immediately, rather than returning and then immediately being called again because the NVIC has raised its outbound IRQ line. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> [PMM: tweaked commit message; added DEBUG to the set of exceptions we handle immediately, since it is synchronous when it results from the BKPT instruction] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 | armv7m: Remove unused armv7m_nvic_acknowledge_irq() return value | Peter Maydell | 2 files, -2/+2
Having armv7m_nvic_acknowledge_irq() return the new value of env->v7m.exception and its one caller assign the return value back to env->v7m.exception is pointless. Just make the return type void instead. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 | armv7m: Escalate exceptions to HardFault if necessary | Michael Davidsaver | 1 file, -2/+0
The v7M exception architecture requires that if a synchronous exception cannot be taken immediately (because it is disabled or at too low a priority) then it should be escalated to HardFault (and the HardFault exception is then taken). Implement this escalation logic. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> [PMM: extracted from another patch] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
2017-02-28 | armv7m: Fix condition check for taking exceptions | Peter Maydell | 2 files, -8/+16
The M profile condition for when we can take a pending exception or interrupt is not the same as that for A/R profile. The code originally copied from the A/R profile version of the cpu_exec_interrupt function only worked by chance for the very simple case of exceptions being masked by PRIMASK. Replace it with a call to a function in the NVIC code that correctly compares the priority of the pending exception against the current execution priority of the CPU. [Michael Davidsaver's patchset had a patch to do something similar but the implementation ended up being a rewrite.] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
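The shape of the M-profile check ends up roughly as sketched below; armv7m_nvic_can_take_pending_exception() is the NVIC-side helper referred to above, and the surrounding structure is illustrative rather than the exact hunk:

```c
/* Sketch of the M-profile cpu_exec_interrupt hook: ask the NVIC whether
 * the pending exception's priority beats the current execution priority. */
if ((interrupt_request & CPU_INTERRUPT_HARD)
    && armv7m_nvic_can_take_pending_exception(env->nvic)) {
    cs->exception_index = EXCP_IRQ;
    cc->do_interrupt(cs);
    return true;
}
return false;
```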
2017-02-28 | Add missing fp_access_check() to aarch64 crypto instructions | Nick Reilly | 1 file, -0/+12
The aarch64 crypto instructions for AES and SHA are missing the check for whether the FPU is enabled. Signed-off-by: Nick Reilly <nreilly@blackberry.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
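The fix follows the usual translate-a64.c pattern: each affected decode function bails out early if FP/SIMD access is not permitted. A minimal sketch of that pattern:

```c
/* In the crypto decode functions: if FP/SIMD access traps, generate the
 * trap (done inside fp_access_check()) and emit nothing else. */
if (!fp_access_check(s)) {
    return;
}
```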
2017-02-24 | tcg: enable MTTCG by default for ARM on x86 hosts | Alex Bennée | 1 file, -0/+3
This enables the multi-threaded system emulation by default for ARMv7 and ARMv8 guests using the x86_64 TCG backend. This is because on the guest side: the ARM translate.c/translate-a64.c code has been converted to use MTTCG-safe atomic primitives and to emit the appropriate barrier ops, and the ARM machine has been updated to hold the BQL when modifying shared cross-vCPU state and to defer powerctl changes to async safe work. All the host backends support the barrier and atomic primitives but need to provide same-or-better support for normal load/store operations. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Acked-by: Peter Maydell <peter.maydell@linaro.org> Tested-by: Pranith Kumar <bobby.prani@gmail.com> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
2017-02-24 | target-arm: ensure all cross vCPUs TLB flushes complete | Alex Bennée | 1 file, -96/+69
Previously flushes on other vCPUs would only get serviced when they exited their TranslationBlocks. While this isn't overly problematic, it violates the semantics of TLB flush from the point of view of the source vCPU. To solve this we call the cputlb *_all_cpus_synced() functions to do the flushes, which ensures all flushes are completed by the time the vCPU next schedules its own work. As the TLB instructions are modelled as CP writes, the TB ends at this point, meaning cpu->exit_request will be checked before the next instruction is executed. Deferring the work until the architectural sync point is a possible future optimisation. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
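As an illustration of the API in question (the cputlb helper name is real; the TLBI write handler shown and the ENV_GET_CPU idiom are sketched from the QEMU tree of that era), a broadcast TLB invalidate now looks roughly like:

```c
/* Sketch: a TLBIALLIS-style broadcast invalidate using the synced helper,
 * which guarantees completion before the issuing vCPU runs more code. */
static void tlbiall_is_write(CPUARMState *env, const ARMCPRegInfo *ri,
                             uint64_t value)
{
    CPUState *cs = ENV_GET_CPU(env);

    tlb_flush_all_cpus_synced(cs);
}
```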
2017-02-24 | target-arm: don't generate WFE/YIELD calls for MTTCG | Alex Bennée | 3 files, -6/+29
The WFE and YIELD instructions are really only hints and in TCG's case they were useful to move the scheduling on from one vCPU to the next. In the parallel context (MTTCG) this just causes an unnecessary cpu_exit and contention of the BQL. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
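Conceptually the change gates the hint on whether we are running multi-threaded; a hedged sketch (the parallel_cpus flag was the MTTCG-era way of asking this, and the details may differ from the actual patch):

```c
/* Sketch: only turn YIELD into a TB exit when running single-threaded;
 * under MTTCG the scheduling hint is simply treated as a NOP. */
if (!parallel_cpus) {
    gen_set_pc_im(s, s->pc);
    s->is_jmp = DISAS_YIELD;
}
```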
2017-02-24 | target-arm/powerctl: defer cpu reset work to CPU context | Alex Bennée | 7 files, -74/+201
When switching a new vCPU on we want to complete a bunch of the setup work before we start scheduling the vCPU thread. To do this cleanly we defer vCPU setup to async work which will run in the vCPU's execution context as the thread is woken up. The scheduling of the work will kick the vCPU awake. This avoids potential races in MTTCG system emulation. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
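The deferral uses the generic per-CPU async work queue; a hedged sketch of the call shape (async_run_on_cpu() and RUN_ON_CPU_HOST_PTR() are real QEMU APIs, the worker function name and argument packing are illustrative):

```c
/* Sketch: queue the power-on setup to run in the target vCPU's own
 * thread; scheduling the work also kicks the vCPU awake. */
async_run_on_cpu(target_cpu_state, arm_set_cpu_on_async_work,
                 RUN_ON_CPU_HOST_PTR(info));
```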
2017-02-24 | cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap | Alex Bennée | 1 file, -44/+72
While the varargs approach was flexible, the original MTTCG ended up having to munge the bits into a bitmap so the data could be used in deferred work helpers. Instead of hiding that in cputlb we push the change into the API to make it take a bitmap of MMU indexes instead. For ARM some of the resulting flushes end up being quite long, so to aid readability I've tended to move the index shifting to a new line so all the bits being or-ed together line up nicely, for example: tlb_flush_page_by_mmuidx(other_cs, pageaddr, (1 << ARMMMUIdx_S1SE1) | (1 << ARMMMUIdx_S1SE0)); Signed-off-by: Alex Bennée <alex.bennee@linaro.org> [AT: SPARC parts only] Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com> Reviewed-by: Richard Henderson <rth@twiddle.net> [PM: ARM parts only] Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-24 | tcg: drop global lock during TCG code execution | Jan Kiszka | 2 files, -4/+45
This finally allows TCG to benefit from the iothread introduction: Drop the global mutex while running pure TCG CPU code. Reacquire the lock when entering MMIO or PIO emulation, or when leaving the TCG loop. We have to revert a few optimizations for the current TCG threading model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not kicking it in qemu_cpu_kick. We also need to disable RAM block reordering until we have a more efficient locking mechanism at hand. Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here. These numbers demonstrate where we gain something: 20338 jan 20 0 331m 75m 6904 R 99 0.9 0:50.95 qemu-system-arm 20337 jan 20 0 331m 75m 6904 S 20 0.9 0:26.50 qemu-system-arm The guest CPU was fully loaded, but the iothread could still run mostly independently on a second core. Without the patch we don't get beyond 32206 jan 20 0 330m 73m 7036 R 82 0.9 1:06.00 qemu-system-arm 32204 jan 20 0 330m 73m 7036 S 21 0.9 0:17.03 qemu-system-arm We don't benefit significantly, though, when the guest is not fully loading a host CPU. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com> [FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex] Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com> [EGC: fixed iothread lock for cpu-exec IRQ handling] Signed-off-by: Emilio G. Cota <cota@braap.org> [AJB: -smp single-threaded fix, clean commit msg, BQL fixes] Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Pranith Kumar <bobby.prani@gmail.com> [PM: target-arm changes] Acked-by: Peter Maydell <peter.maydell@linaro.org>
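The resulting pattern is that TCG execution itself runs without the BQL and the lock is only taken around device emulation; an illustrative sketch (not the literal hunk, and the surrounding variables are assumed from context):

```c
/* Sketch: acquire the BQL only around the MMIO dispatch itself. */
qemu_mutex_lock_iothread();
memory_region_dispatch_read(mr, addr, &val, size, attrs);
qemu_mutex_unlock_iothread();
```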
2017-02-10 | target-arm: Enable vPMU support under TCG mode | Wei Huang | 2 files, -7/+2
This patch contains several fixes to enable vPMU under TCG mode. It first removes the kvm_enabled() check when unsetting ARM_FEATURE_PMU; with that done, the .pmu option can be used to turn vPMU on or off under TCG mode. Secondly, the PMU node of the DT table is now created under TCG. The last fix is to disable the masking of the PMUver field of ID_AA64DFR0_EL1. Signed-off-by: Wei Huang <wei@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1486504171-26807-5-git-send-email-wei@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-10 | target-arm: Add support for PMU register PMINTENSET_EL1 | Wei Huang | 2 files, -2/+10
This patch adds access support for PMINTENSET_EL1. Signed-off-by: Wei Huang <wei@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1486504171-26807-4-git-send-email-wei@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-10 | target-arm: Add support for AArch64 PMU register PMXEVTYPER_EL0 | Wei Huang | 2 files, -6/+25
In order to support Linux perf, which uses the PMXEVTYPER register, this patch adds read/write access support for PMXEVTYPER. The access is CONSTRAINED UNPREDICTABLE when PMSELR is not 0x1f. Additionally this patch adds support for PMXEVTYPER_EL0. Signed-off-by: Wei Huang <wei@redhat.com> Message-id: 1486504171-26807-3-git-send-email-wei@redhat.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-10 | target-arm: Add support for PMU register PMSELR_EL0 | Wei Huang | 2 files, -6/+22
This patch adds support for AArch64 register PMSELR_EL0. The existing PMSELR definition is revised accordingly. Signed-off-by: Wei Huang <wei@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> [PMM: Moved #ifndef CONFIG_USER_ONLY to cover new regdefs] Message-id: 1486504171-26807-2-git-send-email-wei@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
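For readers unfamiliar with how such registers are wired up: a new AArch64 system register is normally described with an ARMCPRegInfo entry along the lines below. The encoding is the architectural one for PMSELR_EL0; the helper and field names (pmreg_access, pmselr_write, c9_pmselr) are assumptions for illustration:

```c
/* Sketch of an ARMCPRegInfo entry for PMSELR_EL0 (op0=3 op1=3 CRn=9 CRm=12 op2=5). */
{ .name = "PMSELR_EL0", .state = ARM_CP_STATE_AA64,
  .opc0 = 3, .opc1 = 3, .crn = 9, .crm = 12, .opc2 = 5,
  .access = PL0_RW, .accessfn = pmreg_access,
  .fieldoffset = offsetof(CPUARMState, cp15.c9_pmselr),
  .writefn = pmselr_write, .raw_writefn = raw_write },
```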
2017-02-07 | target/arm: A32, T32: Create Instruction Syndromes for Data Aborts | Peter Maydell | 3 files, -63/+149
Add support for generating the ISS (Instruction Specific Syndrome) for Data Abort exceptions taken from AArch32. These syndromes are used by hypervisors for example to trap and emulate memory accesses. This is the equivalent for AArch32 guests of the work done for AArch64 guests in commit aaa1f954d4cab243. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-02-07 | target/arm: Abstract out pbit/wbit tests in ARM ldr/str decode | Peter Maydell | 1 file, -3/+6
In the ARM ldr/str decode path, rather than directly testing "insn & (1 << 21)" and "insn & (1 << 24)", abstract these bits out into wbit and pbit local flags. (We will want to do more tests against them to determine whether we need to provide syndrome information.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
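A minimal sketch of the refactor, using the bit positions named in the message (variable names as described; the exact types in the real patch may differ):

```c
/* Name the P (pre-index) and W (writeback) bits of the A32 load/store
 * encoding once, instead of open-coding insn & (1 << n) at each use. */
bool pbit = extract32(insn, 24, 1);
bool wbit = extract32(insn, 21, 1);
```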
2017-02-07 | arm: Correctly handle watchpoints for BE32 CPUs | Julian Brown | 3 files, -0/+30
In BE32 mode, sub-word size watchpoints can fail to trigger because the address of the access is adjusted in the opcode helpers before being compared with the watchpoint registers. This patch reverses the address adjustment before performing the comparison with the help of a new CPUClass hook. This version of the patch augments and tidies up comments a little. Signed-off-by: Julian Brown <julian@codesourcery.com> Message-id: caaf64ffc72f6ae183015337b7afdbd4b8989cb6.1484929304.git.julian@codesourcery.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
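The new hook lets the target undo its own address munging before the watchpoint comparison; a hedged sketch of what the ARM implementation amounts to (the hook name matches this series, but the exact adjustment and helper names are assumptions):

```c
/* Sketch: reverse the BE32 sub-word address swizzle so watchpoints are
 * compared against the address the guest actually used. */
static vaddr arm_adjust_watchpoint_address(CPUState *cs, vaddr addr, int len)
{
    ARMCPU *cpu = ARM_CPU(cs);

    if (arm_sctlr_b(&cpu->env) && len < 4) {
        addr ^= 4 - len;   /* byte accesses: ^3, halfword accesses: ^2 */
    }
    return addr;
}
```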
2017-02-07 | Fix Thumb-1 BE32 execution and disassembly. | Julian Brown | 2 files, -1/+32
Thumb-1 code has some issues in BE32 mode (as currently implemented). In short, since bytes are swapped within words at load time for BE32 executables, this also swaps pairs of adjacent Thumb-1 instructions. This patch un-swaps those pairs of instructions again, both for execution, and for disassembly. (The previous version of the patch always read four bytes in arm_read_memory_func and then extracted the proper two bytes, in a probably misguided attempt to match the behaviour of actual hardware as described by e.g. the ARM9TDMI TRM, section 3.3 "Endian effects for instruction fetches". It's less complicated to just read the correct two bytes though.) Signed-off-by: Julian Brown <julian@codesourcery.com> Message-id: ca20462a044848000370318a8bd41dd0a4ed273f.1484929304.git.julian@codesourcery.com Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-07 | target/arm: Add cfgend parameter for ARM CPU selection. | Julian Brown | 2 files, -0/+20
Add a new "cfgend" property which selects whether the CPU resets into big-endian mode or not. This setting affects whether we reset with SCTLR_B (ARMv6 and earlier) or SCTLR_EE (ARMv7 and later) set. Signed-off-by: Julian Brown <julian@codesourcery.com> Message-id: 11420d1c49636c1790e60578ee996e51f0f0b835.1484929304.git.julian@codesourcery.com [PMM: use error_report_err() rather than error_report(); move the integratorcp changes to their own patch; drop an unnecessary extra #include; rephrase commit message accordingly; move setting of reset_sctlr above registration of cpregs so it actually has an effect] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-01 | arm: add trailing ; after MISMATCH_CHECK | Michael S. Tsirkin | 1 file, -49/+49
Macro calls without a trailing ; look weird in C; this only works as a side effect of how QEMU_BUILD_BUG_ON is implemented. Fix this up. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-02-01 | arm: better stub version for MISMATCH_CHECK | Michael S. Tsirkin | 1 file, -1/+3
The stub version of MISMATCH_CHECK is empty, so it's easy for people not building KVM on ARM to misuse it. Use QEMU_BUILD_BUG_ON, as in the non-stub version, to make it easier to catch bugs. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 | armv7m: R14 should reset to 0xffffffff | Peter Maydell | 1 file, -0/+3
For M profile (unlike A profile) the reset value of R14 is specified as 0xffffffff. (The rationale is that this is an illegal exception return value, so if guest code tries to return to it it will result in a helpful exception.) Registers r0 to r12 and the flags are architecturally UNKNOWN on reset, so we leave those at zero. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1485285380-10565-11-git-send-email-peter.maydell@linaro.org
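The change itself is tiny; roughly (a sketch of the reset-path assignment, inside the existing M-profile branch of the CPU reset code):

```c
/* M profile: R14 resets to an illegal EXC_RETURN value so that a stray
 * "return" out of reset code faults in an obvious way. */
env->regs[14] = 0xffffffff;
```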
2017-01-27 | armv7m: FAULTMASK should be 0 on reset | Michael Davidsaver | 1 file, -4/+6
For M profile CPUs, FAULTMASK should be 0 on reset, like PRIMASK. QEMU stores FAULTMASK in the PSTATE F bit, so (as with PRIMASK in the I bit) we have to clear these to undo the A profile default of 1. Update the comment accordingly and move it so that it's closer to the code it's referring to. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1485285380-10565-10-git-send-email-peter.maydell@linaro.org [PMM: rewrote commit message, moved comments] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 | armv7m: Report no-coprocessor faults correctly | Peter Maydell | 3 files, -0/+13
For v7M attempts to access a nonexistent coprocessor are reported differently from plain undefined instructions (as UsageFaults of type NOCP rather than type UNDEFINSTR). Split them out into a new EXCP_NOCP so we can report the FSR value correctly. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1485285380-10565-8-git-send-email-peter.maydell@linaro.org
2017-01-27 | armv7m: set CFSR.UNDEFINSTR on undefined instructions | Michael Davidsaver | 1 file, -0/+1
When we take an exception for an undefined instruction, set the appropriate CFSR bit. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1485285380-10565-7-git-send-email-peter.maydell@linaro.org [PMM: tweaked commit message, comment] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 | armv7m: honour CCR.STACKALIGN on exception entry | Michael Davidsaver | 1 file, -4/+2
The CCR.STACKALIGN bit controls whether the CPU is supposed to force 8-byte alignment of the stack pointer on entry to the exception handler. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Message-id: 1485285380-10565-6-git-send-email-peter.maydell@linaro.org [PMM: commit message and comment tweaks] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 | armv7m: add state for v7M CCR, CFSR, HFSR, DFSR, MMFAR, BFAR | Peter Maydell | 3 files, -2/+69
Add the structure fields, VMState fields, reset code and macros for the v7M system control registers CCR, CFSR, HFSR, DFSR, MMFAR and BFAR. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1485285380-10565-4-git-send-email-peter.maydell@linaro.org
2017-01-27 | target/arm: Drop IS_M() macro | Peter Maydell | 3 files, -8/+2
We only use the IS_M() macro in two places, and it's a bit of a namespace grab to put in cpu.h. Drop it in favour of just explicitly calling arm_feature() in the places where it was used. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1485285380-10565-2-git-send-email-peter.maydell@linaro.org
2017-01-27 | armv7m: Clear FAULTMASK on return from non-NMI exceptions | Michael Davidsaver | 1 file, -1/+6
FAULTMASK must be cleared on return from all exceptions other than NMI. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1484937883-1068-7-git-send-email-peter.maydell@linaro.org
2017-01-27 | armv7m: Fix reads of CONTROL register bit 1 | Michael Davidsaver | 4 files, -17/+32
The v7m CONTROL register bit 1 is SPSEL, which indicates the stack being used. We were storing this information not in v7m.control but in the separate v7m.other_sp structure field. Unfortunately, the code handling reads of the CONTROL register didn't take account of this, and so if SPSEL was updated by an exception entry or exit then a subsequent guest read of CONTROL would get the wrong value. Using a separate structure field doesn't really gain us anything in efficiency, so drop this unnecessary complexity in favour of simply storing all the bits in v7m.control. This is a migration compatibility break for M profile CPUs only. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1484937883-1068-6-git-send-email-peter.maydell@linaro.org [PMM: rewrote commit message; use deposit32(); use FIELD to define constants for masking and shifting of CONTROL register fields ] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
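The PMM note above mentions FIELD() and deposit32(); for readers unfamiliar with those helpers, the pattern looks roughly like this (register layout per v7-M; the exact QEMU constant names are assumed from the FIELD() naming convention):

```c
/* hw/registerfields.h-style definition of the SPSEL bit of CONTROL... */
FIELD(V7M_CONTROL, SPSEL, 1, 1)

/* ...and updating just that bit when exception entry/exit switches stacks. */
env->v7m.control = deposit32(env->v7m.control,
                             R_V7M_CONTROL_SPSEL_SHIFT,
                             R_V7M_CONTROL_SPSEL_LENGTH,
                             new_spsel);
```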
2017-01-27 | armv7m: Explicit error for bad vector table | Michael Davidsaver | 1 file, -1/+25
Give an explicit error and abort when a load from the vector table fails. Architecturally this should HardFault (which will then immediately fail to load the HardFault vector and go into Lockup). Since we don't model Lockup, just report this guest error via cpu_abort(). This is more helpful than the previous behaviour of reading a zero, which is the address of the reset stack pointer and not a sensible location to jump to. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1484937883-1068-4-git-send-email-peter.maydell@linaro.org [PMM: expanded commit message] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 | armv7m: Replace armv7m.hack with unassigned_access handler | Michael Davidsaver | 2 files, -6/+34
For v7m we need to catch attempts to execute from special addresses at 0xfffffff0 and above. Previously we did this with the aid of a hacky special purpose lump of memory in the address space and a check in translate.c for whether we were translating code at those addresses. We can implement this more cleanly using a CPU unassigned access handler which throws the exception if the unassigned access is for one of the special addresses. Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 1484937883-1068-3-git-send-email-peter.maydell@linaro.org [PMM: * drop the deletion of the "don't interrupt if PC is magic" code in arm_v7m_cpu_exec_interrupt() -- this is still required * don't generate an exception for unassigned accesses which aren't to the magic address -- although doing this is in theory correct in practice it will break currently working guests which rely on the RAZ/WI behaviour when they touch devices which we haven't modelled. * trigger EXCP_EXCEPTION_EXIT on is_exec, not !is_write ] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-27 | armv7m: MRS/MSR: handle unprivileged access | Michael Davidsaver | 1 file, -42/+37
The MRS and MSR instruction handling has a number of flaws: * unprivileged accesses should only be able to read CONTROL and the xPSR subfields, and only write APSR (others RAZ/WI) * privileged access should not be able to write xPSR subfields other than APSR * accesses to unimplemented registers should log as guest errors, not abort QEMU Signed-off-by: Michael Davidsaver <mdavidsaver@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1484937883-1068-2-git-send-email-peter.maydell@linaro.org [PMM: rewrote commit message] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-24 | migration: extend VMStateInfo | Jianjun Duan | 1 file, -4/+10
The current migration code cannot handle some data structures, such as the QTAILQ in qemu/queue.h. Here we extend the signatures of put/get in VMStateInfo so that customized handling is supported; put now returns an int. Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Jianjun Duan <duanj@linux.vnet.ibm.com> Message-Id: <1484852453-12728-2-git-send-email-duanj@linux.vnet.ibm.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
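For context, the extended callback signatures end up along these lines (reconstructed from the description above; treat the exact parameter list as an assumption rather than the final API):

```c
/* Sketch: VMStateInfo get/put with extra field/vmdesc context and an int
 * return value for put, enabling customized container handling. */
struct VMStateInfo {
    const char *name;
    int (*get)(QEMUFile *f, void *pv, size_t size, VMStateField *field);
    int (*put)(QEMUFile *f, void *pv, size_t size, VMStateField *field,
               QJSON *vmdesc);
};
```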
2017-01-20 | Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging | Peter Maydell | 1 file, -1/+1
* QOM interface fix (Eduardo) * RTC fixes (Gaohuai, Igor) * Memory leak fixes (Li Qiang, me) * Ctrl-a b regression (Marc-André) * Stubs cleanups and fixes (Leif, me) * hxtool tweak (me) * HAX support (Vincent) * QemuThread, exec.c and SCSI fixes (Roman, Xinhua, me) * PC_COMPAT_2_8 fix (Marcelo) * stronger bitmap assertions (Peter) # gpg: Signature made Fri 20 Jan 2017 12:49:01 GMT # gpg: using RSA key 0xBFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: (35 commits) pc.h: move x-mach-use-reliable-get-clock compat entry to PC_COMPAT_2_8 bitmap: assert that start and nr are non negative Revert "win32: don't run subprocess tests on Mingw32 platform" hax: add Darwin support Plumb the HAXM-based hardware acceleration support target/i386: Add Intel HAX files kvm: move cpu synchronization code KVM: PPC: eliminate unnecessary duplicate constants ramblock-notifier: new char: fix ctrl-a b not working exec: Add missing rcu_read_unlock x86: ioapic: fix fail migration when irqchip=split x86: ioapic: dump version for "info ioapic" x86: ioapic: add traces for ioapic hxtool: emit Texinfo headings as @subsection qemu-thread: fix qemu_thread_set_name() race in qemu_thread_create() serial: fix memory leak in serial exit scsi-block: fix direction of BYTCHK test for VERIFY commands pc: fix crash in rtc_set_memory() if initial cpu is marked as hotplugged acpi: filter based on CONFIG_ACPI_X86 rather than TARGET ... # Conflicts: # include/hw/i386/pc.h
2017-01-20 | target-arm: Enable EL2 feature bit on A53 and A57 | Peter Maydell | 3 files, -0/+16
Enable the ARM_FEATURE_EL2 bit on Cortex-A53 and Cortex-A57, since this is all now sufficiently implemented to work with the GICv3. We provide the usual CPU property to disable it for backwards compatibility with the older virt boards. In this commit, we disable the EL2 feature on the virt and ZynqMP boards, so there is no overall effect. Another commit will expose a board-level property to allow the user to enable EL2. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> Message-id: 1483977924-14522-18-git-send-email-peter.maydell@linaro.org
2017-01-20 | target/arm/psci.c: If EL2 implemented, start CPUs in EL2 | Peter Maydell | 1 file, -7/+18
The PSCI spec states that a CPU_ON call should cause the new CPU to be started in the highest implemented Non-secure exception level. We were incorrectly starting it at the exception level of the caller, which happens to be correct if EL2 is not implemented. Implement the correct logic as described in the PSCI 1.0 spec section 6.4: * if EL2 exists and SCR_EL3.HCE is set: start in EL2 * otherwise start in EL1 Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Tested-by: Andrew Jones <drjones@redhat.com> Message-id: 1483977924-14522-17-git-send-email-peter.maydell@linaro.org
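The decision described by PSCI 1.0 section 6.4 boils down to a couple of lines; a sketch (ARM_FEATURE_EL2, SCR_HCE and cp15.scr_el3 are existing target/arm names, the surrounding variables are illustrative):

```c
/* Sketch: pick the highest implemented Non-secure EL for the new CPU. */
if (arm_feature(&target_cpu->env, ARM_FEATURE_EL2)
    && (env->cp15.scr_el3 & SCR_HCE)) {
    target_el = 2;     /* EL2 exists and SCR_EL3.HCE is set */
} else {
    target_el = 1;
}
```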
2017-01-20 | target-arm: Add ARMCPU fields for GIC CPU i/f config | Peter Maydell | 2 files, -0/+11
Add fields to the ARMCPU structure to allow CPU classes to specify the configurable aspects of their GIC CPU interface. In particular, the virtualization support allows different values for number of list registers, priority bits and preemption bits. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Alistair Francis <alistair.francis@xilinx.com> Message-id: 1483977924-14522-6-git-send-email-peter.maydell@linaro.org
2017-01-20 | target-arm: Expose output GPIO line for VCPU maintenance interrupt | Peter Maydell | 2 files, -0/+5
The GICv3 support for virtualization includes an outbound maintenance interrupt signal which is asserted when the CPU interface wants to signal to the hypervisor that it needs attention. Expose this as an outbound GPIO line from the CPU object which can be wired up as a physical interrupt line by the board code (as we do already for the CPU timers). Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> Reviewed-by: Alistair Francis <alistair.francis@xilinx.com> Message-id: 1483977924-14522-4-git-send-email-peter.maydell@linaro.org
2017-01-20 | target/arm: Implement DBGVCR32_EL2 system register | Peter Maydell | 1 file, -0/+7
The DBGVCR_EL2 system register is needed to run a 32-bit EL1 guest under a Linux EL2 64-bit hypervisor. Its only purpose is to provide AArch64 with access to the state of the DBGVCR AArch32 register. Since we only have a dummy DBGVCR, implement a corresponding dummy DBGVCR32_EL2. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-01-20 | target/arm: Handle VIRQ and VFIQ in arm_cpu_do_interrupt_aarch32() | Peter Maydell | 1 file, -0/+14
To run a VM in 32-bit EL1 our AArch32 interrupt handling code needs to be able to cope with VIRQ and VFIQ exceptions. These behave like IRQ and FIQ except that we don't need to try to route them to Monitor mode. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
2017-01-19 | kvm: move cpu synchronization code | Vincent Palatin | 1 file, -1/+1
Move the generic cpu_synchronize_ functions to the common hw_accel.h header, in order to prepare for the addition of a second hardware accelerator. Signed-off-by: Stefan Weil <sw@weilnetz.de> Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Message-Id: <f5c3cffe8d520011df1c2e5437bb814989b48332.1484045952.git.vpalatin@chromium.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-01-16 | Merge remote-tracking branch 'remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1' into staging | Peter Maydell | 3 files, -17/+19
This is the same as the v3 posted except a re-base and a few extra signoffs # gpg: Signature made Fri 13 Jan 2017 14:26:46 GMT # gpg: using RSA key 0xFBD0DB095A9E2A44 # gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>" # Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44 * remotes/stsquad/tags/pull-tcg-common-tlb-reset-20170113-r1: cputlb: drop flush_global flag from tlb_flush cpu_common_reset: wrap TCG specific code in tcg_enabled() qom/cpu: move tlb_flush to cpu_common_reset Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-13 | target/arm: Fix ubfx et al for aarch64 | Richard Henderson | 1 file, -1/+1
The patch in 59a71b4c5b4e suffered from a merge failure when compared to the original patch in http://lists.nongnu.org/archive/html/qemu-devel/2016-12/msg00137.html Signed-off-by: Richard Henderson <rth@twiddle.net>
2017-01-13 | Merge remote-tracking branch 'remotes/ehabkost/tags/x86-and-machine-pull-request' into staging | Peter Maydell | 1 file, -0/+1
x86 and machine queue, 2017-01-17 Includes i386, CPU, NUMA, and memory backends changes. i386: target/i386: Fix bad patch application to translate.c CPU: qmp: Report QOM type name on query-cpu-definitions NUMA: numa: make -numa parser dynamically allocate CPUs masks Memory backends: qom: remove unused header monitor: reuse user_creatable_add_opts() instead of user_creatable_add() monitor: fix qmp/hmp query-memdev not reporting IDs of memory backends # gpg: Signature made Thu 12 Jan 2017 17:53:11 GMT # gpg: using RSA key 0x2807936F984DC5A6 # gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" # Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6 * remotes/ehabkost/tags/x86-and-machine-pull-request: qmp: Report QOM type name on query-cpu-definitions numa: make -numa parser dynamically allocate CPUs masks target/i386: Fix bad patch application to translate.c monitor: fix qmp/hmp query-memdev not reporting IDs of memory backends monitor: reuse user_creatable_add_opts() instead of user_creatable_add() qom: remove unused header Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-01-13 | cputlb: drop flush_global flag from tlb_flush | Alex Bennée | 1 file, -13/+13
We have never had the concept of global TLB entries which would avoid the flush, so we never actually use this flag. Drop it and make clear that tlb_flush is the sledge-hammer it has always been. Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> [DG: ppc portions] Acked-by: David Gibson <david@gibson.dropbear.id.au>