path: root/target
Age | Commit message | Author | Files | Lines (-/+)
2017-10-20 | s390x/tcg: injection of emergency signals and external calls | David Hildenbrand | 4 | -2/+50
Preparation for the new TCG SIGP code, in particular for indicating that another external call is already pending. Take care of interrupt priority.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-4-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 | s390x/tcg: cleanup service interrupt injection | David Hildenbrand | 5 | -38/+10
There are still some leftovers from old virtio interrupts in there. Most importantly, we don't have to queue service interrupts anymore. Just like KVM, we can simply multiplex the SCLP service interrupts and avoid the queue. Also, now only valid parameters/cpu_addr will be stored on service interrupts.

Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-3-david@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-20 | s390x/tcg: turn INTERRUPT_EXT into a mask | David Hildenbrand | 5 | -47/+61
External interrupts are currently all handled like floating external interrupts: they are queued. Let's prepare for a split of floating and local interrupts by turning INTERRUPT_EXT into a mask.

While we can have various floating external interrupts of one kind, there is usually only one (or a fixed number) of each local external interrupt. So turn INTERRUPT_EXT into a mask and properly indicate the kind of external interrupt. Floating interrupts will have to be moved out of the single CPU instance later once we have SMP support.

The only floating external interrupts used right now are SERVICE interrupts, so let's use that name. Following patches will clean up SERVICE interrupt injection. This gets rid of the ugly special handling for cpu timer and clock comparator interrupts, and we now really only store the parameters as defined by the PoP.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170928203708.9376-2-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
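The queue-versus-mask distinction can be made concrete with a small sketch. The flag names, struct, and helpers below are illustrative stand-ins, not QEMU's actual definitions; a minimal sketch, assuming one bit per kind of external interrupt:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical bit assignments: one bit per interrupt kind. */
    #define INTERRUPT_EXT_SERVICE          (1u << 0)  /* floating (SCLP) */
    #define INTERRUPT_EXT_CPU_TIMER        (1u << 1)  /* local */
    #define INTERRUPT_EXT_CLOCK_COMPARATOR (1u << 2)  /* local */
    #define INTERRUPT_EXT_EMERGENCY        (1u << 3)  /* local */

    struct cpu_state {
        uint32_t pending_ext;  /* mask of pending external interrupts */
    };

    /* Setting a bit twice is idempotent: no queue is needed for
     * interrupts that can be pending at most once per CPU. */
    static void set_ext_pending(struct cpu_state *cpu, uint32_t kind)
    {
        cpu->pending_ext |= kind;
    }

    /* Take a pending interrupt of the given kind, clearing its bit. */
    static bool take_ext_interrupt(struct cpu_state *cpu, uint32_t kind)
    {
        if (!(cpu->pending_ext & kind)) {
            return false;
        }
        cpu->pending_ext &= ~kind;
        return true;
    }

    int main(void)
    {
        struct cpu_state cpu = { 0 };
        set_ext_pending(&cpu, INTERRUPT_EXT_SERVICE);
        set_ext_pending(&cpu, INTERRUPT_EXT_SERVICE);  /* idempotent */
        return take_ext_interrupt(&cpu, INTERRUPT_EXT_SERVICE) ? 0 : 1;
    }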
2017-10-20 | S390: use g_new() family of functions | Marc-André Lureau | 2 | -7/+7
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: more changes in hw/s390x/css.c, added target/s390x/cpu_models.c]
Message-Id: <20171006235023.11952-27-f4bug@amsat.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-19 | Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging | Peter Maydell | 3 | -110/+149
* TCG 8-byte atomic accesses bugfix (Andrew)
* Report disk rotation rate (Daniel)
* Report invalid scsi-disk block size configuration (Mark)
* KVM and memory API MemoryListener fixes (David, Maxime, Peter Xu)
* x86 CPU hotplug crash fix (Igor)
* Load/store API documentation (Peter Maydell)
* Small fixes by myself and Thomas
* qdev DEVICE_DELETED deferral (Michael)

# gpg: Signature made Wed 18 Oct 2017 10:56:24 BST
# gpg: using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (29 commits)
  scsi: reject configurations with logical block size > physical block size
  qdev: defer DEVICE_DEL event until instance_finalize()
  Revert "qdev: Free QemuOpts when the QOM path goes away"
  qdev: store DeviceState's canonical path to use when unparenting
  qemu-pr-helper: use new libmultipath API
  watch_mem_write: implement 8-byte accesses
  notdirty_mem_write: implement 8-byte accesses
  memory: reuse section_from_flat_range()
  kvm: simplify kvm_align_section()
  kvm: region_add and region_del is not called on updates
  kvm: fix error message when failing to unregister slot
  kvm: tolerate non-existing slot for log_start/log_stop/log_sync
  kvm: fix alignment of ram address
  memory: call log_start after region_add
  target/i386: trap on instructions longer than >15 bytes
  target/i386: introduce x86_ld*_code
  tco: add trace events
  docs/devel/loads-stores.rst: Document our various load and store APIs
  nios2: define tcg_env
  build: remove CONFIG_LIBDECNUMBER
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-19 | Merge remote-tracking branch 'remotes/riku/tags/pull-linux-user-20171018' into staging | Peter Maydell | 3 | -3/+15
Linux-user updates for Qemu 2.11

# gpg: Signature made Wed 18 Oct 2017 13:20:14 BST
# gpg: using RSA key 0xB44890DEDE3C9BC0
# gpg: Good signature from "Riku Voipio <riku.voipio@iki.fi>"
# gpg: aka "Riku Voipio <riku.voipio@linaro.org>"
# Primary key fingerprint: FF82 03C8 C391 98AE 0581 41EF B448 90DE DE3C 9BC0

* remotes/riku/tags/pull-linux-user-20171018:
  linux-user: Fix TARGET_MTIOCTOP/MTIOCGET/MTIOCPOS values
  linux-user/main: support dfilter
  linux-user: Fix target FS_IOC_GETFLAGS and FS_IOC_SETFLAGS numbers
  linux-user/sh4: Reduce TARGET_VIRT_ADDR_SPACE_BITS to 31
  linux-user: Tidy and enforce reserved_va initialization
  tcg: Fix off-by-one in assert in page_set_flags
  linux-user: Allow -R values up to 0xffff0000 for 32-bit ARM guests
  linux-user: remove duplicate break in syscall
  target/m68k,linux-user: manage FP registers in ucontext
  linux-user: fix O_TMPFILE handling

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-17 | ppc: spapr: use generic cpu_model parsing | Igor Mammedov | 3 | -5/+9
Use the generic cpu_model parsing introduced by commit 6063d4c0f ("vl.c: convert cpu_model to cpu type and set of global properties before machine_init()"). It allows us to:
* replace sPAPRMachineClass::tcg_default_cpu with MachineClass::default_cpu_type
* drop cpu_parse_cpu_model() from hw/ppc/spapr.c and reuse the one in vl.c
* simplify spapr_get_cpu_core_type() by removing the no-longer-needed recursion, since alias lookup happens earlier in vl.c and spapr_get_cpu_core_type() only works with the cpu type resulting from that
* spapr no longer needs to parse, or depend on, the being-phased-out MachineState::cpu_model; all parsing is done by generic code and the target-specific callback

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
[dwg: Correct minor compile error]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 | ppc: move ppc_cpu_lookup_alias() before its first user | Igor Mammedov | 1 | -13/+13
The next commit will drop the ppc_cpu_lookup_alias() declaration from the header and make it static, which would break its last user, ppc_cpu_class_by_name(), since ppc_cpu_class_by_name() is defined before ppc_cpu_lookup_alias(). To avoid this, move ppc_cpu_lookup_alias() right before ppc_cpu_class_by_name().

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 | ppc: spapr: register 'host' core type along with the rest of core types | Igor Mammedov | 1 | -11/+0
Consolidate 'host' core type registration by moving it from KVM-specific code into spapr_cpu_core.c, similar to how it's done in the x86 target.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 | ppc: spapr: use cpu type name directly | Igor Mammedov | 1 | -1/+1
Replace sPAPRCPUCoreClass::cpu_class with the cpu type name, since the class was only being used to retrieve that name at the points where it was accessed.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 | ppc: move '-cpu foo,compat=xxx' parsing into ppc_cpu_parse_featurestr() | Igor Mammedov | 2 | -0/+58
There is a dedicated callback, CPUClass::parse_features, whose purpose is to convert -cpu features into a set of global properties AND to deal with compat/legacy features that can't be directly translated into CPU properties. Create a ppc variant of it (ppc_cpu_parse_featurestr) and move the 'compat=val' handling from spapr_cpu_core.c into it. That removes the dependency of board/core code on cpu_model parsing and lets us reuse the common -cpu parsing introduced by 6063d4c0.

Set the "max-cpu-compat" property only if it exists; in practice this should limit the 'compat' hack to the spapr machine and avoid including machine/spapr headers in target/ppc/cpu.c.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 | target/ppc: Fix carry flag setting for shift algebraic instructions | Sandipan Das | 2 | -8/+20
For POWER ISA v3.0, the XER bit CA32 needs to be set by the shift right algebraic instructions whenever the CA bit is to be set. This change affects the following instructions:
* Shift Right Algebraic Word (sraw[.])
* Shift Right Algebraic Word Immediate (srawi[.])
* Shift Right Algebraic Doubleword (srad[.])
* Shift Right Algebraic Doubleword Immediate (sradi[.])

Signed-off-by: Sandipan Das <sandipan@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
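For context: these shifts set CA when the source is negative and any 1 bits are shifted out, and per ISA v3.0 CA32 mirrors that condition. A self-contained sketch of the word-immediate case (this is not QEMU's TCG implementation, just the semantics):

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* srawi sketch: CA is set iff the source is negative and a 1 bit
     * was shifted out; for ISA v3.0, CA32 tracks CA for the 32-bit op. */
    static int32_t srawi(int32_t rs, unsigned sh, bool *ca, bool *ca32)
    {
        uint32_t out = (sh == 0) ? 0 : ((uint32_t)rs & ((1u << sh) - 1));
        *ca = (rs < 0) && (out != 0);
        *ca32 = *ca;            /* the fix described above */
        return rs >> sh;        /* arithmetic shift on negative values */
    }

    int main(void)
    {
        bool ca, ca32;
        int32_t r = srawi(-5, 1, &ca, &ca32);
        printf("result=%d CA=%d CA32=%d\n", r, ca, ca32);  /* -3 1 1 */
        return 0;
    }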
2017-10-17 | target/ppc: Add POWER9 DD2.0 model information | David Gibson | 2 | -2/+5
At the moment the only POWER9 model which is listed in qemu is v1.0 (aka "DD1"). This is a very early (read, buggy) version which will never be released to the public - it was included in qemu only for the convenience of those doing bringup on the early silicon.

For bonus points, we actually had its PVR incorrect in the table (0x004e0000 instead of 0x004e0100). We also never actually implemented the differences in behaviour (read, bugs) that marked DD1 in qemu.

Now that we know the PVR for the substantially better v2.0 (DD2) chip, include it and make it the default POWER9 in qemu. For the time being we leave the DD1 definition in place for the poor souls (read, me) who still need to work with DD1 hardware.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-17 | target/ppc: Remove unused PPC 460 and 460F definitions | Thomas Huth | 1 | -217/+0
We don't have any 460 or 460F CPUs in QEMU, so the init functions are just dead code. Let's simply remove them (translate_init.c is already big enough without them).

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2017-10-16 | target/i386: trap on instructions longer than >15 bytes | Paolo Bonzini | 1 | -7/+22
Besides being more correct, arbitrarily long instructions allow the generation of a translation block that spans three pages. This confuses the generator and even allows ring 3 code to poison the translation block cache and inject code into other processes that are in guest ring 3.

This is an improved (and more invasive) fix for commit 30663fd ("tcg/i386: Check the size of instruction being translated", 2017-03-24). In addition to being more precise (and generating the right exception, which is #GP rather than #UD), it distinguishes better between page faults and too-long instructions, as shown by this test case:

    #include <sys/mman.h>
    #include <string.h>
    #include <stdio.h>

    int main()
    {
        char *x = mmap(NULL, 8192, PROT_READ|PROT_WRITE|PROT_EXEC,
                       MAP_PRIVATE|MAP_ANON, -1, 0);
        memset(x, 0x66, 4096);
        x[4096] = 0x90;
        x[4097] = 0xc3;
        char *i = x + 4096 - 15;
        mprotect(x + 4096, 4096, PROT_READ|PROT_WRITE);
        ((void(*)(void)) i) ();
    }

... which produces a #GP without the mprotect, and a #PF with it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 | target/i386: introduce x86_ld*_code | Paolo Bonzini | 1 | -103/+125
These take care of advancing s->pc, and will provide a unified point at which to check for the 15-byte instruction length limit.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
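A minimal sketch of the shape of such a helper (an assumption based on the description above, not the exact QEMU code; the limit-check placement and any names beyond cpu_ldub_code are illustrative):

    /* Sketch only: fetch one code byte at s->pc and advance it, so the
     * 15-byte instruction length limit has a single enforcement point. */
    static uint8_t x86_ldub_code(CPUX86State *env, DisasContext *s)
    {
        uint8_t b = cpu_ldub_code(env, s->pc);
        s->pc += 1;
        /* e.g. here: if the insn has grown past 15 bytes, raise #GP */
        return b;
    }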
2017-10-16 | nios2: define tcg_env | Paolo Bonzini | 1 | -0/+1
This should be done by all targets and, since commit 53f6672bcf ("gen-icount: use tcg_ctx.tcg_env instead of cpu_env", 2017-06-30), not doing so is causing the NIOS2 target to hang. This is because the test for "should I exit to the main loop" was being done with the correct offset to the icount decrementer, but using TCG temporary 0 (the frame pointer) rather than the env pointer.

Cc: qemu-stable@nongnu.org
Cc: Marek Vasut <marex@denx.de>
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 | build: remove CONFIG_LIBDECNUMBER | Paolo Bonzini | 1 | -0/+1
It is used by all PPC targets; we can give the directory its own Makefile.objs file, and include it directly from target/ppc. target/s390 can do the same when it starts using it.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-10-16 | linux-user/sh4: Reduce TARGET_VIRT_ADDR_SPACE_BITS to 31 | Richard Henderson | 1 | -1/+5
The real kernel has TASK_SIZE as 0x7c000000, due to quirks with a couple of SH parts. But nominally user-space is limited to 2GB.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170708025030.15845-4-rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2017-10-16 | linux-user: Tidy and enforce reserved_va initialization | Richard Henderson | 2 | -2/+10
We had a check using TARGET_VIRT_ADDR_SPACE_BITS to make sure that the allocation coming in from the command-line option was not too large, but that didn't include target-specific knowledge about other restrictions on user-space.

Remove several target-specific hacks in linux-user/main.c. For MIPS and Nios, we can replace them with proper adjustments to the respective target's TARGET_VIRT_ADDR_SPACE_BITS definition. For ARM, we had no existing ifdef but I suspect that the current default value of 0xf7000000 was chosen with this in mind. Define a workable value in linux-user/arm/, and also document why the special case is required.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20170708025030.15845-3-rth@twiddle.net>
Signed-off-by: Riku Voipio <riku.voipio@linaro.org>
2017-10-12 | target/arm: Implement SG instruction corner cases | Peter Maydell | 1 | -1/+22
The common situation of the SG instruction is that it is executed from S&NSC memory by a CPU in NS state. That case is handled by v7m_handle_execute_nsc(). However the instruction also has defined behaviour in a couple of other cases:
* SG instruction in NS memory (behaves as a NOP)
* SG in S memory but CPU already secure (clears IT bits and does nothing else)
* SG instruction in v8M without Security Extension (NOP)

These can be implemented in translate.c.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-10-git-send-email-peter.maydell@linaro.org
2017-10-12 | target/arm: Support some Thumb insns being always unconditional | Peter Maydell | 1 | -1/+47
A few Thumb instructions are always unconditional even inside an IT block (as opposed to being UNPREDICTABLE if used inside an IT block): BKPT, the v8M SG instruction, and the A profile HLT (debug halt) instruction.

This means we need to suppress the jump-over-instruction-on-condfail code generation (though the IT state still advances as usual and subsequent insns in the IT block may be conditional).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-9-git-send-email-peter.maydell@linaro.org
2017-10-12 | target-arm: Simplify insn_crosses_page() | Peter Maydell | 1 | -21/+6
Recent changes have left insn_crosses_page() more complicated than it needed to be:
* it's only called from thumb_tr_translate_insn() so we know for certain that we're looking at a Thumb insn
* the caller's check for dc->pc >= dc->next_page_start - 3 means that dc->pc can't possibly be 4 aligned, so there's no need to check that (the check was partly there to ensure that we didn't treat an ARM insn as Thumb, I think)
* we now have thumb_insn_is_16bit() which lets us do a precise check of the length of the next insn, rather than opencoding an inaccurate check

Simplify it down to just loading the first half of the insn and calling thumb_insn_is_16bit() on it, as sketched below.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-8-git-send-email-peter.maydell@linaro.org
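A sketch of the simplified shape described above (inferred from the description rather than copied from the commit; the exact signatures are assumptions):

    /* Sketch: a Thumb insn starting in the last bytes of the page
     * crosses into the next page exactly when it is a 32-bit insn. */
    static bool insn_crosses_page(CPUARMState *env, DisasContext *s)
    {
        /* Only called when dc->pc >= dc->next_page_start - 3, i.e.
         * the insn starts fewer than 4 bytes before the boundary. */
        uint16_t insn = arm_lduw_code(env, s->pc, s->sctlr_b);
        return !thumb_insn_is_16bit(s, insn);
    }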
2017-10-12 | target/arm: Pull Thumb insn word loads up to top level | Peter Maydell | 1 | -70/+108
Refactor the Thumb decode to do the loads of the instruction words at the top level rather than only loading the second half of a 32-bit Thumb insn in the middle of the decode.

This is simple apart from the awkward case of Thumb1, where the BL/BLX prefix and suffix instructions live in what in Thumb2 is the 32-bit insn space. To handle these we decode enough to identify whether we're looking at a prefix/suffix that we handle as a 16 bit insn, or a prefix that we're going to merge with the following suffix to consider as a 32 bit insn. The translation of the 16 bit cases then moves from disas_thumb2_insn() to disas_thumb_insn().

The refactoring has the benefit that we don't need to pass the CPUARMState* down into the decoder code any more, but the major reason for doing this is that some Thumb instructions must be always unconditional regardless of the IT state bits, so we need to know the whole insn before we emit the "skip this insn if the IT bits and cond state tell us to" code. (The always unconditional insns are BKPT, HLT and SG; the last of these is 32 bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-7-git-send-email-peter.maydell@linaro.org
2017-10-12 | target-arm: Don't check for "Thumb2 or M profile" for not-Thumb1 | Peter Maydell | 1 | -2/+1
The code which implements the Thumb1 split BL/BLX instructions is guarded by a check on "not M or THUMB2". All we really need to check here is "not THUMB2" (and we assume that elsewhere too, eg in the ARCH(6T2) test that UNDEFs the Thumb2 insns).

This doesn't change behaviour because all M profile cores have Thumb2 and so ARM_FEATURE_M implies ARM_FEATURE_THUMB2. (v6M implements a very restricted subset of Thumb2, but we can cross that bridge when we get to it with appropriate feature bits.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-6-git-send-email-peter.maydell@linaro.org
2017-10-12 | target/arm: Implement secure function return | Peter Maydell | 3 | -10/+126
Secure function return happens when a non-secure function has been called using BLXNS and so has a particular magic LR value (either 0xfefffffe or 0xfeffffff). The function return via BX behaves specially when the new PC value is this magic value, in the same way that exception returns are handled.

Adjust our BX excret guards so that they recognize the function return magic number as well, and perform the function-return unstacking in do_v7m_exception_exit().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-5-git-send-email-peter.maydell@linaro.org
2017-10-12 | target/arm: Implement BLXNS | Peter Maydell | 4 | -2/+76
Implement the BLXNS instruction, which allows secure code to call non-secure code.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-4-git-send-email-peter.maydell@linaro.org
2017-10-12 | target/arm: Implement SG instruction | Peter Maydell | 1 | -5/+127
Implement the SG instruction, which we emulate 'by hand' in the exception handling code path.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-3-git-send-email-peter.maydell@linaro.org
2017-10-12 | target/arm: Add M profile secure MMU index values to get_a32_user_mem_index() | Peter Maydell | 1 | -0/+4
Add the M profile secure MMU index values to the switch in get_a32_user_mem_index() so that LDRT/STRT work correctly rather than asserting at translate time.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1507556919-24992-2-git-send-email-peter.maydell@linaro.org
2017-10-10 | tcg: remove addr argument from lookup_tb_ptr | Emilio G. Cota | 8 | -27/+17
It is unlikely that we will ever want to call this helper passing an argument other than the current PC. So just remove the argument, and use the pc we already get from cpu_get_tb_cpu_state.

This change paves the way to having a common "tb_lookup" function.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
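A sketch of the resulting helper shape. Everything here except cpu_get_tb_cpu_state() is a hypothetical stand-in (tb_lookup_by_state() and epilogue_ptr are not real QEMU symbols); the point is only the signature change, i.e. no addr argument:

    /* Sketch: the helper derives the pc itself instead of taking it. */
    void *helper_lookup_tb_ptr(CPUArchState *env)
    {
        target_ulong pc, cs_base;
        uint32_t flags;

        cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
        /* tb_lookup_by_state(): hypothetical TB cache lookup. */
        TranslationBlock *tb = tb_lookup_by_state(env, pc, cs_base, flags);
        /* epilogue_ptr: hypothetical "no TB found" exit path. */
        return tb ? tb->tc_ptr : epilogue_ptr;
    }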
2017-10-09 | x86: Correct translation of some rdgsbase and wrgsbase encodings | Todd Eisenberger | 1 | -2/+2
It looks like there was a transcription error when writing this code initially. The code previously only decoded src or dst of rax.

This resolves https://bugs.launchpad.net/qemu/+bug/1719984.

Signed-off-by: Todd Eisenberger <teisenbe@google.com>
Message-Id: <CAP26EVRNVb=Mq=O3s51w7fDhGVmf-e3XFFA73MRzc5b4qKBA4g@mail.gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
2017-10-09 | qom/cpu: move cpu_model null check to cpu_class_by_name() | Philippe Mathieu-Daudé | 13 | -55/+2
and clean every implementation.

Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170917232842.14544-1-f4bug@amsat.org>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
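A sketch of the centralized check (an assumed shape, not necessarily qom/cpu.c's exact code): with the NULL check in the generic cpu_class_by_name(), each target's class_by_name hook can drop its own copy.

    /* Sketch: return NULL early for a NULL model name, so per-target
     * class_by_name implementations no longer need their own checks. */
    CPUClass *cpu_class_by_name(const char *typename, const char *cpu_model)
    {
        if (cpu_model == NULL) {
            return NULL;
        }
        CPUClass *cc = CPU_CLASS(object_class_by_name(typename));
        return cc ? cc->class_by_name(cpu_model) : NULL;
    }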
2017-10-06 | target/arm: Factor out "get mmuidx for specified security state" | Peter Maydell | 1 | -11/+21
For the SG instruction and secure function return we are going to want to do memory accesses using the MMU index of the CPU in secure state, even though the CPU is currently in non-secure state. Write arm_v7m_mmu_idx_for_secstate() to do this job, and use it in cpu_mmu_index().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-17-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Fix calculation of secure mm_idx values | Peter Maydell | 1 | -5/+7
In cpu_mmu_index() we try to do this:

    if (env->v7m.secure) {
        mmu_idx += ARMMMUIdx_MSUser;
    }

but it will give the wrong answer, because ARMMMUIdx_MSUser includes the 0x40 ARM_MMU_IDX_M field, and so does the mmu_idx we're adding to, and we'll end up with 0x8n rather than 0x4n.

This error is then nullified by the call to arm_to_core_mmu_idx() which masks out the high part, but we're about to factor out the code that calculates the ARMMMUIdx values so it can be used without passing it through arm_to_core_mmu_idx(), so fix this bug first.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-16-git-send-email-peter.maydell@linaro.org
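The arithmetic is easy to see with concrete numbers. The low nibbles below are illustrative (only the 0x40 flag matters); this is a standalone demonstration, not QEMU's enum values:

    #include <stdio.h>

    #define ARM_MMU_IDX_M 0x40  /* M-profile marker bit */

    int main(void)
    {
        int mmu_idx = ARM_MMU_IDX_M | 0x0;  /* current index, flag set */
        int ms_user = ARM_MMU_IDX_M | 0x4;  /* stand-in for ARMMMUIdx_MSUser */

        /* Adding two values that both carry 0x40 doubles the flag: */
        printf("wrong: 0x%x\n", mmu_idx + ms_user);                      /* 0x84 */
        /* Adding only the variant bits keeps the flag intact: */
        printf("right: 0x%x\n", mmu_idx + (ms_user & ~ARM_MMU_IDX_M));   /* 0x44 */
        return 0;
    }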
2017-10-06 | target/arm: Implement security attribute lookups for memory accesses | Peter Maydell | 2 | -2/+195
Implement the security attribute lookups for memory accesses in the get_phys_addr() functions, causing these to generate various kinds of SecureFault for bad accesses.

The major subtlety in this code relates to handling of the case when the security attributes the SAU assigns to the address don't match the current security state of the CPU. In the ARM ARM pseudocode for validating instruction accesses, the security attributes of the address determine whether the Secure or NonSecure MPU state is used. At face value, handling this would require us to encode the relevant bits of state into mmu_idx for both S and NS at once, which would result in our needing 16 mmu indexes. Fortunately we don't actually need to do this because a mismatch between address attributes and CPU state means either:
* some kind of fault (usually a SecureFault, but in theory perhaps a UserFault for unaligned access to Device memory)
* execution of the SG instruction in NS state from a Secure & NonSecure code region

The purpose of SG is simply to flip the CPU into Secure state, so we can handle it by emulating execution of that instruction directly in arm_v7m_cpu_do_interrupt(), which means we can treat all the mismatch cases as "throw an exception" and we don't need to encode the state of the other MPU bank into our mmu_idx values.

This commit doesn't include the actual emulation of SG; it also doesn't include implementation of the IDAU, which is a per-board way to specify hard-coded memory attributes for addresses, which override the CPU-internal SAU if they specify a more secure setting than the SAU is programmed to.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-15-git-send-email-peter.maydell@linaro.org
2017-10-06 | nvic: Implement Security Attribution Unit registers | Peter Maydell | 3 | -0/+51
Implement the register interface for the SAU: SAU_CTRL, SAU_TYPE, SAU_RNR, SAU_RBAR and SAU_RLAR. None of the actual behaviour is implemented here; registers just read back as written.

When the CPU definition for Cortex-M33 is eventually added, its initfn will set cpu->sau_sregion, in the same way that we currently set cpu->pmsav7_dregion for the M3 and M4.

Number of SAU regions is typically a configurable CPU parameter, but this patch doesn't provide a QEMU CPU property for it. We can easily add one when we have a board that requires it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-14-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Add v8M support to exception entry code | Peter Maydell | 1 | -20/+145
Add support for v8M and in particular the security extension to the exception entry code. This requires changes to:
* calculation of the exception-return magic LR value
* push the callee-saves registers in certain cases
* clear registers when taking non-secure exceptions to avoid leaking information from the interrupted secure code
* switch to the correct security state on entry
* use the vector table for the security state we're targeting

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-13-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Add support for restoring v8M additional state context | Peter Maydell | 1 | -0/+30
For v8M, exceptions from Secure to Non-Secure state will save callee-saved registers to the exception frame as well as the caller-saved registers. Add support for unstacking these registers in exception exit when necessary.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-12-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Update excret sanity checks for v8M | Peter Maydell | 1 | -15/+58
In v8M, more bits are defined in the exception-return magic values; update the code that checks these so we accept the v8M values when the CPU permits them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-11-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Add new-in-v8M SFSR and SFAR | Peter Maydell | 2 | -0/+14
Add the new M profile Secure Fault Status Register and Secure Fault Address Register.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-10-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Don't warn about exception return with PC low bit set for v8M | Peter Maydell | 1 | -7/+15
In the v8M architecture, return from an exception to a PC which has bit 0 set is not UNPREDICTABLE; it is defined that bit 0 is discarded [R_HRJH]. Restrict our complaint about this to v7M.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-9-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Warn about restoring to unaligned stack | Peter Maydell | 1 | -0/+7
Attempting to do an exception return with an exception frame that is not 8-aligned is UNPREDICTABLE in v8M; warn about this. (It is not UNPREDICTABLE in v7M, and our implementation can handle the merely-4-aligned case fine, so we don't need to do anything except warn.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-8-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Check for xPSR mismatch usage faults earlier for v8M | Peter Maydell | 1 | -3/+27
ARM v8M specifies that the INVPC usage fault for mismatched xPSR exception field and handler mode bit should be checked before updating the PSR and SP, so that the fault is taken with the existing stack frame rather than by pushing a new one. Perform this check in the right place for v8M.

Since v7M specifies in its pseudocode that this usage fault check should happen later, we have to retain the original code for that check rather than being able to merge the two. (The distinction is architecturally visible but only in very obscure corner cases like attempting an invalid exception return with an exception frame in read only memory.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-7-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Restore SPSEL to correct CONTROL register on exception return | Peter Maydell | 1 | -13/+27
On exception return for v8M, the SPSEL bit in the EXC_RETURN magic value should be restored to the SPSEL bit in the CONTROL register bank specified by the EXC_RETURN.ES bit.

Add write_v7m_control_spsel_for_secstate(), which behaves like write_v7m_control_spsel() but allows the caller to specify which CONTROL bank to use, reimplement write_v7m_control_spsel() in terms of it, and use it in exception return.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-6-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Restore security state on exception return | Peter Maydell | 1 | -0/+2
Now that we can handle the CONTROL.SPSEL bit not necessarily being in sync with the current stack pointer, we can restore the correct security state on exception return. This happens before we start to read registers off the stack frame, but after we have taken possible usage faults for bad exception return magic values and updated CONTROL.SPSEL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-5-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Prepare for CONTROL.SPSEL being nonzero in Handler mode | Peter Maydell | 2 | -23/+50
In the v7M architecture, there is an invariant that if the CPU is in Handler mode then the CONTROL.SPSEL bit cannot be nonzero. This in turn means that the current stack pointer is always indicated by CONTROL.SPSEL, even though Handler mode always uses the Main stack pointer.

In v8M, this invariant is removed, and CONTROL.SPSEL may now be nonzero in Handler mode (though Handler mode still always uses the Main stack pointer). In preparation for this change, change how we handle this bit: rename switch_v7m_sp() to the now more accurate write_v7m_control_spsel(), and make it check both the handler mode state and the SPSEL bit.

Note that this implicitly changes the point at which we switch active SP on exception exit from before we pop the exception frame to after it.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-4-git-send-email-peter.maydell@linaro.org
2017-10-06 | target/arm: Don't switch to target stack early in v7M exception return | Peter Maydell | 1 | -32/+98
Currently our M profile exception return code switches to the target stack pointer relatively early in the process, before it tries to pop the exception frame off the stack. This is awkward for v8M for two reasons:
* in v8M the process vs main stack pointer is not selected purely by the value of CONTROL.SPSEL, so updating SPSEL and relying on that to switch to the right stack pointer won't work
* the stack we should be reading the stack frame from and the stack we will eventually switch to might not be the same if the guest is doing strange things

Change our exception return code to use a 'frame pointer' to read the exception frame rather than assuming that we can switch the live stack pointer this early.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1506092407-26985-3-git-send-email-peter.maydell@linaro.org
2017-10-06 | arm: Fix SMC reporting to EL2 when QEMU provides PSCI | Jan Kiszka | 2 | -11/+25
This properly forwards SMC events to EL2 when PSCI is provided by QEMU itself and, thus, ARM_FEATURE_EL3 is off.

Found and tested with the Jailhouse hypervisor. Solution based on suggestions by Peter Maydell.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 4f243068-aaea-776f-d18f-f9e05e7be9cd@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2017-10-06 | s390x/tcg: initialize machine check queue | Cornelia Huck | 1 | -0/+2
Just as for external interrupts and I/O interrupts, we need to initialize mchk_index during cpu reset.

Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2017-10-06 | s390/kvm: Support for get/set of extended TOD-Clock for guest | Collin L. Walling | 4 | -8/+63
Provides an interface for getting and setting the guest's extended TOD-Clock via a single ioctl to kvm. If the ioctl fails because it is not supported by kvm, then we fall back to the old style of retrieving the clock via two ioctls.

Signed-off-by: Collin L. Walling <walling@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
[split failure change from epoch index change]
Message-Id: <20171004105751.24655-2-borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
[some cosmetic fixes]
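The fallback pattern described above, as a hedged sketch. The function names follow the description; their exact signatures and the -ENXIO convention for "ioctl not supported" are assumptions, not the verified QEMU code:

    /* Sketch: try the single-ioctl extended-TOD path first; if the
     * kernel does not support it, fall back to the two-ioctl path. */
    static int get_guest_tod(uint8_t *tod_high, uint64_t *tod_low)
    {
        int r = kvm_s390_get_clock_ext(tod_high, tod_low); /* one ioctl */
        if (r == -ENXIO) {
            /* Older kernel: read the clock via two separate ioctls. */
            r = kvm_s390_get_clock(tod_high, tod_low);
        }
        return r;
    }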