path: root/exec.c
Age  Commit message  (Author; files changed, lines -/+)
2012-03-05  memory: fix I/O port aliases  (Avi Kivity; 1 file, -2/+6)
Commit e58ac72b6a0 ("ioport: change portio_list not to use memory_region_set_offset()") started using aliases of I/O memory regions. Since the IORange used for the I/O was contained in the target region, the alias information (specifically, the offset into the region) was lost. This broke -vga std. Fix by allocating an independent object to hold the IORange and also the new offset. Note that I/O memory regions were conceptually broken wrt aliases in a different way: an alias can cause the same region to appear twice in an address space, but we had just one IORange to service it. This patch fixes that problem as well, since we can now have multiple IORange/MemoryRegion associations. Signed-off-by: Avi Kivity <avi@redhat.com>
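The shape of the fix, loosely: rather than reusing the single IORange embedded in the shared target region (which can only remember one offset), each registration gets its own small object pairing the region with the offset of that particular alias. A minimal sketch of the idea in C, with purely hypothetical types standing in for QEMU's IORange/MemoryRegion:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-ins; not the actual QEMU structures. */
    typedef struct MemoryRegion { const char *name; } MemoryRegion;

    typedef struct AliasedIORange {
        MemoryRegion *mr;       /* the target region shared by all aliases  */
        uint64_t offset;        /* this alias's own offset into that region */
    } AliasedIORange;

    /* One independent object per registration, so two aliases of the same
     * region no longer fight over a single offset field. */
    static AliasedIORange *ioport_alias_register(MemoryRegion *mr, uint64_t off)
    {
        AliasedIORange *r = malloc(sizeof(*r));
        r->mr = mr;
        r->offset = off;
        return r;
    }

    int main(void)
    {
        MemoryRegion vga = { "vga-ioports" };
        AliasedIORange *a = ioport_alias_register(&vga, 0x00);
        AliasedIORange *b = ioport_alias_register(&vga, 0x10);
        printf("%s aliased at offsets 0x%llx and 0x%llx\n", vga.name,
               (unsigned long long)a->offset, (unsigned long long)b->offset);
        free(a);
        free(b);
        return 0;
    }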
2012-03-03  Merge branch 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa  (Blue Swirl; 1 file, -5/+13)
* 'xtensa' of git://jcmvbkbc.spb.ru/dumb/qemu-xtensa:
  target-xtensa: add breakpoint tests
  target-xtensa: add DEBUG_SECTION to overlay tool
  target-xtensa: add DBREAK data breakpoints
  exec: let cpu_watchpoint_insert accept larger watchpoints
  exec: fix check_watchpoint exiting cpu_loop
  exec: add missing breaks to the watch_mem_write
  target-xtensa: add ICOUNT SR and debug exception
  target-xtensa: implement instruction breakpoints
  target-xtensa: add DEBUGCAUSE SR and configuration
  target-xtensa: fetch 3rd opcode byte only when needed
  target-xtensa: implement info tlb monitor command
  target-xtensa: define TLB_TEMPLATE for MMU-less cores
2012-02-29  memory: allow phys_map tree paths to terminate early  (Avi Kivity; 1 file, -11/+17)
When storing large contiguous ranges in phys_map, all values tend to be the same pointers to a single MemoryRegionSection. Collapse them by marking nodes with level > 0 as leaves. This reduces tree memory usage dramatically. Signed-off-by: Avi Kivity <avi@redhat.com>
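In tree terms: when every page under a subtree maps to the same section, the subtree's root entry is itself marked as a leaf and the walk stops there. A self-contained sketch of such a lookup, with hypothetical types and sizes (not the actual phys_map code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define BITS_PER_LEVEL 2          /* tiny 4-way tree for the example */
    #define FANOUT (1 << BITS_PER_LEVEL)
    #define LEVELS 3                  /* root covers 4^3 = 64 pages */

    /* Hypothetical entry: a leaf carries the section index for the whole
     * range it covers; an interior entry points at FANOUT children. */
    typedef struct Entry Entry;
    struct Entry {
        bool is_leaf;
        uint16_t section;             /* valid when is_leaf             */
        Entry *children;              /* FANOUT entries, when !is_leaf  */
    };

    /* Walk down from the root; an entry above level 0 that is marked as a
     * leaf ends the search early, so a large uniform range costs one entry. */
    static uint16_t page_find(const Entry *e, int level, uint64_t page)
    {
        while (!e->is_leaf && e->children) {
            level--;
            e = &e->children[(page >> (level * BITS_PER_LEVEL)) & (FANOUT - 1)];
        }
        return e->is_leaf ? e->section : 0;   /* 0: "unassigned" here */
    }

    int main(void)
    {
        /* Four collapsed level-2 leaves describe all 64 pages as section 7,
         * instead of a full three-level tree of per-page entries. */
        Entry l1[FANOUT] = {
            { .is_leaf = true, .section = 7 },
            { .is_leaf = true, .section = 7 },
            { .is_leaf = true, .section = 7 },
            { .is_leaf = true, .section = 7 },
        };
        Entry root = { .is_leaf = false, .children = l1 };
        printf("page 0x2a -> section %d\n", (int)page_find(&root, LEVELS, 0x2a));
        return 0;
    }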
2012-02-29  memory: unify PhysPageEntry::node and ::leaf  (Avi Kivity; 1 file, -20/+18)
They have the same type, unify them. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: change phys_page_set() to set multiple pages  (Avi Kivity; 1 file, -18/+23)
Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: switch phys_page_set() to a recursive implementation  (Avi Kivity; 1 file, -26/+41)
Setting multiple pages at once requires backtracking to previous nodes; easiest to achieve via recursion. Signed-off-by: Avi Kivity <avi@redhat.com>
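A rough sketch of what such a recursive setter looks like over the same kind of hypothetical tree as in the lookup sketch above: each call either collapses a fully covered, aligned subtree into a single leaf or descends, and the recursion returning is the backtracking to the parent when the range spills over a node boundary. Types, sizes, and the lack of handling for splitting an existing collapsed leaf are simplifications, not the real phys_page_set():

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BITS_PER_LEVEL 2
    #define FANOUT (1 << BITS_PER_LEVEL)
    #define LEVELS 3                      /* root covers 4^3 = 64 pages */

    typedef struct Entry Entry;
    struct Entry {
        bool is_leaf;
        uint16_t section;                 /* valid when is_leaf            */
        Entry *children;                  /* FANOUT entries, when !is_leaf */
    };

    /* Mark pages [*index, *index + *nb) as 'section'.  An entry at 'level'
     * covers 1 << (level * BITS_PER_LEVEL) pages. */
    static void page_set(Entry *e, int level, uint64_t *index, uint64_t *nb,
                         uint16_t section)
    {
        uint64_t step = 1ULL << (level * BITS_PER_LEVEL);

        /* Whole aligned subtree covered: collapse it into one leaf. */
        if ((*index & (step - 1)) == 0 && *nb >= step) {
            e->is_leaf = true;
            e->section = section;
            *index += step;
            *nb -= step;
            return;
        }

        if (e->is_leaf || !e->children) {
            free(e->children);            /* simplification: old contents of a
                                           * split leaf are not preserved */
            e->children = calloc(FANOUT, sizeof(Entry));
            e->is_leaf = false;
        }

        Entry *p = &e->children[(*index >> ((level - 1) * BITS_PER_LEVEL)) &
                                (FANOUT - 1)];
        while (*nb && p < e->children + FANOUT) {
            page_set(p, level - 1, index, nb, section);
            p++;                          /* recursion returns here: backtrack
                                           * to the parent and move on */
        }
    }

    int main(void)
    {
        Entry root = { 0 };
        uint64_t index = 5, nb = 10;      /* pages 5..14 */
        page_set(&root, LEVELS, &index, &nb, 3);
        printf("remaining pages to set: %llu\n", (unsigned long long)nb);
        return 0;
    }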
2012-02-29  memory: replace phys_page_find_alloc() with phys_page_set()  (Avi Kivity; 1 file, -11/+4)
By giving the function the value we want to set, we make it more flexible for the next patch. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: simplify multipage/subpage registration  (Avi Kivity; 1 file, -55/+65)
Instead of considering subpage on a per-page basis, split each section into a subpage head, multipage body, and subpage tail, and register each separately. This simplifies the registration functions. Signed-off-by: Avi Kivity <avi@redhat.com>
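The split itself is plain page arithmetic: whatever precedes the first page boundary is registered as a subpage head, the page-aligned middle as ordinary full pages, and any partial page at the end as a subpage tail. A small illustration of that arithmetic (hypothetical helper, assumed 4 KiB pages; not the QEMU registration code):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL
    #define PAGE_MASK (PAGE_SIZE - 1)

    /* Print how a [start, start + size) section would be registered:
     * an unaligned head, zero or more full pages, and an unaligned tail. */
    static void split_section(uint64_t start, uint64_t size)
    {
        uint64_t end = start + size;

        if (start & PAGE_MASK) {                     /* subpage head */
            uint64_t head = PAGE_SIZE - (start & PAGE_MASK);
            if (head > size) {
                head = size;
            }
            printf("head:  0x%llx + 0x%llx (subpage)\n",
                   (unsigned long long)start, (unsigned long long)head);
            start += head;
        }
        if (end - start >= PAGE_SIZE) {              /* multipage body */
            uint64_t body = (end - start) & ~PAGE_MASK;
            printf("body:  0x%llx + 0x%llx (full pages)\n",
                   (unsigned long long)start, (unsigned long long)body);
            start += body;
        }
        if (start < end) {                           /* subpage tail */
            printf("tail:  0x%llx + 0x%llx (subpage)\n",
                   (unsigned long long)start, (unsigned long long)(end - start));
        }
    }

    int main(void)
    {
        split_section(0x1f00, 0x3200);   /* head 0x100, body 0x3000, tail 0x100 */
        return 0;
    }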
2012-02-29  memory: give phys_page_find() its own tree search loop  (Avi Kivity; 1 file, -4/+13)
We'll change phys_page_find_alloc() soon, but phys_page_find() doesn't need to bear the consequences. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: make phys_page_find() return a MemoryRegionSection  (Avi Kivity; 1 file, -139/+160)
We no longer describe memory in terms of individual pages; use sections throughout instead. PhysPageDesc no longer used - remove. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: move tlb flush to MemoryListener commit callback  (Avi Kivity; 1 file, -8/+8)
This way, if we have several changes in a single transaction, we flush just once. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: unify the two branches of cpu_register_physical_memory_log()  (Avi Kivity; 1 file, -34/+15)
Identical except that the second branch knows it's not modifying an existing subpage. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: fix RAM subpages in newly initialized pages  (Avi Kivity; 1 file, -12/+10)
If the first subpage installed in a page is RAM, then we install it as a full page, instead of a subpage. Fix by not special casing RAM. The issue dates to commit db7b5426a4b4242, which introduced subpages. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: compress phys_map node pointers to 16 bits  (Avi Kivity; 1 file, -9/+45)
Use an expanding vector to store nodes. Allocation is baroque due to g_renew() potentially invalidating pointers; this will be addressed later. Signed-off-by: Avi Kivity <avi@redhat.com>
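The reason for indices rather than pointers: g_renew() (like realloc) may move the whole node array when it grows, so any stored child pointer could dangle, while a 16-bit index into the array stays valid and is much smaller. A stand-alone sketch of the pattern, using plain realloc and illustrative names:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define FANOUT 16
    #define NIL UINT16_MAX

    /* A node stores child *indices*, not pointers, so growing the backing
     * array with realloc cannot leave dangling references. */
    typedef struct {
        uint16_t child[FANOUT];
    } Node;

    static Node *nodes;            /* expanding vector of all nodes */
    static uint16_t nodes_nb, nodes_nb_alloc;

    static uint16_t node_alloc(void)
    {
        if (nodes_nb == nodes_nb_alloc) {
            nodes_nb_alloc = nodes_nb_alloc ? nodes_nb_alloc * 2 : 16;
            nodes = realloc(nodes, nodes_nb_alloc * sizeof(Node));
            /* any Node* held across this call would now be invalid;
             * uint16_t indices are still fine */
        }
        uint16_t idx = nodes_nb++;
        for (int i = 0; i < FANOUT; i++) {
            nodes[idx].child[i] = NIL;
        }
        return idx;
    }

    int main(void)
    {
        uint16_t root = node_alloc();
        /* Allocate first, link second: the allocation may move 'nodes',
         * so nodes[root] must not be evaluated before it. */
        uint16_t child = node_alloc();
        nodes[root].child[3] = child;
        printf("root=%u child[3]=%u nodes_nb=%u\n",
               (unsigned)root, (unsigned)nodes[root].child[3],
               (unsigned)nodes_nb);
        free(nodes);
        return 0;
    }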
2012-02-29  memory: store MemoryRegionSection pointers in phys_map  (Avi Kivity; 1 file, -80/+107)
Instead of storing PhysPageDesc, store pointers to MemoryRegionSections. The various offsets (phys_offset & ~TARGET_PAGE_MASK, phys_offset & TARGET_PAGE_MASK, region_offset) can all be synthesized from the information in a MemoryRegionSection. Adjust phys_page_find() to synthesize a PhysPageDesc. The upshot is that phys_map now contains uniform values, so it's easier to generate and compress. The end result is somewhat clumsy but this will be improved as we propagate MemoryRegionSections throughout the code instead of transforming them to PhysPageDesc. The MemoryRegionSection pointers are stored as uint16_t offsets in an array. This saves space (when we also compress node pointers) and is more cache friendly. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: unify phys_map last level with intermediate levels  (Avi Kivity; 1 file, -43/+35)
This lays the groundwork for storing leaf data in intermediate levels, saving space. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: remove first level of l1_phys_map  (Avi Kivity; 1 file, -21/+8)
L1 and the lower levels in l1_phys_map are equivalent, except that L1 has a different size, and is always allocated. Simplify the code by removing L1. This leaves us with a tree composed solely of L2 tables, but that problem can be renamed away later. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: change memory registration to rebuild the memory map on each change  (Avi Kivity; 1 file, -1/+49)
Instead of incrementally building the memory map, rebuild it every time. This allows later simplification, since the code need not consider overlaying a previous mapping. It is also RCU friendly. With large memory guests this can get expensive, since the operation is O(mem size), but this will be optimized later. As a side effect subpage and L2 leaks are fixed here. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: support stateless memory listeners  (Avi Kivity; 1 file, -0/+32)
Current memory listeners are incremental; that is, they are expected to maintain their own state, and receive callbacks for changes to that state. This patch adds support for stateless listeners; these work by receiving a ->begin() callback (which tells them that new state is coming), a sequence of ->region_add() and ->region_nop() callbacks, and then a ->commit() callback which signifies the end of the new state. They should ignore ->region_del() callbacks. Signed-off-by: Avi Kivity <avi@redhat.com>
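Seen from the listener's side, every update is a full re-publication of the map: ->begin() says "forget what you exported last time", each ->region_add()/->region_nop() re-declares one range, and ->commit() switches over to the new set. A compact sketch of that callback sequence with hypothetical types (not the actual MemoryListener definitions):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for a memory section and a listener. */
    typedef struct {
        uint64_t start, size;
    } Section;

    typedef struct Listener {
        void (*begin)(struct Listener *l);
        void (*region_add)(struct Listener *l, const Section *s);
        void (*region_nop)(struct Listener *l, const Section *s);
        void (*commit)(struct Listener *l);
        int count;                           /* scratch state for the example */
    } Listener;

    /* The core walks the new flat view and replays it to the listener. */
    static void publish_map(Listener *l, const Section *secs, int n)
    {
        l->begin(l);                         /* "new state is coming" */
        for (int i = 0; i < n; i++) {
            /* alternate arbitrarily for the example; a real core would send
             * region_add for new ranges and region_nop for unchanged ones,
             * and a stateless listener treats both the same way */
            if (i % 2) {
                l->region_nop(l, &secs[i]);
            } else {
                l->region_add(l, &secs[i]);
            }
        }
        l->commit(l);                        /* switch to the new state */
    }

    static void begin(Listener *l)                 { l->count = 0; }
    static void add(Listener *l, const Section *s) { l->count++; (void)s; }
    static void commit(Listener *l)
    {
        printf("new map has %d sections\n", l->count);
    }

    int main(void)
    {
        Section map[] = { { 0x0, 0x1000 }, { 0x1000, 0x2000 },
                          { 0xfee00000, 0x1000 } };
        Listener l = { begin, add, add, commit, 0 };
        publish_map(&l, map, 3);
        return 0;
    }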
2012-02-29  memory: split memory listener for the two address spaces  (Avi Kivity; 1 file, -14/+66)
The memory and I/O address spaces do different things, so split them into two memory listeners. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: allow MemoryListeners to observe a specific address space  (Avi Kivity; 1 file, -1/+1)
Ignore any regions not belonging to a specified address space. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-02-29  memory: use a MemoryListener for core memory map updates too  (Avi Kivity; 1 file, -0/+75)
This transforms memory.c into a library which can then be unit tested easily, by feeding it inputs and listening to its outputs. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-02-29  memory: don't pass ->readable attribute to cpu_register_physical_memory_log  (Avi Kivity; 1 file, -1/+1)
It can be derived from the MemoryRegion itself (which is why it is not used there). Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-02-20  exec: let cpu_watchpoint_insert accept larger watchpoints  (Max Filippov; 1 file, -1/+2)
Make cpu_watchpoint_insert accept watchpoints of any power-of-two size up to the target page size. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
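The accepted shapes, expressed as a check: the length must be a non-zero power of two no larger than the target page size, and (as sketched here) the address must be aligned to that length. The TARGET_PAGE_SIZE value and the helper name are illustrative:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096u   /* illustrative value */

    /* Accept any power-of-two length up to the page size, naturally aligned. */
    static bool watchpoint_len_ok(uint64_t vaddr, uint64_t len)
    {
        if (len == 0 || len > TARGET_PAGE_SIZE) {
            return false;
        }
        if (len & (len - 1)) {               /* not a power of two */
            return false;
        }
        return (vaddr & (len - 1)) == 0;     /* aligned to its own size */
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               watchpoint_len_ok(0x1000, 4),      /* 1: aligned 4-byte watch */
               watchpoint_len_ok(0x1002, 4),      /* 0: misaligned           */
               watchpoint_len_ok(0x1000, 24),     /* 0: not a power of two   */
               watchpoint_len_ok(0x0, 8192));     /* 0: larger than a page   */
        return 0;
    }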
2012-02-20  exec: fix check_watchpoint exiting cpu_loop  (Max Filippov; 1 file, -1/+2)
In the case of a BP_STOP_BEFORE_ACCESS watchpoint, check_watchpoint intends to signal an EXCP_DEBUG exception on exit from the cpu loop, but the exception code is later overwritten by the cpu_resume_from_signal call. Use cpu_loop_exit with BP_STOP_BEFORE_ACCESS watchpoints. Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
2012-02-20  exec: add missing breaks to the watch_mem_write  (Max Filippov; 1 file, -3/+9)
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com> Reviewed-by: Andreas Färber <afaerber@suse.de> Reviewed-by: Meador Inge <meadori@codesourcery.com>
2012-02-01  exec.c: Clarify comment about tlb_flush() flush_global parameter  (Peter Maydell; 1 file, -2/+12)
Clarify the comment about tlb_flush()'s flush_global parameter, so it is clearer what it does and why it is OK that the implementation currently ignores it. Reviewed-by: Andreas Färber <afaerber@suse.de> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-01-21  virtio-pci: Fix endianness of virtio config  (Benjamin Herrenschmidt; 1 file, -0/+14)
The virtio config area in PIO space is a bit special. The initial header is little endian but the rest (device specific) is guest native endian. The PIO accessors for PCI on machines that don't have native IO ports assume that all PIO is little endian, which works fine for everything except the above. A complicated way to fix it would be to split the BAR into two memory regions with different endianness settings, but this isn't practical to do; besides, the PIO code doesn't honor region endianness anyway (I have a patch for that too but it isn't necessary at this stage). So I decided to go for the quick fix instead, which consists of reverting the swap in virtio-pci in selected places, hoping that when we eventually do a "v2" of the virtio protocols, we sort that out once and for all using a fixed endian setting for everything. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Alexander Graf <agraf@suse.de> [agraf: keep virtio in libhw and determine endianness through a helper function in exec.c] Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
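Schematically, the problem is that byte order depends on the offset within the BAR: the common virtio header is defined as little endian, while the device-specific area behind it is guest-native, so a blanket little-endian swap is wrong for the second part on big-endian guests. A toy illustration of offset-dependent swapping; the constant, helper name, and 16-bit field width are assumptions for the example, not the virtio-pci code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CONFIG_HEADER_SIZE 20u   /* illustrative size of the LE header */

    static uint16_t bswap16(uint16_t v)
    {
        return (uint16_t)((v >> 8) | (v << 8));
    }

    /* Read a 16-bit field from the virtio PIO config area.  'raw_le' is the
     * value as decoded by a blanket little-endian PIO path; fields past the
     * header must instead be interpreted in guest-native byte order. */
    static uint16_t config_read16(uint32_t offset, uint16_t raw_le,
                                  bool guest_is_big_endian)
    {
        if (offset < CONFIG_HEADER_SIZE) {
            return raw_le;                   /* header: always little endian */
        }
        /* device-specific area: guest-native, so undo the blanket LE swap
         * when the guest is big endian */
        return guest_is_big_endian ? bswap16(raw_le) : raw_le;
    }

    int main(void)
    {
        printf("header field:  0x%04x\n", config_read16(0x04, 0x3412, true));
        printf("device field:  0x%04x\n", config_read16(0x18, 0x3412, true));
        return 0;
    }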
2012-01-13  tcg-arm: fix a typo in comments  (Aurelien Jarno; 1 file, -1/+1)
ARM still doesn't support 16GB buffers in 32-bit modes; replace the 16GB with 16MB in the comment. Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
2012-01-04  Remove IO_MEM_SHIFT  (Avi Kivity; 1 file, -18/+14)
We no longer use any of the lower bits of a ram_addr, so we might as well use them for the io table index. This increases the number of potential I/O handlers by a factor of 8. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Drop IO_MEM_ROMD  (Avi Kivity; 1 file, -8/+12)
Unlike ->readonly, ->readable is not inherited from aliases, so we can simply query the memory region. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Remove IO_MEM_SUBPAGE  (Avi Kivity; 1 file, -5/+5)
Replace with a MemoryRegion flag. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Direct dispatch through MemoryRegion  (Avi Kivity; 1 file, -30/+10)
Now that all mmio goes through MemoryRegions, we can convert io_mem_opaque to be a MemoryRegion pointer, and remove the thunks that convert from old-style CPU{Read,Write}MemoryFunc to MemoryRegionOps. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Convert io_mem_watch to be a MemoryRegion  (Avi Kivity; 1 file, -47/+26)
Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Convert IO_MEM_SUBPAGE_RAM to be a MemoryRegion  (Avi Kivity; 1 file, -48/+24)
Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Convert the subpage wrapper to be a MemoryRegion  (Avi Kivity; 1 file, -52/+18)
Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Switch cpu_register_physical_memory_log() to use MemoryRegions  (Avi Kivity; 1 file, -5/+19)
Still internally using ram_addr. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Convert IO_MEM_{RAM,ROM,UNASSIGNED,NOTDIRTY} to MemoryRegions  (Avi Kivity; 1 file, -134/+84)
Convert the fixed-address IO_MEM_RAM, IO_MEM_ROM, IO_MEM_UNASSIGNED, and IO_MEM_NOTDIRTY io handlers to MemoryRegions. These aren't real regions, since they are never added to the memory hierarchy, but they allow reuse of the dispatch functionality. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Uninline get_page_addr_code()  (Avi Kivity; 1 file, -0/+26)
Its use of IO_MEM_ROM and friends will later cause #include loops; and it is too large to merit inlining. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Avoid range comparisons on io index types  (Avi Kivity; 1 file, -17/+20)
The code sometimes uses range comparisons on io indexes (e.g. index <= IO_MEM_ROM). Avoid these as they make moving to objects harder. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  Fix wrong region_offset when overlaying a page with another  (Avi Kivity; 1 file, -0/+1)
cpu_register_physical_memory_log() does not update region_offset if a page was previously registered for the same address. This could cause mmio accesses to go to the wrong place, because the old region_offset was still used. Signed-off-by: Avi Kivity <avi@redhat.com> Acked-by: Andreas Färber <afaerber@suse.de> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  memory: move mmio access to functions  (Avi Kivity; 1 file, -27/+27)
Currently mmio access goes directly to the io_mem_{read,write} arrays. In preparation for eliminating them, add indirection via a function. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  exec: make phys_page_find() return a temporary  (Avi Kivity; 1 file, -100/+48)
Instead of returning a PhysPageDesc pointer, return a temporary. This lets us move away from actually storing PhysPageDesc's, and instead synthesising them when needed. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  memory: move endianness compensation to memory core  (Avi Kivity; 1 file, -133/+9)
Instead of doing device endianness compensation in cpu_register_io_memory(), do it in the memory core. Signed-off-by: Avi Kivity <avi@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
2012-01-04  memory: obsolete cpu_physical_memory_[gs]et_dirty_tracking()  (Avi Kivity; 1 file, -10/+0)
The getter is no longer used, so it is completely removed. Reviewed-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04  Store MemoryRegion in RAMBlock  (Avi Kivity; 1 file, -0/+1)
As a step in moving live migration from RAMBlocks to MemoryRegions, store the MemoryRegion in a RAMBlock. Reviewed-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-04  vmstate, memory: decouple vmstate from memory API  (Avi Kivity; 1 file, -9/+22)
Currently creating a memory region automatically registers it for live migration. This differs from other state (which is enumerated in a VMStateDescription structure) and ties the live migration code into the memory core. Decouple the two by introducing a separate API, vmstate_register_ram(), for registering a RAM block for migration. Currently the same implementation is reused, but later it can be moved into a separate list, and registrations can be moved to VMStateDescription blocks. Signed-off-by: Avi Kivity <avi@redhat.com>
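The decoupling pattern, in outline: creating RAM no longer implies migration registration; a device (or board) makes a second, explicit call of the vmstate_register_ram() kind that puts the block on a separate migration list. A schematic sketch of that separation with made-up types and list handling, not the QEMU implementation:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical RAM block: creation no longer implies migration. */
    typedef struct RamBlock {
        char name[32];
        uint64_t size;
        struct RamBlock *next_migratable;   /* linked only when registered */
    } RamBlock;

    static RamBlock *migratable_blocks;     /* the separate migration list */

    static RamBlock *ram_block_new(const char *name, uint64_t size)
    {
        RamBlock *b = calloc(1, sizeof(*b));
        snprintf(b->name, sizeof(b->name), "%s", name);
        b->size = size;
        return b;                           /* note: NOT registered for migration */
    }

    /* The explicit, vmstate_register_ram()-style step. */
    static void register_ram_for_migration(RamBlock *b)
    {
        b->next_migratable = migratable_blocks;
        migratable_blocks = b;
    }

    int main(void)
    {
        RamBlock *vram = ram_block_new("vga.vram", 16 << 20);
        RamBlock *scratch = ram_block_new("scratch", 4096);  /* never migrated */
        register_ram_for_migration(vram);

        for (RamBlock *b = migratable_blocks; b; b = b->next_migratable) {
            printf("will migrate: %s (%llu bytes)\n",
                   b->name, (unsigned long long)b->size);
        }
        free(vram);
        free(scratch);
        return 0;
    }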
2012-01-03  Remove cpu_get_physical_page_desc()  (Avi Kivity; 1 file, -11/+0)
No longer used. Signed-off-by: Avi Kivity <avi@redhat.com>
2012-01-03  memory: remove CPUPhysMemoryClient  (Avi Kivity; 1 file, -164/+5)
No longer used. Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-20  memory: add API for observing updates to the physical memory map  (Avi Kivity; 1 file, -0/+5)
Add an API that allows a client to observe changes in the global memory map:
  - region added (possibly with logging enabled)
  - region removed (possibly with logging enabled)
  - logging started on a region
  - logging stopped on a region
  - global logging started
  - global logging stopped
This API will eventually replace cpu_register_physical_memory_client(). Signed-off-by: Avi Kivity <avi@redhat.com>
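A client of such an API supplies callbacks for the events above and registers itself with the memory core. A minimal sketch of that observer shape; the struct, function names, and single-observer registration are illustrative, not the QEMU MemoryListener signatures:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t start, size;
        bool log_dirty;                 /* "possibly with logging enabled" */
    } Region;

    /* Hypothetical observer of global memory map changes. */
    typedef struct MapObserver {
        void (*region_added)(const Region *r);
        void (*region_removed)(const Region *r);
        void (*log_started)(const Region *r);
        void (*log_stopped)(const Region *r);
        void (*global_log_started)(void);
        void (*global_log_stopped)(void);
    } MapObserver;

    static const MapObserver *observer;  /* single slot, enough for the sketch */

    static void map_observer_register(const MapObserver *o) { observer = o; }

    /* The memory core would call this when a region is mapped. */
    static void core_map_region(const Region *r)
    {
        if (observer && observer->region_added) {
            observer->region_added(r);
        }
    }

    static void on_add(const Region *r)
    {
        printf("added 0x%llx+0x%llx%s\n",
               (unsigned long long)r->start, (unsigned long long)r->size,
               r->log_dirty ? " (dirty logging on)" : "");
    }

    int main(void)
    {
        static const MapObserver example_observer = { .region_added = on_add };
        map_observer_register(&example_observer);
        core_map_region(&(Region){ 0x0, 0x8000000, true });
        return 0;
    }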