Age  Commit message  Author  Files  Lines
2015-06-09QemuOpts: Convert qemu_opts_foreach() to ErrorMarkus Armbruster12-54/+72
Retain the function value for now, to permit selective conversion of its callers. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Acked-by: Kevin Wolf <kwolf@redhat.com>
2015-06-08QemuOpts: Drop qemu_opts_foreach() parameter abort_on_failureMarkus Armbruster10-37/+44
When the argument is non-zero, qemu_opts_foreach() stops on callback returning non-zero, and returns that value. When the argument is zero, it doesn't stop, and returns the bit-wise inclusive or of all the return values. Funky :) The callers that pass zero could just as well pass one, because their callbacks can't return anything but zero: * qemu_add_globals()'s callback qdev_add_one_global() * qemu_config_write()'s callback config_write_opts() * main()'s callbacks default_driver_check(), drive_enable_snapshot(), vnc_init_func() Drop the parameter, and always stop. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Acked-by: Kevin Wolf <kwolf@redhat.com>
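A minimal sketch of the resulting iteration contract (illustrative only, with hypothetical stand-in types rather than the actual QEMU code):

    /* With abort_on_failure gone, iteration always stops at the first
     * callback returning non-zero and that value is returned, instead of
     * OR-ing all return values together. */
    typedef struct Opts Opts;                      /* stand-in for QemuOpts */
    typedef int (*loopfunc)(Opts *opts, void *opaque);

    static int opts_foreach_sketch(Opts **opts, int n, loopfunc func, void *opaque)
    {
        for (int i = 0; i < n; i++) {
            int rc = func(opts[i], opaque);
            if (rc) {
                return rc;          /* stop right away on the first failure */
            }
        }
        return 0;
    }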
2015-06-08vl: Fail right after first bad -objectMarkus Armbruster1-1/+1
Failure to create an object with -object is a fatal error. However, we delay the actual exit until all -object are processed. On the one hand, this permits detection of genuine additional errors. On the other hand, it can muddy the waters with uninteresting additional errors, e.g. when a later -object tries to reference a prior one that failed. We generally stop right on the first bad option, so do that for -object as well. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2015-06-08vl: Print -device help at most onceMarkus Armbruster1-1/+1
We print it once for each -device help. Not helpful. Stop after the first one. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2015-06-08vl: Report failure to sandbox at most onceMarkus Armbruster1-1/+1
It's reported once per -sandbox on. Stop on the first failure, like we do for other options. Not fixed: "-sandbox on -sandbox off" should leave the sandbox off. It doesn't. Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2015-06-08Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into stagingPeter Maydell75-1057/+1562
* KVM error improvement from Laurent * CONFIG_PARALLEL fix from Mirek * Atomic/optimized dirty bitmap access from myself and Stefan * BUILD_DIR convenience/bugfix from Peter C * Memory leak fix from Shannon * SMM improvements (though still TCG only) from myself and Gerd, acked by mst # gpg: Signature made Fri Jun 5 18:45:20 2015 BST using RSA key ID 78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # gpg: WARNING: This key is not certified with sufficiently trusted signatures! # gpg: It is not certain that the signature belongs to the owner. # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83 * remotes/bonzini/tags/for-upstream: (62 commits) update Linux headers from kvm/next atomics: add explicit compiler fence in __atomic memory barriers ich9: implement SMI_LOCK q35: implement TSEG q35: add test for SMRAM.D_LCK q35: implement SMRAM.D_LCK q35: add config space wmask for SMRAM and ESMRAMC q35: fix ESMRAMC default q35: implement high SMRAM hw/i386: remove smram_update target-i386: use memory API to implement SMRAM hw/i386: add a separate region that tracks the SMRAME bit target-i386: create a separate AddressSpace for each CPU vl: run "late" notifiers immediately qom: add object_property_add_const_link vl: allow full-blown QemuOpts syntax for -global pflash_cfi01: add secure property pflash_cfi01: change to new-style MMIO accessors pflash_cfi01: change big-endian property to BIT type target-i386: wake up processors that receive an SMI ... Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-06-08Merge remote-tracking branch 'remotes/jnsnow/tags/ide-pull-request' into stagingPeter Maydell4-247/+358
# gpg: Signature made Fri Jun 5 20:59:07 2015 BST using RSA key ID AAFC390E # gpg: Good signature from "John Snow (John Huston) <jsnow@redhat.com>" # gpg: WARNING: This key is not certified with sufficiently trusted signatures! # gpg: It is not certain that the signature belongs to the owner. # Primary key fingerprint: FAEB 9711 A12C F475 812F 18F2 88A9 064D 1835 61EB # Subkey fingerprint: F9B7 ABDB BCAC DF95 BE76 CBD0 7DEF 8106 AAFC 390E * remotes/jnsnow/tags/ide-pull-request: macio: remove remainder_len DBDMA_io property macio: update comment/constants to reflect the new code macio: switch pmac_dma_write() over to new offset/len implementation macio: switch pmac_dma_read() over to new offset/len implementation fdc-test: Test state for existing cases more thoroughly fdc: Fix MSR.RQM flag fdc: Disentangle phases in fdctrl_read_data() fdc: Code cleanup in fdctrl_write_data() fdc: Use phase in fdctrl_write_data() fdc: Introduce fdctrl->phase fdc: Rename fdctrl_set_fifo() to fdctrl_to_result_phase() fdc: Rename fdctrl_reset_fifo() to fdctrl_to_command_phase() Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-06-08machine: Drop use of DEFAULT_RAM_SIZE in help textAlexander Graf1-2/+1
As of commit 076b35b5a (machine: add default_ram_size to machine class) we no longer have a global default ram size, but instead machine-specific defaults. When invoking qemu --help we don't know which machine you selected, so we can't tell the user the default RAM size in the help text anymore. Thus I don't see an easy way to expose the default ram size to the user in the help text. The easiest option IMHO is to just drop this piece of information. Reported-by: Laurent Desnogues <laurent.desnogues@gmail.com> Signed-off-by: Alexander Graf <agraf@suse.de> Acked-by: Laurent Desnogues <laurent.desnogues@gmail.com> Acked-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Message-id: 1433495103-62084-1-git-send-email-agraf@suse.de [PMM: rewrapped long commit message lines] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-06-08monitor: Fix QMP ABI breakage around "id"Markus Armbruster1-0/+2
Commit 65207c5 accidentally dropped a line of code we need along with a comment that became wrong then. This made QMP reject "id": {"execute": "system_reset", "id": "1"} {"error": {"class": "GenericError", "desc": "QMP input object member 'id' is unexpected"}} Put the lost line right back, so QMP again accepts and returns "id", as promised by the ABI: {"execute": "system_reset", "id": "1"} {"return": {}, "id": "1"} Reported-by: Fabio Fantoni <fabio.fantoni@m2r.biz> Signed-off-by: Markus Armbruster <armbru@redhat.com> Signed-off-by: Don Slutz <dslutz@verizon.com> Tested-by: Fabio Fantoni <fabio.fantoni@m2r.biz> Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Signed-off-by: Pavel Fedin <p.fedin@samsung.com> Tested-by: Eric Blake <eblake@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-id: 1433753070-12632-2-git-send-email-armbru@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-06-05update Linux headers from kvm/nextPaolo Bonzini3-3/+20
This is kvm.git commit 05ff30bb56c6b3d3000519d6e02ed35678ddae3b. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05atomics: add explicit compiler fence in __atomic memory barriersPaolo Bonzini1-3/+9
__atomic_thread_fence does not include a compiler barrier; in the C++11 memory model, fences take effect in combination with other atomic operations. GCC implements this by making __atomic_load and __atomic_store access memory as if the pointer was volatile, and leaves no trace whatsoever of acquire and release fences in the compiler's intermediate representation. In QEMU, we want memory barriers to act on all memory, but at the same time we would like to use __atomic_thread_fence for portability reasons. Add compiler barriers manually around the __atomic_thread_fence. Message-Id: <1433334080-14912-1-git-send-email-pbonzini@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
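A sketch of the fix, assuming GCC/clang builtins (macro names are illustrative, not necessarily QEMU's):

    #define barrier()  __asm__ __volatile__("" ::: "memory")  /* compiler-only fence */

    /* __atomic_thread_fence alone does not constrain the optimizer's view of
     * ordinary memory, so wrap it in explicit compiler barriers. */
    #define smp_mb_sketch()                            \
        do {                                           \
            barrier();                                 \
            __atomic_thread_fence(__ATOMIC_SEQ_CST);   \
            barrier();                                 \
        } while (0)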
2015-06-05ich9: implement SMI_LOCKGerd Hoffmann4-1/+29
Add write mask for the smi enable register, so we can disable write access to certain bits. Open all bits on reset. Disable write access to GBL_SMI_EN when SMI_LOCK (in ich9 lpc pci config space) is set. Write access to SMI_LOCK itself is disabled too. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05q35: implement TSEGGerd Hoffmann2-0/+71
TSEG provides larger amounts of SMRAM than the 128 KB available with legacy SMRAM and high SMRAM. Route access to tseg into nowhere when enabled, for both cpus and busmaster dma, and add tseg window to smram region, so cpus can access it in smm mode. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05q35: add test for SMRAM.D_LCKGerd Hoffmann2-0/+94
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> [Fix compilation of the newly introduced test. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05q35: implement SMRAM.D_LCKGerd Hoffmann2-1/+10
Once the SMRAM.D_LCK bit has been set by the guest, several bits in SMRAM and ESMRAMC become read-only until the next machine reset. Implement this by updating the wmask accordingly when the guest sets the lock bit. As the lock bit itself is locked down too, we don't need to worry about the guest clearing it. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05q35: add config space wmask for SMRAM and ESMRAMCGerd Hoffmann2-0/+11
Not all bits in SMRAM and ESMRAMC can be changed by the guest. Add wmask defines accordingly and set them in mch_reset(). Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05q35: fix ESMRAMC defaultGerd Hoffmann2-1/+7
The cache bits in ESMRAMC are hardcoded to 1 (=disabled) according to the q35 mch specs. Add and use a define with this default. While at it, also update the SMRAM default to use the named define (no code change, just makes things a bit more readable). Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05q35: implement high SMRAMPaolo Bonzini2-12/+39
When H_SMRAME is 1, low memory at 0xa0000 is left alone by SMM, and instead the chipset maps the 0xa0000-0xbffff window at 0xfeda0000-0xfedbffff. This affects both the "non-SMM" view controlled by D_OPEN and the SMM view controlled by G_SMRAME, so add two new MemoryRegions and toggle the enabled/disabled state of all four in mch_update_smram. Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05hw/i386: remove smram_updatePaolo Bonzini4-9/+4
It's easier to inline it now that most of its work is done by the CPU (rather than the chipset) through /machine/smram and the memory API. Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: use memory API to implement SMRAMPaolo Bonzini14-93/+68
Remove cpu_smm_register and cpu_smm_update. Instead, each CPU address space gets an extra region which is an alias of /machine/smram. This extra region is enabled or disabled as the CPU enters/exits SMM. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05hw/i386: add a separate region that tracks the SMRAME bitPaolo Bonzini3-3/+32
This region is exported at /machine/smram. It is "empty" if SMRAME=0 and points to SMRAM if SMRAME=1. The CPU will enable/disable it as it enters or exits SMRAM. While touching nearby code, the existing memory region setup was slightly inconsistent. The smram_region is *disabled* in order to open SMRAM (because the smram_region shows the low VRAM instead of the RAM at 0xa0000). Because SMRAM is closed at startup, the smram_region must be enabled when creating the i440fx or q35 devices. Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: create a separate AddressSpace for each CPUPaolo Bonzini2-0/+15
Different CPUs can be in SMM or not at the same time, thus they will see different things where the chipset places SMRAM. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05vl: run "late" notifiers immediatelyPaolo Bonzini1-0/+6
If a machine_init_done notifier is added late, as part of a hot-plugged device, run it immediately. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
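A hedged sketch of the behaviour (names are illustrative; QEMU's notifier API is assumed for flavour): a notifier registered after machine init has completed is invoked immediately.

    #include "qemu/notify.h"
    #include <stdbool.h>

    static NotifierList init_done_notifiers =
        NOTIFIER_LIST_INITIALIZER(init_done_notifiers);
    static bool init_done;

    void add_init_done_notifier_sketch(Notifier *notify)
    {
        notifier_list_add(&init_done_notifiers, notify);
        if (init_done) {
            notify->notify(notify, NULL);   /* run the "late" notifier right away */
        }
    }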
2015-06-05qom: add object_property_add_const_linkPaolo Bonzini2-0/+34
Suggested-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05vl: allow full-blown QemuOpts syntax for -globalPaolo Bonzini2-8/+17
-global does not work for drivers that have a dot in their name, such as cfi.pflash01. This is just a parsing limitation, because such globals can be declared easily inside a -readconfig file. To allow this usage, support the full QemuOpts key/value syntax for -global too, for example "-global driver=cfi.pflash01,property=secure,value=on". The two formats do not conflict, because the key/value syntax does not have a period before the first equal sign. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05pflash_cfi01: add secure propertyPaolo Bonzini1-44/+67
When this property is set, MMIO accesses are only allowed with the MEMTXATTRS_SECURE attribute. This is used for secure access to UEFI variables stored in flash. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05pflash_cfi01: change to new-style MMIO accessorsPaolo Bonzini1-86/+10
This is a required step to implement read_with_attrs and write_with_attrs. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05pflash_cfi01: change big-endian property to BIT typePaolo Bonzini3-6/+9
Make this consistent with the secure property, added in the next patch. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: wake up processors that receive an SMIPaolo Bonzini1-1/+3
An SMI should definitely wake up a processor in halted state! This lets OVMF boot with SMM on multiprocessor systems, although it halts very soon after that with a "CpuIndex != BspIndex" assertion failure. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: set G=1 in SMM big real mode selectorsPaolo Bonzini1-6/+6
Because the limit field's bits 31:20 are 1, G should be 1. VMX actually enforces this; let's do it for completeness in QEMU as well. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: mask NMIs on entry to SMMPaolo Bonzini2-9/+20
QEMU is not blocking NMIs on entry to SMM. Implementing this has to cover a few corner cases, because: - NMIs can then be enabled by an IRET instruction and there is no mechanism to _set_ the "NMIs masked" flag on exit from SMM: "A special case can occur if an SMI handler nests inside an NMI handler and then another NMI occurs. [...] When the processor enters SMM while executing an NMI handler, the processor saves the SMRAM state save map but does not save the attribute to keep NMI interrupts disabled." - However, there is some hidden state, because "If NMIs were blocked before the SMI occurred [and no IRET is executed while in SMM], they are blocked after execution of RSM." This is represented by the new HF2_SMM_INSIDE_NMI_MASK bit. If it is zero, NMIs are _unblocked_ on exit from RSM. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
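A rough sketch of the masking rules described above (flag names come from the commit message; bit values and the rest are illustrative, not the actual QEMU code):

    #define NMI_MASK             (1u << 0)   /* hypothetical bit values */
    #define SMM_INSIDE_NMI_MASK  (1u << 1)

    static void smm_enter_sketch(unsigned *hflags2)
    {
        if (*hflags2 & NMI_MASK) {
            *hflags2 |= SMM_INSIDE_NMI_MASK;  /* NMIs were already blocked */
        } else {
            *hflags2 |= NMI_MASK;             /* mask NMIs while in SMM */
        }
    }

    static void rsm_exit_sketch(unsigned *hflags2)
    {
        if (!(*hflags2 & SMM_INSIDE_NMI_MASK)) {
            *hflags2 &= ~NMI_MASK;            /* unblock NMIs on exit from SMM */
        }
        *hflags2 &= ~SMM_INSIDE_NMI_MASK;
    }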
2015-06-05target-i386: Use correct memory attributes for ioport accessesPaolo Bonzini5-87/+58
In order to do this, stop using the cpu_in*/out* helpers, and instead access address_space_io directly. cpu_in* and cpu_out* remain for usage in the monitor, in qtest, and in Xen. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: Use correct memory attributes for memory accessesPaolo Bonzini5-290/+394
These include page table walks, SVM accesses and SMM state save accesses. The bulk of the patch is obtained with sed -i 's/\(\<[a-z_]*_phys\(_notdirty\)\?\>(cs\)->as,/x86_\1,/' Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05target-i386: introduce cpu_get_mem_attrsPaolo Bonzini4-3/+11
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05icount: print a warning if there is no more deadline in sleep=no modeVictor CLEMENT1-0/+5
While QEMU is running in sleep=no mode, a warning will be printed when no timer deadline is set. As this mode is intended to provide deterministic virtual time, that determinism is broken if no timer is set on the virtual clock. Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr> Message-Id: <1432912446-9811-4-git-send-email-victor.clement@openwide.fr> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05icount: add sleep parameter to the icount option to set icount_sleep modeVictor CLEMENT3-2/+22
The 'sleep' parameter sets the icount_sleep mode, which is enabled by default. To disable it, add the 'sleep=no' parameter (or 'nosleep') to the qemu -icount option. Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr> Message-Id: <1432912446-9811-3-git-send-email-victor.clement@openwide.fr> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
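For example (the invocation below is illustrative; only the sleep=no / nosleep spellings come from the commit message):

    qemu-system-x86_64 -icount shift=7,sleep=no [other options]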
2015-06-05icount: implement a new icount_sleep mode toggling real-time cpu sleepVictor CLEMENT1-26/+44
When the icount_sleep mode is disabled, the QEMU_VIRTUAL_CLOCK runs at the maximum possible speed by warping the sleep times of the virtual cpu to the soonest clock deadline. The virtual clock will be updated only according to the instruction counter. Signed-off-by: Victor CLEMENT <victor.clement@openwide.fr> Message-Id: <1432912446-9811-2-git-send-email-victor.clement@openwide.fr> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05memory: use mr->ram_addr in "is this RAM?" assertionsPaolo Bonzini1-8/+10
mr->terminates alone doesn't guarantee that we are looking at a RAM region. mr->ram_addr also has to be checked, in order to distinguish RAM and I/O regions. So, do the following: 1) add a new define RAM_ADDR_INVALID, and test it in the assertions instead of mr->terminates; 2) IOMMU regions were not setting mr->ram_addr to a bogus value; initialize it in the instance_init function so that the new assertions fire for IOMMU regions as well. Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05memory: make cpu_physical_memory_sync_dirty_bitmap() fully atomicStefan Hajnoczi1-3/+3
The fast path of cpu_physical_memory_sync_dirty_bitmap() directly manipulates the dirty bitmap. Use atomic_xchg() to make the test-and-clear atomic. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <1417519399-3166-7-git-send-email-stefanha@redhat.com> [Only do xchg on nonzero words. - Paolo] Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05memory: replace cpu_physical_memory_reset_dirty() with test-and-clearStefan Hajnoczi4-38/+33
The cpu_physical_memory_reset_dirty() function is sometimes used together with cpu_physical_memory_get_dirty(). This is not atomic since two separate accesses to the dirty memory bitmap are made. Turn cpu_physical_memory_reset_dirty() and cpu_physical_memory_clear_dirty_range_type() into the atomic cpu_physical_memory_test_and_clear_dirty(). Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <1417519399-3166-6-git-send-email-stefanha@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05migration: move dirty bitmap sync to ram_addr.hStefan Hajnoczi2-44/+46
The dirty memory bitmap is managed by ram_addr.h and copied to migration_bitmap[] periodically during live migration. Move the code to sync the bitmap to ram_addr.h where related code lives. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <1417519399-3166-5-git-send-email-stefanha@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05memory: use atomic ops for setting dirty memory bitsStefan Hajnoczi1-7/+9
Use set_bit_atomic() and bitmap_set_atomic() so that multiple threads can dirty memory without race conditions. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <1417519399-3166-4-git-send-email-stefanha@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05bitmap: add atomic test and clearStefan Hajnoczi2-0/+47
The new bitmap_test_and_clear_atomic() function clears a range and returns whether or not the bits were set. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <1417519399-3166-3-git-send-email-stefanha@redhat.com> [Test before xchg; then a full barrier is needed at the end just like in the previous patch. The barrier can be avoided if we did at least one xchg. - Paolo] Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
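A simplified sketch over whole words only (the real helper also handles partial head/tail words); C11 atomics stand in for QEMU's own wrappers:

    #include <stdatomic.h>
    #include <stdbool.h>

    #define BITS_PER_WORD (8 * (long)sizeof(unsigned long))

    static bool test_and_clear_atomic_sketch(atomic_ulong *map, long start, long nr)
    {
        long first = start / BITS_PER_WORD, words = nr / BITS_PER_WORD;
        bool dirty = false, did_xchg = false;

        for (long i = first; i < first + words; i++) {
            if (atomic_load_explicit(&map[i], memory_order_relaxed)) {
                if (atomic_exchange(&map[i], 0)) {   /* test before xchg */
                    dirty = true;
                }
                did_xchg = true;
            }
        }
        if (!did_xchg) {
            atomic_thread_fence(memory_order_seq_cst);  /* no xchg done: add a barrier */
        }
        return dirty;
    }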
2015-06-05bitmap: add atomic set functionsStefan Hajnoczi3-0/+54
Use atomic_or() for atomic bitmaps where several threads may set bits at the same time. This avoids the race condition between threads loading an element, bitwise ORing, and then storing the element. When setting all bits in a word we can avoid atomic ops and instead just use an smp_mb() at the end. Most bitmap users don't need atomicity so introduce new functions. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <1417519399-3166-2-git-send-email-stefanha@redhat.com> [Avoid barrier in the single word case, use full barrier instead of write. - Paolo] Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
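A hedged sketch of the fast path: partial words need an atomic fetch-OR, runs of whole words can use plain relaxed stores published by one final barrier (C11 atomics stand in for QEMU's wrappers):

    #include <stdatomic.h>

    #define BITS_PER_LONG (8 * (long)sizeof(unsigned long))

    static void bitmap_set_atomic_sketch(atomic_ulong *map, long start, long nr)
    {
        long cur = start, end = start + nr;

        while (cur < end && cur % BITS_PER_LONG) {          /* unaligned head */
            atomic_fetch_or(&map[cur / BITS_PER_LONG], 1UL << (cur % BITS_PER_LONG));
            cur++;
        }
        while (end - cur >= BITS_PER_LONG) {                /* whole words */
            atomic_store_explicit(&map[cur / BITS_PER_LONG], ~0UL,
                                  memory_order_relaxed);
            cur += BITS_PER_LONG;
        }
        while (cur < end) {                                 /* unaligned tail */
            atomic_fetch_or(&map[cur / BITS_PER_LONG], 1UL << (cur % BITS_PER_LONG));
            cur++;
        }
        atomic_thread_fence(memory_order_seq_cst);          /* publish the plain stores */
    }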
2015-06-05memory: do not touch code dirty bitmap unless TCG is enabledPaolo Bonzini1-3/+5
cpu_physical_memory_set_dirty_lebitmap unconditionally syncs the DIRTY_MEMORY_CODE bitmap. This however is unused unless TCG is enabled. Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05exec: only check relevant bitmaps for cleanlinessPaolo Bonzini2-16/+31
Most of the time, not all bitmaps have to be marked as dirty; do not do anything if the interesting ones are already dirty. Previously, any clean bitmap would have caused all the bitmaps to be marked dirty. In fact, unless running TCG, most of the time bitmap operations need not be done at all, because memory_region_is_logging returns zero. In this case, skip the call to cpu_physical_memory_range_includes_clean altogether as well. With this patch, cpu_physical_memory_set_dirty_range is called unconditionally, so there no longer needs to be a separate call to xen_modified_memory. Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05exec: invert return value of cpu_physical_memory_get_clean, renamePaolo Bonzini1-5/+5
While it is obvious that cpu_physical_memory_get_dirty returns true even if a single page is dirty, the same is not true for cpu_physical_memory_get_clean; one would expect that it returns true only if all the pages are clean, but it actually looks for even one clean page. (By contrast, the caller of that function, cpu_physical_memory_range_includes_clean, has a good name). To clarify, rename the function to cpu_physical_memory_all_dirty and return true if _all_ the pages are dirty. This is the opposite of the previous meaning, because "all are 1" is the same as "not (any is 0)", so we have to modify cpu_physical_memory_range_includes_clean as well. Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
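Tiny illustrative helpers (not the actual QEMU code) showing the inversion: "range includes a clean page" is just "not all pages dirty".

    #include <stdbool.h>

    #define BITS_PER_UL (8 * (long)sizeof(unsigned long))

    static bool all_dirty_sketch(const unsigned long *bits, long start, long nr)
    {
        for (long i = start; i < start + nr; i++) {
            if (!(bits[i / BITS_PER_UL] & (1UL << (i % BITS_PER_UL)))) {
                return false;                /* found a clean page */
            }
        }
        return true;
    }

    static bool range_includes_clean_sketch(const unsigned long *bits,
                                            long start, long nr)
    {
        return !all_dirty_sketch(bits, start, nr);
    }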
2015-06-05exec: pass client mask to cpu_physical_memory_set_dirty_rangePaolo Bonzini3-27/+29
This cuts in half the cost of bitmap operations (which will become more expensive when made atomic) during migration on non-VRAM regions. Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05translate-all: make less of tb_invalidate_phys_page_range depend on is_cpu_write_accessPaolo Bonzini1-9/+6
is_cpu_write_access is only set if tb_invalidate_phys_page_range is called from tb_invalidate_phys_page_fast, and hence from notdirty_mem_write. However: - the code bitmap can be built directly in tb_invalidate_phys_page_fast (unconditionally, since is_cpu_write_access would always be passed as 1); - the virtual address is not needed to mark the page as "not containing code" (dirty code bitmap = 1), so we can also remove that use of is_cpu_write_access. For calls of tb_invalidate_phys_page_range that do not come from notdirty_mem_write, the next call to notdirty_mem_write will notice that the page does not contain code anymore, and will fix up the TLB entry. The parameter needs to remain in order to guard accesses to cpu->mem_io_pc. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2015-06-05cputlb: remove useless arguments to tlb_unprotect_code_phys, renamePaolo Bonzini3-5/+3
These days modification of the TLB is done in notdirty_mem_write, so the virtual address and env pointer are unnecessary. The new name of the function, tlb_unprotect_code, is consistent with tlb_protect_code. Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>