author    Aurelien Jarno <aurelien@aurel32.net>  2015-07-09 20:39:57 +0200
committer Richard Henderson <rth@twiddle.net>   2015-08-24 11:10:54 -0700
commit    8cc580f6a0d8c0e2f590c1472cf5cd8e51761760 (patch)
tree      fd167676e1efba2d1fa1c202f2623d7db1b1854c
parent    ecc7b3aa71f5fdcf9ee87e74ca811d988282641d (diff)
download  qemu-8cc580f6a0d8c0e2f590c1472cf5cd8e51761760.tar.gz
tcg/i386: use softmmu fast path for unaligned accesses
Softmmu unaligned loads/stores currently go through the slow path for two reasons:

  - to support unaligned accesses on hosts with strict alignment
  - to correctly handle accesses that cross pages

x86 is only concerned by the second reason. Unaligned accesses are avoided by compilers, but are not uncommon. We therefore would like to see them go through the fast path if they don't cross pages. For that we can use the fact that two adjacent TLB entries can't contain the same page. Therefore accessing the TLB entry corresponding to the first byte, but comparing its content to the page address of the last byte, ensures that we don't cross pages. We can do this check without adding more instructions to the TLB code (though its length grows by one byte) by using the LEA instruction to combine the existing move with the size addition.

On an x86-64 host, this gives a 3% boot time improvement for a powerpc guest and 4% for an x86-64 guest.

[rth: Tidied calculation of the offset mask]

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Message-Id: <1436467197-2183-1-git-send-email-aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
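The page-crossing check described above can be sketched in C. This is a simplified illustration, not the actual QEMU TLB code: `fast_path_ok`, `PAGE_BITS`, and the `tlb_tag` parameter are hypothetical names, and the real implementation works on the generated-code side with the tag cached in the CPUTLBEntry. The key idea it demonstrates is indexing by the first byte of the access while comparing the page of the *last* byte, so any access that crosses a page boundary fails the compare and falls back to the slow path.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_BITS 12                                    /* assume 4 KiB target pages */
#define PAGE_MASK (~(((uint64_t)1 << PAGE_BITS) - 1))   /* mask selecting the page number */

/* Hypothetical helper: returns true if an access of 'size' bytes at
 * 'addr' may take the fast path. 'tlb_tag' stands in for the page
 * address cached in the TLB entry selected by the FIRST byte of the
 * access. */
static bool fast_path_ok(uint64_t addr, unsigned size, uint64_t tlb_tag)
{
    /* LEA-style combined computation: address of the last byte.
     * On x86 this folds into the existing move as lea (addr, size-1). */
    uint64_t last = addr + size - 1;

    /* Compare the page of the LAST byte against the tag for the
     * first byte's entry. If the access crosses a page boundary,
     * 'last' lands on the next page, the compare fails, and the
     * slow path handles the access. */
    return (last & PAGE_MASK) == tlb_tag;
}
```

For example, a 4-byte access at offset 0xFFC of a page stays on the fast path, while the same access at 0xFFE reaches into the next page and is rejected.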