path: root/hw/vga_int.h
author    Anthony Liguori <aliguori@us.ibm.com>  2009-12-18 08:08:09 +1000
committer Anthony Liguori <aliguori@us.ibm.com>  2009-12-18 11:26:32 -0600
commit    a6109ff1b5d7184a9d490c4ff94f175940232ebd (patch)
tree      6645d11e5d3507bc8a891df8d755c9af70cd6b0a /hw/vga_int.h
parent    ee3e41a9a0194af21d0da75f5afd87bea3738cf3 (diff)
download  qemu-a6109ff1b5d7184a9d490c4ff94f175940232ebd.tar.gz
Fix VMware VGA depth computation
VMware VGA requires that the depth presented to the guest match the depth of the DisplaySurface it renders to, because it blits from one surface to the other with a simple memcpy().

We currently hardcode a 24-bit depth, but the SDL surface allocator may, and usually will, allocate a surface with a different depth, causing screen corruption.

This changes the code to allocate the DisplaySurface before initializing the device, so that the depth of the DisplaySurface can be used instead of a hardcoded value.

Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
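As a rough illustration of why the depths must agree (this is not the actual patch; the function and parameter names below are hypothetical), a memcpy()-style blit copies raw scanline bytes and so silently assumes both surfaces use the same bytes per pixel:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative sketch of a per-scanline memcpy() blit.  It is only
     * correct when source and destination agree on bytes per pixel
     * (bypp): if the device assumes 24-bit but SDL allocated a 32-bit
     * surface, w * bypp is wrong for one side and the image shears. */
    static void blit_rect(uint8_t *dst, int dst_pitch,
                          const uint8_t *src, int src_pitch,
                          int x, int y, int w, int h, int bypp)
    {
        for (int row = 0; row < h; row++) {
            memcpy(dst + (y + row) * dst_pitch + x * bypp,
                   src + (y + row) * src_pitch + x * bypp,
                   w * bypp);
        }
    }

Deriving bypp from the already-allocated DisplaySurface, rather than hardcoding a 24-bit assumption, keeps both sides of the memcpy() in agreement, which is the effect of the reordering the commit message describes.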
Diffstat (limited to 'hw/vga_int.h')
0 files changed, 0 insertions, 0 deletions