path: root/include/migration
Commit log (age, subject, author, files changed, lines -removed/+added):
2013-04-15  savevm: Implement block_writev_buffer()  (Kevin Wolf, 1 file, -1/+1)
Instead of breaking up RAM state into many small chunks, pass the iovec to the block layer for better performance. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
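A minimal sketch of the idea, assuming the writev_buffer hook receives the iovec plus a position and that bdrv_writev_vmstate() is used underneath (details may differ from the actual patch):

    static ssize_t block_writev_buffer(void *opaque, struct iovec *iov,
                                       int iovcnt, int64_t pos)
    {
        QEMUIOVector qiov;
        int ret;

        /* Wrap the caller's iovec without copying it ... */
        qemu_iovec_init_external(&qiov, iov, iovcnt);

        /* ... and hand the whole vector to the block layer in one call. */
        ret = bdrv_writev_vmstate(opaque, &qiov, pos);
        if (ret < 0) {
            return ret;
        }
        return qiov.size;
    }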
2013-04-05  vmstate: Add support for two dimensional arrays  (Peter Maydell, 1 file, -0/+27)
Add support for migrating two dimensional arrays, by defining a set of new macros VMSTATE_*_2DARRAY paralleling the existing VMSTATE_*_ARRAY macros. 2D arrays are handled the same for actual state serialization; the only difference is that the type check has to change for a 2D array. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Igor Mitsyanko <i.mitsyanko@gmail.com> Message-id: 1363975375-3166-2-git-send-email-peter.maydell@linaro.org
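A short usage sketch with a hypothetical device state (the struct and field names are illustrative, not from the patch), assuming the macro takes the two dimensions after the struct type:

    typedef struct MyDeviceState {
        uint32_t regs[4][16];            /* 4 banks of 16 registers */
    } MyDeviceState;

    static const VMStateDescription vmstate_mydevice = {
        .name = "mydevice",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            /* outer dimension first, then inner dimension */
            VMSTATE_UINT32_2DARRAY(regs, MyDeviceState, 4, 16),
            VMSTATE_END_OF_LIST()
        }
    };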
2013-04-05  vmstate.h: introduce VMSTATE_BUFFER_POINTER_UNSAFE macro  (Igor Mitsyanko, 1 file, -0/+9)
The macro can be used to migrate a dynamically allocated buffer of known size. Signed-off-by: Igor Mitsyanko <i.mitsyanko@gmail.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1362923278-4080-2-git-send-email-i.mitsyanko@gmail.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
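A usage sketch under the assumption that the macro takes the field, the owning struct, a version, and the buffer size; the device, field, and size names are hypothetical:

    #define MY_FB_SIZE 4096              /* illustrative fixed size */

    /* State with a heap-allocated framebuffer of fixed, known size. */
    typedef struct MyLCDState {
        uint8_t *fb;                     /* g_malloc0(MY_FB_SIZE) at init */
    } MyLCDState;

    static const VMStateDescription vmstate_mylcd = {
        .name = "mylcd",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_BUFFER_POINTER_UNSAFE(fb, MyLCDState, 0, MY_FB_SIZE),
            VMSTATE_END_OF_LIST()
        }
    };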
2013-03-26  Add qemu_put_buffer_async  (Orit Wasserman, 1 file, -0/+5)
This allows us to add a buffer to the iovec to send without copying it into the static buffer; the buffer will be sent later when qemu_fflush is called. Signed-off-by: Orit Wasserman <owasserm@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
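A hedged sketch of how a caller might use it; the RAM_SAVE_FLAG_PAGE framing mirrors how the RAM code sends pages and is an assumption, not quoted from this patch:

    /* Queue a guest page for sending without copying it into QEMUFile's
     * internal buffer; the page must stay valid until qemu_fflush(f). */
    qemu_put_be64(f, offset | RAM_SAVE_FLAG_PAGE);
    qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE);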
2013-03-26  Add QemuFileWritevBuffer QemuFileOps  (Orit Wasserman, 1 file, -0/+7)
This will allow us to write an iovec. Signed-off-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
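A sketch of the new hook's shape; the exact parameter list is an assumption (later revisions also pass a file position, as in the block_writev_buffer sketch above):

    /* Optional QEMUFileOps callback: let the backend consume a whole
     * iovec in a single call instead of one flat buffer at a time. */
    typedef ssize_t (QEMUFileWritevBufferFunc)(void *opaque,
                                               struct iovec *iov, int iovcnt);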
2013-03-26  migration: do not send zero pages in bulk stage  (Peter Lieven, 1 file, -0/+2)
During the bulk stage of RAM migration, if a page is a zero page, do not send it at all; the memory at the destination reads as zero anyway. Even if there is an madvise with QEMU_MADV_DONTNEED at the target upon receipt of a zero page, I have observed that the target starts swapping if the memory is overcommitted; it seems that the pages are dropped asynchronously. This patch also updates QMP to return the number of skipped pages in MigrationStats. Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
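A hedged sketch of the bulk-stage check in the RAM save loop; the helper and counter names are illustrative, not the patch's:

    /* During the first (bulk) pass, skip pages that read as all zeroes:
     * the destination's RAM already reads as zero. */
    if (ram_bulk_stage && is_all_zero(p, TARGET_PAGE_SIZE)) {
        acct_info.skipped_pages++;      /* reported via MigrationStats */
        continue;                       /* nothing to send for this page */
    }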
2013-03-26  savevm: Fix bugs in the VMSTATE_VBUFFER_MULTIPLY definition  (David Gibson, 1 file, -2/+2)
The VMSTATE_BUFFER_MULTIPLY macro is misnamed - it actually specifies a variably sized buffer with VMS_VBUFFER, so should be named VMSTATE_VBUFFER_MULTIPLY. This patch fixes this (the macro had no current users under either name). In addition, unlike the other VMSTATE_VBUFFER variants, this macro did not specify VMS_POINTER. This patch fixes this bug as well. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-26  savevm: Add VMSTATE_STRUCT_VARRAY_POINTER_UINT32  (David Gibson, 1 file, -0/+10)
Currently the savevm code contains a VMSTATE_STRUCT_VARRAY_POINTER_INT32 helper (a variably sized array with the number of elements in an int32_t), but not VMSTATE_STRUCT_VARRAY_POINTER_UINT32 (... with the number of elements in a uint32_t). This patch (trivially) fixes the deficiency. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
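A usage sketch with hypothetical names, assuming the macro mirrors the INT32 variant's argument order (field, struct, uint32_t count field, element vmsd, element type); vmstate_my_irq and MyIRQState are assumed to be defined elsewhere:

    typedef struct MyControllerState {
        uint32_t nb_irqs;                /* number of elements below */
        MyIRQState *irqs;                /* heap array of nb_irqs elements */
    } MyControllerState;

    /* Inside the controller's VMStateDescription field list: */
    VMSTATE_STRUCT_VARRAY_POINTER_UINT32(irqs, MyControllerState, nb_irqs,
                                         vmstate_my_irq, MyIRQState),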
2013-03-26  savevm: Add VMSTATE_FLOAT64 helpers  (David Gibson, 1 file, -0/+15)
The current savevm code includes VMSTATE helpers for a number of commonly used data types, but not for the float64 type used by the internal floating point emulation code. This patch fixes the deficiency. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
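A usage sketch assuming the helpers parallel the other scalar ones (a single-field form plus an array form); the CPU state names are illustrative:

    /* Migrate an emulated FPU register file held as float64 values. */
    VMSTATE_FLOAT64_ARRAY(fpr, MyCPUState, 32),
    VMSTATE_FLOAT64(fp_accum, MyCPUState),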
2013-03-26  savevm: Add VMSTATE_UINTTL_EQUAL helper  (David Gibson, 1 file, -2/+5)
This adds an _EQUAL VMSTATE helper for target_ulongs, defined in terms of VMSTATE_UINT32_EQUAL or VMSTATE_UINT64_EQUAL as appropriate. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
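A sketch of how such a helper is plausibly wired up (the exact macro spelling in the patch may differ):

    /* target_ulong is 32 or 64 bits depending on the target, so the
     * _EQUAL check must dispatch on TARGET_LONG_BITS. */
    #if TARGET_LONG_BITS == 64
    #define VMSTATE_UINTTL_EQUAL(_f, _s) VMSTATE_UINT64_EQUAL(_f, _s)
    #else
    #define VMSTATE_UINTTL_EQUAL(_f, _s) VMSTATE_UINT32_EQUAL(_f, _s)
    #endif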
2013-03-26  savevm: Add VMSTATE_UINT64_EQUAL helpers  (David Gibson, 1 file, -0/+7)
The savevm code already includes a number of *_EQUAL helpers which act as sanity checks, verifying that the configuration of the saved state matches that of the machine we're loading into. Variants already exist for 8-bit, 16-bit, and 32-bit integers, but not for 64-bit integers. This patch fills that hole, adding a UINT64 version. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Juan Quintela <quintela@redhat.com>
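A usage sketch with a hypothetical field: loading fails if the saved value differs from the value already present on the destination.

    /* e.g. refuse to load state saved with a different RAM size */
    VMSTATE_UINT64_EQUAL(ram_size, MyMachineState),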
2013-03-12  stubs: Add a vmstate_dummy struct for CONFIG_USER_ONLY  (Andreas Färber, 1 file, -0/+4)
Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-03-12  vmstate: Make vmstate_register() static inline  (Andreas Färber, 1 file, -2/+10)
This avoids adding a duplicate stub for CONFIG_USER_ONLY. Suggested-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Andreas Färber <afaerber@suse.de>
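A sketch of the likely shape, assuming the wrapper simply forwards to vmstate_register_with_alias_id() so only that function needs a CONFIG_USER_ONLY stub (argument values are assumptions):

    static inline int vmstate_register(DeviceState *dev, int instance_id,
                                       const VMStateDescription *vmsd,
                                       void *opaque)
    {
        /* No alias id: pass -1 and a required_for_version of 0. */
        return vmstate_register_with_alias_id(dev, instance_id, vmsd,
                                              opaque, -1, 0);
    }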
2013-03-11  page_cache: dup memory on insert  (Peter Lieven, 1 file, -1/+2)
The page cache frees all data on finish, on resize, and if there is a collision on insert. So it should be the cache's responsibility to dup the data that is stored in the cache. Signed-off-by: Peter Lieven <pl@kamp.de> Signed-off-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Juan Quintela <quintela@redhat.com>
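A minimal sketch of the change, assuming the cache item stores its data in it_data and the cache knows its page size (names follow the page cache code but are not quoted from the patch):

    /* cache_insert(): keep a private copy so the caller's buffer may be
     * reused or freed immediately after insertion. */
    it->it_data = g_memdup(pdata, cache->page_size);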
2013-03-11  migration: eliminate s->migration_file  (Paolo Bonzini, 1 file, -2/+0)
The indirection is useless now. Backends can open s->file directly. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: move rate limiting to QEMUFile  (Paolo Bonzini, 1 file, -16/+2)
Rate limiting is now simply a byte counter; clients call qemu_file_rate_limit() manually to determine if they have to exit. So it is possible and simple to move the functionality to QEMUFile. This makes the remaining functionality of s->file redundant; in the next patch we can remove it and write directly to s->migration_file. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
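A hedged sketch of the byte-counter approach; the QEMUFile field names are assumptions:

    /* QEMUFile counts bytes as they are queued; clients poll this to
     * decide when to yield back to the migration loop. */
    int qemu_file_rate_limit(QEMUFile *f)
    {
        if (qemu_file_get_error(f)) {
            return 1;                    /* stop writing after an error */
        }
        if (f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit) {
            return 1;                    /* budget for this slice is spent */
        }
        return 0;
    }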
2013-03-11  migration: use QEMUFile for writing outgoing migration data  (Paolo Bonzini, 1 file, -4/+0)
Second, drop the file descriptor indirection, and write directly to the QEMUFile. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: use QEMUFile for migration channel lifetime  (Paolo Bonzini, 1 file, -3/+4)
As a start, use QEMUFile to store the destination and close it. qemu_get_fd gets a file descriptor that will be used by the write callbacks. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  qemu-file: add writable socket QEMUFile  (Paolo Bonzini, 1 file, -1/+1)
Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: merge qemu_popen_cmd with qemu_popen  (Paolo Bonzini, 1 file, -1/+0)
There is no reason for outgoing exec migration to do popen manually anymore (the reason used to be that we needed the FILE* to make it non-blocking). Use qemu_popen_cmd. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  qemu-file: make qemu_fflush and qemu_file_set_error private again  (Paolo Bonzini, 1 file, -2/+0)
Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: yay, buffering is gone  (Paolo Bonzini, 1 file, -3/+0)
Buffering was needed because blocking writes could take a long time and starve other threads seeking to grab the big QEMU mutex. Now that all writes (except within _complete callbacks) are done outside the big QEMU mutex, we do not need buffering at all. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: run setup callbacks out of big lock  (Paolo Bonzini, 1 file, -1/+1)
Only the migration_bitmap_sync() call needs the iothread lock. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: run pending/iterate callbacks out of big lock  (Paolo Bonzini, 1 file, -0/+11)
This makes it possible to do blocking writes directly to the socket, with no buffer in the middle. For RAM, only the migration_bitmap_sync() call needs the iothread lock. For block migration, it is needed by the block layer (including bdrv_drain_all and dirty bitmap access), but because some code is shared between iterate and complete, all of mig_save_device_dirty is run with the lock taken. In the savevm case, the iterate callback runs within the big lock. This is annoying because it complicates the rules. Luckily we do not need to do anything about it: the RAM iterate callback does not need the iothread lock, and block migration never runs during savevm. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: reorder SaveVMHandlers members  (Paolo Bonzini, 1 file, -3/+5)
This groups together the callbacks that later will have similar locking rules. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: cleanup migration (including thread) in the iothread  (Paolo Bonzini, 1 file, -0/+1)
Perform final cleanup in a bottom half, and add joining the thread to the series of cleanup actions. migrate_fd_error remains for connection error, but it doesn't need to cleanup anything anymore. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  migration: simplify error handling  (Paolo Bonzini, 1 file, -1/+0)
Always use qemu_file_get_error to detect errors, since that is how QEMUFile itself drops I/O after an error occurs. There is no need to propagate and check return values all the time. Also remove the "complete" member, since we know that it is set (via migrate_fd_cleanup) only when the state changes. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-11  qemu-file: temporarily expose qemu_file_set_error and qemu_fflush  (Paolo Bonzini, 1 file, -0/+2)
Right now, migration cannot entirely rely on QEMUFile's automatic drop of I/O after an error, because it does its "real" I/O outside the put_buffer callback. To fix this until buffering is gone, expose qemu_file_set_error which we will use in buffered_flush. Similarly, buffered_flush is not a complete flush because some data may still reside in the QEMUFile's own buffer. This somewhat complicates the process of closing the migration thread. Again, when buffering is gone buffered_flush will disappear and calling qemu_fflush will not be needed; in the meanwhile, we expose the function for use in migration.c. Reviewed-by: Orit Wasserman <owasserm@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2013-03-01  hw: move fifo.[ch] to libqemuutil  (Paolo Bonzini, 1 file, -0/+2)
fifo.c is generic code that can be easily unit tested. So it belongs in libqemuutil. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2013-02-22  migration: calculate expected_downtime  (Juan Quintela, 1 file, -0/+1)
We removed the calculation in commit e4ed1541ac9413eac494a03532e34beaf8a7d1c5. Now we add it back. We need to create dirty_bytes_rate because we can't include cpu-all.h from migration.c, and there is no other way to get at TARGET_PAGE_SIZE. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Orit Wasserman <owasserm@redhat.com>
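A one-line sketch of the calculation being restored; the variable names are assumptions based on the surrounding migration code:

    /* time needed to retransmit the bytes dirtied per second at the
     * currently measured bandwidth */
    s->expected_downtime = s->dirty_bytes_rate / bandwidth;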
2013-02-12  migration: make qemu_ftell() public and support writable files  (Stefan Hajnoczi, 1 file, -0/+1)
Migration .save_live_iterate() functions return the number of bytes transferred. The easiest way of doing this is by calling qemu_ftell(f) at the beginning and end of the function to calculate the difference. Make qemu_ftell() public so that block-migration will be able to use it. Also adjust the ftell calculation for writable files where buf_offset does not include buf_size. Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Message-id: 1360661835-28663-2-git-send-email-stefanha@redhat.com Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
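A sketch of the intended pattern in a .save_live_iterate() callback; the callback body is illustrative:

    static int my_save_live_iterate(QEMUFile *f, void *opaque)
    {
        int64_t start = qemu_ftell(f);

        /* ... emit some of the remaining state into f ... */

        /* report how many bytes this iteration actually transferred */
        return qemu_ftell(f) - start;
    }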
2013-01-17  migration: move beginning stage to the migration thread  (Juan Quintela, 1 file, -1/+0)
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2013-01-17  migration: make function static  (Paolo Bonzini, 1 file, -2/+0)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2012-12-20  migration: merge QEMUFileBuffered into MigrationState  (Juan Quintela, 1 file, -0/+8)
Avoid splitting the state of outgoing migration, more or less arbitrarily, between two data structures. QEMUFileBuffered is only used during migration anyway. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20  migration: Inline qemu_fopen_ops_buffered into migrate_fd_connect  (Juan Quintela, 1 file, -2/+0)
Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20  migration: move migration_fd_put_ready()  (Juan Quintela, 1 file, -1/+0)
Put it near its use and un-export it. Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20  migration: move buffered_file.c code into migration.c  (Juan Quintela, 1 file, -0/+1)
This only moves the code (also from buffered_file.h to migration.h). Fix whitespace until checkpatch is happy. Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20  savevm: New save live migration method: pending  (Juan Quintela, 2 files, -1/+2)
The code just now does (simplified for clarity):

    if (qemu_savevm_state_iterate(s->file) == 1) {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

The problem here is that qemu_savevm_state_iterate() returns 1 when it knows that the remaining memory to send takes less than the max downtime. But this means that we could end up spending 2x max_downtime: one downtime in qemu_savevm_state_iterate, and the other in qemu_savevm_state_complete. The code is changed to:

    pending_size = qemu_savevm_state_pending(s->file, max_size);
    DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
    if (pending_size >= max_size) {
        ret = qemu_savevm_state_iterate(s->file);
    } else {
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }

So what we do is: at the current network speed, we calculate the maximum number of bytes we can send: max_size. Then we ask every save_live section how much it has pending. If that is less than max_size, we move to the complete phase; otherwise we do an iterate one. This makes things much simpler, because now individual sections don't have to calculate the bandwidth (it was impossible to do right from there). Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
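For context, a hedged sketch of how max_size can be derived in the migration loop (the exact expression and units are assumptions): it is the number of bytes that fit into the allowed downtime at the measured bandwidth.

    /* bandwidth: bytes per millisecond over the last measurement interval;
     * migrate_max_downtime() is assumed to return nanoseconds. */
    double bandwidth = transferred_bytes / time_spent;
    max_size = bandwidth * migrate_max_downtime() / 1000000;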
2012-12-20  migration: remove unfreeze logic  (Juan Quintela, 1 file, -1/+0)
Now that we have a thread, and blocking writes, we don't need it. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-20  migration: make writes blocking  (Juan Quintela, 1 file, -5/+0)
Move all the writes to the migration_thread, and make the writes blocking. Notice that we are still using the iothread for everything that we do. Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20  migration: move migration thread init code to migrate_fd_put_ready  (Juan Quintela, 1 file, -0/+1)
This way everything related to migration is run on the migration thread and no locking is needed. Signed-off-by: Juan Quintela <quintela@redhat.com>
2012-12-20  migration: make qemu_fopen_ops_buffered() return void  (Juan Quintela, 1 file, -0/+1)
We want the file assignment to happen before the thread is created to avoid locking, so we just do it before creating the thread. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Orit Wasserman <owasserm@redhat.com>
2012-12-19  misc: move include files to include/qemu/  (Paolo Bonzini, 1 file, -1/+1)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2012-12-19  migration: move include files to include/migration/  (Paolo Bonzini, 5 files, -0/+1113)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>