Hello, I have a virtio disk for a VM that lives on an LVM logical volume. I would like to port this VM disk to another virtualization host that keeps its VM disks on files instead.
Can I port this lvm vm-disk to file, for example with something like:
dd if=/dev/vgname/lvname of=/directory_tree/filename
and then copy "filename" to the other host and use it as backing storage for a VM there? Do I perhaps have to account for some LVM metadata header and skip it with the dd command? Has anyone already done or considered this?
Thanks, Gianluca
On Wed, 2010-07-28 at 17:48 +0200, Gianluca Cecchi wrote:
Can I port this lvm vm-disk to file, for example with something like:
dd if=/dev/vgname/lvname of=/directory_tree/filename
Yes.
Do I perhaps have to account for some LVM metadata header and skip it with the dd command?
Nope, the LVM metadata is not visible in the LV's block device.
/Alexander
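For reference, a minimal sketch of sanity-checking the copy afterwards, assuming the placeholder names from the question (compare the LV size against the file size, and optionally checksum both ends):

blockdev --getsize64 /dev/vgname/lvname    # size of the source LV in bytes
stat -c %s /directory_tree/filename        # size of the copied file; should match
md5sum /dev/vgname/lvname                  # optional: checksum of the source LV
md5sum /directory_tree/filename            # optional: checksum of the copy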
On Wed, Jul 28, 2010 at 05:48:37PM +0200, Gianluca Cecchi wrote:
Can I port this lvm vm-disk to file, for example with something like: dd if=/dev/vgname/lvname of=/directory_tree/filename and then copy "filename" to the other host and use it as backing storage for a VM there?
Yes.
You can also do it in one step:
dd if=/dev/vgname/lvname bs=1024 | ssh username@otherserver.fqdn "dd of=/directory_tree/filename"
You might want to add -C to the ssh cmd to compress the data if the connection between the servers is < 100 MBit/s.
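For example, the one-step copy with ssh compression enabled might look something like this (same placeholder names as above, with a larger dd block size):

dd if=/dev/vgname/lvname bs=1M | ssh -C username@otherserver.fqdn "dd of=/directory_tree/filename bs=1M"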
On Wed, Jul 28, 2010 at 07:39:30PM +0200, Sven Lankes wrote:
You might want to add -C to the ssh cmd to compress the data if the connection between the servers is < 100 MBit/s.
When writing the original virt-p2v we found the -C option actually makes things a lot slower, assuming the common case where you have a gigabit network between the servers. It's quicker just to copy without the overhead of compression.
Rich.
On 08/01/2010 11:41 PM, Richard W.M. Jones wrote:
On Wed, Jul 28, 2010 at 07:39:30PM +0200, Sven Lankes wrote:
You might want to add -C to the ssh cmd to compress the data if the connection between the servers is < 100 MBit/s.
When writing the original virt-p2v we found the -C option actually makes things a lot slower, assuming the common case where you have a gigabit network between the servers. It's quicker just to copy without the overhead of compression.
Interesting, I'd have guessed that encryption would dominate the cpu cost, and that compression would be a win since there's less to encrypt and transmit.
On Sun, Aug 22, 2010 at 07:28:50PM +0300, Avi Kivity wrote:
On 08/01/2010 11:41 PM, Richard W.M. Jones wrote:
On Wed, Jul 28, 2010 at 07:39:30PM +0200, Sven Lankes wrote:
You might want to add -C to the ssh cmd to compress the data if the connection between the servers is < 100 MBit/s.
When writing the original virt-p2v we found the -C option actually makes things a lot slower, assuming the common case where you have a gigabit network between the servers. It's quicker just to copy without the overhead of compression.
Interesting, I'd have guessed that encryption would dominate the cpu cost, and that compression would be a win since there's less to encrypt and transmit.
Maybe my explanation is wrong too. virt-p2v was definitely much slower when we added the '-C' option. However, read on.
I just ran a test again on my local LAN. This is between two approximately equal Fedora machines, over a moderate quality consumer gigabit ethernet switch. The command approximates what virt-p2v does: sending 1MB blocks from local /dev device, and at the target end using cat to write to a file.
$ time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | ssh amd "cat > /tmp/copy1"'
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 1473.26 s, 11.7 MB/s

real    24m33.269s
user    4m16.944s
sys     4m43.181s
$ time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | ssh -C amd "cat > /tmp/copy2"'
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 1412.7 s, 12.2 MB/s

real    23m32.736s
user    17m52.739s
sys     5m0.884s
In summary:
Copy rate (no compression):   11.7 MB/s
Copy rate (with compression): 12.2 MB/s
So now compression is (slightly) faster. YMMV.
Rich.
On 08/22/2010 10:02 PM, Richard W.M. Jones wrote:
Interesting, I'd have guessed that encryption would dominate the cpu cost, and that compression would be a win since there's less to encrypt and transmit.
Maybe my explanation is wrong too. virt-p2v was definitely much slower when we added the '-C' option. However, read on.
I just ran a test again on my local LAN. This is between two approximately equal Fedora machines, over a moderate quality consumer gigabit ethernet switch. The command approximates what virt-p2v does: sending 1MB blocks from local /dev device, and at the target end using cat to write to a file.
$ time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | ssh amd "cat > /tmp/copy1"'
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 1473.26 s, 11.7 MB/s

real    24m33.269s
user    4m16.944s
sys     4m43.181s
11.7 MB/s = 93.6 Mb/s. Note that the cpu is not loaded. Are you sure you're using 1GbE here?
$ time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | ssh -C amd "cat > /tmp/copy2"'
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 1412.7 s, 12.2 MB/s

real    23m32.736s
user    17m52.739s
sys     5m0.884s
Suddenly you're cpu bound. So it looks like compression is really expensive for some reason.
On Sun, Aug 22, 2010 at 10:31:55PM +0300, Avi Kivity wrote:
On 08/22/2010 10:02 PM, Richard W.M. Jones wrote:
Interesting, I'd have guessed that encryption would dominate the cpu cost, and that compression would be a win since there's less to encrypt and transmit.
Maybe my explanation is wrong too. virt-p2v was definitely much slower when we added the '-C' option. However, read on.
I just ran a test again on my local LAN. This is between two approximately equal Fedora machines, over a moderate quality consumer gigabit ethernet switch. The command approximates what virt-p2v does: sending 1MB blocks from local /dev device, and at the target end using cat to write to a file.
$ time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | ssh amd "cat > /tmp/copy1"'
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 1473.26 s, 11.7 MB/s

real    24m33.269s
user    4m16.944s
sys     4m43.181s
11.7 MB/s = 93.6 Mb/s. Note that the cpu is not loaded. Are you sure you're using 1GbE here?
You're absolutely right -- I forget that it's a fast ethernet switch :-)
$ time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | ssh -C amd "cat > /tmp/copy2"'
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB) copied, 1412.7 s, 12.2 MB/s

real    23m32.736s
user    17m52.739s
sys     5m0.884s
Suddenly you're cpu bound. So it looks like compression is really expensive for some reason.
Rich.
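One way to see where the time goes would be to measure the compression and read steps in isolation, without the network in the picture; a rough sketch using the same LV as in the test above:

time sh -c 'dd bs=1M if=/dev/vg_trick/Windows7x64 | gzip -c > /dev/null'    # raw gzip throughput on this data
time dd bs=1M if=/dev/vg_trick/Windows7x64 of=/dev/null                     # plain read throughput for comparison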
On Aug 22, 2010, at 12:02, "Richard W.M. Jones" <rjones@redhat.com> wrote:
If you do the gzip in a separate process in the "pipeline" rather than with -C, you will have separate CPUs doing the compression and the encryption.
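Something along these lines, for instance (an untested sketch reusing the names from the earlier commands; gzip -1 trades compression ratio for speed):

dd bs=1M if=/dev/vg_trick/Windows7x64 | gzip -1 | ssh amd "gunzip > /tmp/copy3"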