vnstat / wrong network peaks while deleting snapshots

Lars Schotte lars.schotte at schotteweb.de
Sun Jun 5 22:09:24 UTC 2011


hm, that's interesting, so the network controller is of VMware's
origin. try some other OS, for example NetBSD with some real
emulated network card, nothing from VMware, and see if the problem
persists there as well. if not, then you know at least that the
problem is not the guest's kernel or some driver problem, but
VMware's fault.

On Sun, 05 Jun 2011 23:25:05 +0200
Reindl Harald <h.reindl at thelounge.net> wrote:

> as written in my second post: yes, because VMware Data Recovery
> cannot work without vmware-tools. the drivers are NOT from
> external packages because they are natively supported by
> recent kernels
> 
> PLEASE can we both stop this now?
> 
> it is useless, there is nothing on the side of ESXi / VMware
> i can change, so what are we discussing here?
> 
> 03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
> 0b:00.0 Serial Attached SCSI controller: VMware PVSCSI SCSI Controller (rev 02)
> 
> >>>>>>>>>>>> they will answer "fedora is not officially supported on
> >>>>>>>>>>>> ESXi, and neither are the open-vm-tools from rpmfusion" :-(
> 
> Am 05.06.2011 23:17, schrieb Lars Schotte:
> > did you install some vmware software / drivers on the guest?
> > 
> > On Sun, 05 Jun 2011 22:39:30 +0200
> > Reindl Harald <h.reindl at thelounge.net> wrote:
> > 
> >> yes, and because you have NO CHANCE to get support
> >> for fedora from VMware, the question is whether this
> >> trigger can be corrected somewhere in the guest
> >>
> >> that "16777216.00 TiB" within a few seconds is
> >> impossible is clear - so my question was not
> >> to discuss where the problem is, my question
> >> is whether it can be pragmatically fixed somewhere
> >>
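
btw, those impossible values are suspicious in themselves: 16777216 TiB
is exactly 2^64 bytes and 33554432 TiB is exactly 2^65 bytes - one and
two full wraps of a 64-bit counter. just a guess (nothing in this thread
confirms it): the interface byte counter gets reset or re-initialized
while the snapshot is removed, and the accounting code treats the
backwards jump as a counter overflow, which would also explain why only
the x86_64 guests show it. a quick check in Python:

  TIB = 2 ** 40  # bytes per TiB

  # the peak values from the vnstat output quoted further down
  print(16777216 * TIB == 2 ** 64)  # True: exactly one full 64-bit range
  print(33554432 * TIB == 2 ** 65)  # True: exactly two full ranges
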
> >> Am 05.06.2011 22:32, schrieb Lars Schotte:
> >>> well, the guest operating system is measuring traffic that doesn't
> >>> exist. that is a good start.
> >>>
> >>> now, normally, operating systems do NOT measure traffic that
> >>> doesn't exist, so there must be something wrong with the network
> >>> card driver.
> >>>
> >>> guess what ... it is not ... because vmware emulates a network card
> >>> which most operating systems have a (good) driver for. so we can
> >>> rule out an operating system error or a network device driver error.
> >>>
> >>> so we should ask vmware why they implemented crazy transfer-rate
> >>> emulation on that virtualized device while doing snapshots, which
> >>> of course has nothing to do with the fact that there is a network
> >>> device emulated or used. so to me it looks like vmware broke it
> >>> intentionally.
> >>>
> >>> why would they do something like that? because they are ... "different".
> >>>
> >>> On Sun, 05 Jun 2011 21:30:21 +0200
> >>> Reindl Harald <h.reindl at thelounge.net> wrote:
> >>>
> >>>>
> >>>>
> >>>> Am 05.06.2011 21:19, schrieb Lars Schotte:
> >>>>> that's exactly what vmware needs to comprehend.
> >>>>
> >>>> WHAT should VMware do if the guest is measuring traffic which
> >>>> does not exist?
> >>>>
> >>>>> you don't need to tell me that ;-)
> >>>>
> >>>> "so you have to somehow convince vmware not to take snapshots
> >>>> through that virtualized ethernet devices" and your ideas solve
> >>>> this on the pysical layer or about "routing on the host" showing
> >>>> me that you have never worked with a ESXi-Cluster
> >>>>
> >>>> i am trying to solve a little problem IN THE GUEST, not to
> >>>> change the whole infrastructure, because that would change
> >>>> exactly nothing about the "vnstat" problem
> >>>>
> >>>>> On Sun, 05 Jun 2011 20:55:53 +0200
> >>>>> Reindl Harald <h.reindl at thelounge.net> wrote:
> >>>>>
> >>>>>> sorry to say, but you have no idea what i am speaking
> >>>>>> about. snapshots are not taken "through that virtualized
> >>>>>> ethernet device"
> >>>>>>
> >>>>>> the guest is frozen for a short time to take a consistent
> >>>>>> state of its drives, which are copied on the host; the copy
> >>>>>> has NOTHING to do with the ethernet device in the guest
> >>>>>>
> >>>>>> Am 05.06.2011 20:50, schrieb Lars Schotte:
> >>>>>>> so you have to somehow convince vmware not to take snapshots
> >>>>>>> through those virtualized ethernet devices. maybe an extra
> >>>>>>> ethernet device would help: the first one left for that
> >>>>>>> snapshot fiction and the second for networking.
> >>>>>>>
> >>>>>>> On Sun, 05 Jun 2011 20:34:49 +0200
> >>>>>>> Reindl Harald <h.reindl at thelounge.net> wrote:
> >>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Am 05.06.2011 16:55, schrieb Lars Schotte:
> >>>>>>>>> i definitely wouldn't get the idea to monitor guests from
> >>>>>>>>> within the guests w/ vnstat, because even if it worked
> >>>>>>>>> perfectly, it's still just a fictional ethernet device.
> >>>>>>>>
> >>>>>>>> what is fictional there?
> >>>>>>>>
> >>>>>>>> it is an ethernet device with all the features of an
> >>>>>>>> ethernet device, and the guest knows nothing about
> >>>>>>>> virtualization
> >>>>>>>>
> >>>>>>>>> maybe vmware monitoring software would be more precise.
> >>>>>>>>> an alternative would be to bind each guest to a virtual
> >>>>>>>>> network card and do the monitoring on the host, measuring
> >>>>>>>>> only the output data, and then route all these devices
> >>>>>>>>> out, thereby using the host as a router - which is of
> >>>>>>>>> course a more complicated setup, and i am not even sure if
> >>>>>>>>> it would work, but that's the way i would try to build it.
> >>>>>>>>
> >>>>>>>> jesus, for what reason?
> >>>>>>>>
> >>>>>>>> the host is not a router, the host is a virtual switch,
> >>>>>>>> and yes, you have monitoring on the vCenter server, but not
> >>>>>>>> in a console-like output and not with exact numbers
> >>>>>>>>
> >>>>>>>> these are two different worlds, and i see no reason why
> >>>>>>>> vnstat would not work on the guest - because it does
> >>>>>>>>
> >>>>>>>> only while snapshots are taken / removed are there some
> >>>>>>>> short untrue peaks, which could easily be filtered in the
> >>>>>>>> guest software by their huge numbers alone, since they are
> >>>>>>>> clearly impossible. the problem is that this does not
> >>>>>>>> happen, so if some measurement says "20 GB in two seconds",
> >>>>>>>> all averages are destroyed
> >>>>>>>>
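
such a plausibility filter is easy to sketch. a rough standalone example
in Python (just a sketch, not a vnstat patch - "eth0", the 1 Gbit/s
line-rate cap and the 5-second sampling interval are all assumptions):

  #!/usr/bin/env python
  # sample the rx byte counter and drop deltas that the link could not
  # physically carry in one interval (or that go backwards)
  import time

  IFACE = "eth0"                          # assumed interface name
  INTERVAL = 5                            # seconds between samples
  MAX_BYTES = (10 ** 9 // 8) * INTERVAL   # cap at 1 Gbit/s line rate
  PATH = "/sys/class/net/%s/statistics/rx_bytes" % IFACE

  def read_rx_bytes():
      with open(PATH) as f:
          return int(f.read())

  last = read_rx_bytes()
  while True:
      time.sleep(INTERVAL)
      current = read_rx_bytes()
      delta = current - last
      last = current
      if delta < 0 or delta > MAX_BYTES:
          # counter reset or impossible burst: discard this sample
          print("dropping implausible sample: %d bytes" % delta)
          continue
      print("rx: %d bytes in %d s" % (delta, INTERVAL))
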
> >>>>>>>>> On Sun, 05 Jun 2011 16:20:16 +0200
> >>>>>>>>> Reindl Harald <h.reindl at thelounge.net> wrote:
> >>>>>>>>>
> >>>>>>>>>> yes!
> >>>>>>>>>>
> >>>>>>>>>> works perfectly; only after dealing with snapshots are
> >>>>>>>>>> there these horrible peaks on 64-bit guests
> >>>>>>>>>>
> >>>>>>>>>> Am 05.06.2011 16:18, schrieb Lars Schotte:
> >>>>>>>>>>> w8, so you are saying that you run vnstat on the guests?
> >>>>>>>>>>>
> >>>>>>>>>>> On Sun, 05 Jun 2011 16:16:44 +0200
> >>>>>>>>>>> Reindl Harald <h.reindl at thelounge.net> wrote:
> >>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> Am 05.06.2011 16:12, schrieb Lars Schotte:
> >>>>>>>>>>>>> is ifconfig showing these huge numbers at that time as
> >>>>>>>>>>>>> well?
> >>>>>>>>>>>>
> >>>>>>>>>>>> not currently, but i have seen such outputs in "ifconfig"
> >>>>>>>>>>>> too
> >>>>>>>>>>>>
> >>>>>>>>>>>>> do you have a 64-bit OS or 32-bit?
> >>>>>>>>>>>>
> >>>>>>>>>>>> seems to affect only x86_64 guests.
> >>>>>>>>>>>> good input - the voip machine is the only 32-bit one
> >>>>>>>>>>>> and does not show this
> >>>>>>>>>>>>
> >>>>>>>>>>>>> did you try to report it to vmware as well?
> >>>>>>>>>>>>
> >>>>>>>>>>>> they will answer "fedora is not officially supported on
> >>>>>>>>>>>> ESXi, and neither are the open-vm-tools from rpmfusion" :-(
> >>>>>>>>>>>>
> >>>>>>>>>>>>> On Sun, 05 Jun 2011 16:06:48 +0200
> >>>>>>>>>>>>> Reindl Harald <h.reindl at thelounge.net> wrote:
> >>>>>>>>>>>>>
> >>>>>>>>>>>>>> does anybody have an idea against which package i
> >>>>>>>>>>>>>> should file a bug report for this? i guess "vnstat"
> >>>>>>>>>>>>>> is only the postman
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> every night from friday to saturday a snapshot of our
> >>>>>>>>>>>>>> fedora vmware guests is made by "VMware Data
> >>>>>>>>>>>>>> Recovery" to take a consistent backup, and while the
> >>>>>>>>>>>>>> snapshot is deleted something feeds horribly wrong
> >>>>>>>>>>>>>> values to "vnstat", which makes the monthly summary
> >>>>>>>>>>>>>> useless
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> see below :-(
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>  eth0  /  daily
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>       day |              rx |         tx |           total |      avg. rate
> >>>>>>>>>>>>>>  ----------+-----------------+------------+-----------------+----------------
> >>>>>>>>>>>>>>  05/07/11 | 16777216.00 TiB |   5.56 GiB | 16777216.00 TiB | 1668.00 Tbit/s
> >>>>>>>>>>>>>>  05/08/11 |      855.27 MiB |   4.24 GiB |        5.07 GiB |  492.63 kbit/s
> >>>>>>>>>>>>>>  05/09/11 |        2.35 GiB |  72.14 GiB |       74.49 GiB |    7.23 Mbit/s
> >>>>>>>>>>>>>>  05/10/11 |        1.47 GiB |  11.41 GiB |       12.88 GiB |    1.25 Mbit/s
> >>>>>>>>>>>>>>  05/11/11 |        1.11 GiB |   6.19 GiB |        7.30 GiB |  708.76 kbit/s
> >>>>>>>>>>>>>>  05/12/11 |        1.17 GiB |   5.82 GiB |        6.99 GiB |  678.38 kbit/s
> >>>>>>>>>>>>>>  05/13/11 |        1.12 GiB |   6.50 GiB |        7.62 GiB |  739.88 kbit/s
> >>>>>>>>>>>>>>  05/14/11 | 33554432.00 TiB |   4.10 GiB | 33554432.00 TiB | 3336.00 Tbit/s
> >>>>>>>>>>>>>>  05/15/11 |      778.85 MiB |   4.45 GiB |        5.21 GiB |  505.87 kbit/s
> >>>>>>>>>>>>>>  05/16/11 |        1.30 GiB |   7.37 GiB |        8.67 GiB |  842.06 kbit/s
> >>>>>>>>>>>>>>  05/17/11 |        1.38 GiB |   8.18 GiB |        9.56 GiB |  928.20 kbit/s
> >>>>>>>>>>>>>>  05/18/11 |        1.21 GiB |   6.83 GiB |        8.04 GiB |  780.32 kbit/s
> >>>>>>>>>>>>>>  05/19/11 |        1.03 GiB |   5.68 GiB |        6.72 GiB |  652.10 kbit/s
> >>>>>>>>>>>>>>  05/20/11 |        1.11 GiB |   5.18 GiB |        6.29 GiB |  610.67 kbit/s
> >>>>>>>>>>>>>>  05/21/11 | 16777216.00 TiB |   3.97 GiB | 16777216.00 TiB | 1668.00 Tbit/s
> >>>>>>>>>>>>>>  05/22/11 |      902.15 MiB |   6.74 GiB |        7.62 GiB |  739.58 kbit/s
> >>>>>>>>>>>>>>  05/23/11 |        1.28 GiB |  16.56 GiB |       17.84 GiB |    1.73 Mbit/s
> >>>>>>>>>>>>>>  05/24/11 |        1.60 GiB |  11.42 GiB |       13.02 GiB |    1.26 Mbit/s
> >>>>>>>>>>>>>>  05/25/11 |        1.47 GiB |   6.65 GiB |        8.12 GiB |  788.78 kbit/s
> >>>>>>>>>>>>>>  05/26/11 |        1.23 GiB |   7.40 GiB |        8.64 GiB |  838.46 kbit/s
> >>>>>>>>>>>>>>  05/27/11 |        1.43 GiB |   6.75 GiB |        8.19 GiB |  794.70 kbit/s
> >>>>>>>>>>>>>>  05/28/11 | 33554432.00 TiB |   5.44 GiB | 33554432.00 TiB | 3336.00 Tbit/s
> >>>>>>>>>>>>>>  05/29/11 |      855.65 MiB |   4.89 GiB |        5.72 GiB |  555.47 kbit/s
> >>>>>>>>>>>>>>  05/30/11 |        1.43 GiB |   9.20 GiB |       10.62 GiB |    1.03 Mbit/s
> >>>>>>>>>>>>>>  05/31/11 |        1.77 GiB |   9.52 GiB |       11.29 GiB |    1.10 Mbit/s
> >>>>>>>>>>>>>>  06/01/11 |        1.51 GiB |   9.43 GiB |       10.94 GiB |    1.06 Mbit/s
> >>>>>>>>>>>>>>  06/02/11 |      906.48 MiB |   5.90 GiB |        6.79 GiB |  658.85 kbit/s
> >>>>>>>>>>>>>>  06/03/11 |        2.36 GiB |   9.40 GiB |       11.77 GiB |    1.14 Mbit/s
> >>>>>>>>>>>>>>  06/04/11 | 16777216.00 TiB |   5.15 GiB | 16777216.00 TiB | 1668.00 Tbit/s
> >>>>>>>>>>>>>>  06/05/11 |      585.88 MiB |   2.30 GiB |        2.87 GiB |  417.64 kbit/s
> >>>>>>>>>>>>>>  ----------+-----------------+------------+-----------------+----------------
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>>  estimated |         877 MiB |   3.44 GiB |        4.30 GiB |
> >>>>>>>>>>>>>>
> 


-- 
Lars Schotte
@ Hana (F14)