[PATCH] recipes: update virtual_bridge_2_vlans_over_bond test
by Jiri Prochazka
The offload setting combination "gro on gso on tso on tx on rx off" was
missing; this patch adds it.
Signed-off-by: Jiri Prochazka <jprochaz(a)redhat.com>
---
.../phase1/virtual_bridge_2_vlans_over_bond.py | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
index e2a9449..59a2dad 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
@@ -32,11 +32,12 @@ g4.sync_resources(modules=["IcmpPing", "Icmp6Ping", "Netperf"])
# TESTS
# ------
-offloads = ["gro", "gso", "tso", "tx"]
-offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on")],
- [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on")],
- [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on")],
- [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off")]]
+offloads = ["gro", "gso", "tso", "tx", "rx"]
+offload_settings = [ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")],
+ [("gro", "off"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "on")],
+ [("gro", "on"), ("gso", "off"), ("tso", "off"), ("tx", "on"), ("rx", "on")],
+ [("gro", "on"), ("gso", "on"), ("tso", "off"), ("tx", "off"), ("rx", "on")],
+ [("gro", "on"), ("gso", "on"), ("tso", "on"), ("tx", "on"), ("rx", "off")]]
ipv = ctl.get_alias("ipv")
netperf_duration = int(ctl.get_alias("netperf_duration"))
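For reference, a minimal standalone sketch of how one such settings row is
turned into an ethtool command, mirroring the reset loop used elsewhere in
this recipe (the device name "eth0" is a placeholder):

    setting = [("gro", "on"), ("gso", "on"), ("tso", "on"),
               ("tx", "on"), ("rx", "off")]

    # build " gro on gso on tso on tx on rx off"
    dev_features = ""
    for offload, state in setting:
        dev_features += " %s %s" % (offload, state)

    print("ethtool -K eth0%s" % dev_features)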
--
2.4.11
[PATCH v2 00/26] [RFC] PyRecipes draft
by Jiri Prochazka
Hello everyone,
the first working draft implementation is complete. Please read this cover
letter, check the patches, and let me know what you like, what you don't like,
etc. Please note that this is not the final implementation.
NOTE - the patches are not rebased to the current HEAD; if you want to try
them, please reset your branch to commit 2398f93.
What doesn't work
=================
multi_match - didn't have the time and resources to check on physical machines;
on a virt setup it works, but ends with an exception because no res value is returned
loopbacks - can't be created
netem support - not implemented yet
Description of new mechanics
============================
Since we need to run TaskAPI methods to get the machine requirements from a
PyRecipe, methods like provision_machines and prepare_network are run after
the Python recipe is executed. The add_host and add_interface methods create
the machine requirements dict in the ControllerAPI object. When lnst.match()
is called, it runs the match algorithm with the mreq dict and, if a match is
found, prepares the network and binds the Machine and Interface objects to
their HostAPI and InterfaceAPI counterparts.
When everything is prepared, execution returns to the PyRecipe, where the
task phase of the PyRecipe is executed. The task phase is the same as before;
the only new method is breakpoint(), which will be useful for debugging, as
it allows the user to pause the execution at any point of the test.
The TaskAPI is now used right from the lnst module; all callable methods are
exported via the __init__.py file from the lnst/ dir.
The pyrecipes/ folder contains a few working examples you can try.
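For illustration, a minimal sketch of what a PyRecipe could look like under
this draft API; only add_host(), add_interface(), match() and breakpoint()
are taken from this cover letter, while the parameters and the ping command
are assumptions, not the final API:

    import lnst

    # requirements phase - builds the mreq dict in the ControllerAPI object
    m1 = lnst.add_host()
    m1_eth = m1.add_interface(label="net1")
    m2 = lnst.add_host()
    m2_eth = m2.add_interface(label="net1")

    # run the match algorithm and prepare the network
    lnst.match()

    # task phase - same as before
    m1.run("ping -c 3 %s" % m2_eth.get_ip())
    lnst.breakpoint()  # pause execution here for debugging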
TODO
====
* multimatch support
* config_only mode should be renamed and polished
* TaskAPI create_ovs() method
* support for loopbacks
* NetEm support
* polishing of the code
Jiri Prochazka (26):
__init__.py: draft for PyRecipes
lnst-ctl: draft for PyRecipes
NetTestController: remove RecipeParser import and related method calls
NetTestController: add run_mode attribute
NetTestController: remove obsolete methods
NetTestController: use mreq dict from PyRecipe
NetTestController: split provision_machines() in two methods
NetTestController: cleanup _prepare_interface()
NetTestController: remove resource sync from _prepare_machine()
NetTestController: add multi_match attribute to NetTestController
NetTestController: don't call abs_path on _recipe_path in __init__
NetTestController: rework alias handling
NetTestController: add prepare_test_env()
NetTestController: rework match_setup(), config_only_recipe(),
run_recipe(), _run_python_task()
NetTestController: add init_taskapi()
NetTestController: use Task name instead of module
Task: remove deprecated methods
Task: add default param for get_alias()
Task: define TaskAPI methods on global level
Task: add breakpoint()
Task: add add_host() and init_hosts()
Task: add interface handling TaskAPI methods
Task: HostAPI get_id now returns its generated id
Task: add HostAPI methods required by PyRecipes
Task: add match() and minor rework of ControllerAPI
PyRecipes: add example PyRecipes
lnst-ctl | 16 +-
lnst/Controller/NetTestController.py | 398 ++++++++++-------------------------
lnst/Controller/Task.py | 240 +++++++++++----------
lnst/__init__.py | 1 +
pyrecipes/3_vlans.py | 34 +++
pyrecipes/example.py | 33 +++
pyrecipes/ping_flood.py | 48 +++++
7 files changed, 363 insertions(+), 407 deletions(-)
create mode 100644 pyrecipes/3_vlans.py
create mode 100644 pyrecipes/example.py
create mode 100644 pyrecipes/ping_flood.py
--
2.4.11
[PATCH v3 0/7] Graceful kill for timed-out processes
by Jan Tluka
This patch set enhances the handling of command timeouts. We've noticed that,
for example, netperf run from the Netperf test module might not be able to
make a connection to the netperf server. The Linux default is to try sending
6 SYN packets before giving up on a TCP connection, so it can take up to
2 minutes before netperf terminates.
When the timeout happens for the Netperf module, the current approach is
simply to send SIGKILL to the process. That works, but it also causes all of
the command's output to be lost, so the user can't tell what happened to
netperf beyond the fact that it was killed.
To overcome this limitation I've added a graceful kill when a timeout occurs.
First the slave sends SIGINT to the process and waits up to 5 seconds for it
to end. If the process does not end, it is SIGKILLed.
I tried to test this as much as possible and I also ran the regression tests.
Two of the tests had to be modified due to the new reporting of the graceful
kill. Besides that, everything works fine.
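A minimal standalone sketch of the graceful-kill logic described above (not
the actual NetTestSlave code; the signal sequence and the 5-second wait
follow this cover letter):

    import os
    import signal
    import time

    def pid_exists(pid):
        try:
            os.kill(pid, 0)  # signal 0 only checks that the process exists
            return True
        except OSError:
            return False

    def graceful_kill(pid, timeout=5):
        os.kill(pid, signal.SIGINT)      # first, ask the process to end
        for _ in range(timeout * 10):    # poll for up to 'timeout' seconds
            if not pid_exists(pid):
                return                   # ended gracefully, outputs preserved
            time.sleep(0.1)
        os.kill(pid, signal.SIGKILL)     # did not end, kill it hard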
Jan Tluka (7):
NetTestCommand: add pid_exists method to NetTestCommand
NetTestCommand: log interrupt of foreground and background command
separately
NetTestCommand: added graceful kill flag
NetTestSlave: add graceful termination to kill_command
Machine: use graceful kill_command on process timeout
NetTestCommand: add missing join on interrupt
regression-tests: update tests to match graceful termination on
timeout
lnst/Common/NetTestCommand.py | 26 +++++++++++++++++++++++---
lnst/Controller/Machine.py | 9 +++++++--
lnst/Slave/NetTestSlave.py | 22 ++++++++++++++++++++--
regression-tests/tests/24/run.sh | 2 +-
regression-tests/tests/27/run.sh | 4 ++--
5 files changed, 53 insertions(+), 10 deletions(-)
--
2.4.11
[PATCH v2 1/5] Netperf: add netperf debug option
by Jan Tluka
The Netperf test module now takes an optional parameter 'nperf_debug'. Its
value is a number giving the verbosity level, e.g.:
debug=0 => netperf ... (no debug)
debug=1 => netperf -d ...
debug=2 => netperf -dd ...
debug=3 => netperf -ddd ... (the maximum level of verbosity)
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
test_modules/Netperf.py | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index 7b9d7f7..ada3c36 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -11,7 +11,7 @@ import errno
import re
from lnst.Common.TestsCommon import TestGeneric
from lnst.Common.ShellProcess import ShellProcess
-from lnst.Common.Utils import std_deviation, is_installed
+from lnst.Common.Utils import std_deviation, is_installed, int_it
class Netperf(TestGeneric):
@@ -39,6 +39,7 @@ class Netperf(TestGeneric):
self._cpu_util = self.get_opt("cpu_util")
self._num_parallel = int(self.get_opt("num_parallel", default=1))
self._runs = self.get_opt("runs", default=1)
+ self._debug = int_it(self.get_opt("debug", default=0))
self._threshold = self._parse_threshold(self.get_opt("threshold"))
self._threshold_deviation = self._parse_threshold(
@@ -105,6 +106,9 @@ class Netperf(TestGeneric):
elif self._cpu_util.lower() == "remote":
cmd += " -C"
+ if self._debug > 0:
+ cmd += " -%s" % ('d' * self._debug)
+
if self._netperf_opts is not None:
"""
custom options for netperf
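A standalone illustration of how the verbosity level maps to the netperf
command line, mirroring the one-line change above:

    for debug in range(4):
        cmd = "netperf"
        if debug > 0:
            cmd += " -%s" % ("d" * debug)  # one 'd' per verbosity level
        print(cmd)  # netperf, netperf -d, netperf -dd, netperf -ddd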
--
2.4.11
[PATCH v2 0/7] Graceful kill for timed-out processes
by Jan Tluka
This patch set enhances the handling of command timeouts. We've noticed that,
for example, netperf run from the Netperf test module might not be able to
make a connection to the netperf server. The Linux default is to try sending
6 SYN packets before giving up on a TCP connection, so it can take up to
2 minutes before netperf terminates.
When the timeout happens for the Netperf module, the current approach is
simply to send SIGKILL to the process. That works, but it also causes all of
the command's output to be lost, so the user can't tell what happened to
netperf beyond the fact that it was killed.
To overcome this limitation I've added a graceful kill when a timeout occurs.
First the slave sends SIGINT to the process and waits up to 5 seconds for it
to end. If the process does not end, it is SIGKILLed.
I tried to test this as much as possible and I also ran the regression tests.
Two of the tests had to be modified due to the new reporting of the graceful
kill. Besides that, everything works fine.
Jan Tluka (7):
NetTestCommand: add pid_exists method to NetTestCommand
NetTestCommand: log interrupt of foreground and background command
separately
NetTestCommand: added graceful kill flag
NetTestSlave: add graceful termination to kill_command
Machine: use graceful kill_command on process timeout
NetTestCommand: add missing join on interrupt
regression-tests: update tests to match graceful termination on
timeout
lnst/Common/NetTestCommand.py | 26 +++++++++++++++++++++++---
lnst/Controller/Machine.py | 9 +++++++--
lnst/Slave/NetTestSlave.py | 22 ++++++++++++++++++++--
regression-tests/tests/24/run.sh | 2 +-
regression-tests/tests/27/run.sh | 4 ++--
5 files changed, 53 insertions(+), 10 deletions(-)
--
2.4.11
[PATCH 0/7] Graceful kill for timed-out processes
by Jan Tluka
This patch set enhances the handling of command timeouts. We've noticed that,
for example, netperf run from the Netperf test module might not be able to
make a connection to the netperf server. The Linux default is to try sending
6 SYN packets before giving up on a TCP connection, so it can take up to
2 minutes before netperf terminates.
When the timeout happens for the Netperf module, the current approach is
simply to send SIGKILL to the process. That works, but it also causes all of
the command's output to be lost, so the user can't tell what happened to
netperf beyond the fact that it was killed.
To overcome this limitation I've added a graceful kill when a timeout occurs.
First the slave sends SIGINT to the process and waits up to 5 seconds for it
to end. If the process does not end, it is SIGKILLed.
I tried to test this as much as possible and I also ran the regression tests.
Two of the tests had to be modified due to the new reporting of the graceful
kill. Besides that, everything works fine.
Jan Tluka (7):
NetTestCommand: add pid_exists method to NetTestCommand
NetTestCommand: log interrupt of foreground and background command
separately
NetTestCommand: added graceful kill flag
NetTestSlave: add graceful termination to kill_command
Machine: use graceful kill_command on process timeout
NetTestCommand: add missing join on interrupt
regression-tests: update tests to match graceful termination on
timeout
lnst/Common/NetTestCommand.py | 26 +++++++++++++++++++++++---
lnst/Controller/Machine.py | 9 +++++++--
lnst/Slave/NetTestSlave.py | 22 ++++++++++++++++++++--
regression-tests/tests/24/run.sh | 2 +-
regression-tests/tests/27/run.sh | 4 ++--
5 files changed, 53 insertions(+), 10 deletions(-)
--
2.4.11
[PATCH 0/5] [RFC] PyRecipes draft
by Jiri Prochazka
Hello everyone,
the first working draft implementation is complete. Please read this cover
letter, check the patches, and let me know what you like, what you don't like,
etc. Please note that this is not the final implementation.
NOTE - the patches are not rebased to the current HEAD; if you want to try
them, please reset your branch to commit 2398f93.
What doesn't work
=================
multi_match - didn't have the time and resources to check on physical machines;
on a virt setup it works, but ends with an exception because no res value is returned
loopbacks - can't be created
netem support - not implemented yet
Description of new mechanics
============================
Since we need to run TaskAPI methods to get the machine requirements from a
PyRecipe, methods like provision_machines and prepare_network are run after
the Python recipe is executed. The add_host and add_interface methods create
the machine requirements dict in the ControllerAPI object. When lnst.match()
is called, it runs the match algorithm with the mreq dict and, if a match is
found, prepares the network and binds the Machine and Interface objects to
their HostAPI and InterfaceAPI counterparts.
When everything is prepared, execution returns to the PyRecipe, where the
task phase of the PyRecipe is executed. The task phase is the same as before;
the only new method is breakpoint(), which will be useful for debugging, as
it allows the user to pause the execution at any point of the test.
The TaskAPI is now used right from the lnst module; all callable methods are
exported via the __init__.py file from the lnst/ dir.
The pyrecipes/ folder contains a few working examples you can try.
TODO
====
* multimatch support
* config_only mode should be renamed and polished
* TaskAPI create_ovs() method
* support for loopbacks
* NetEm support
* polishing of the code
Jiri Prochazka (5):
__init__.py: draft for PyRecipes
lnst-ctl: draft for PyRecipes
NetTestController: draft for PyRecipes
Task: draft for PyRecipes
PyRecipes: add example PyRecipes
lnst-ctl | 16 +-
lnst/Controller/NetTestController.py | 396 ++++++++++-------------------------
lnst/Controller/Task.py | 240 +++++++++++----------
lnst/__init__.py | 1 +
pyrecipes/3_vlans.py | 34 +++
pyrecipes/example.py | 33 +++
pyrecipes/ping_flood.py | 48 +++++
7 files changed, 362 insertions(+), 406 deletions(-)
create mode 100644 pyrecipes/3_vlans.py
create mode 100644 pyrecipes/example.py
create mode 100644 pyrecipes/ping_flood.py
--
2.4.11
[PATCH 1/5] Netperf: add netperf debug option
by Jan Tluka
The Netperf test module now takes an optional parameter 'nperf_debug'. Its
value is a number giving the verbosity level, e.g.:
debug=0 => netperf ... (no debug)
debug=1 => netperf -d ...
debug=2 => netperf -dd ...
debug=3 => netperf -ddd ... (the maximum level of verbosity)
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
test_modules/Netperf.py | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index 7b9d7f7..ada3c36 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -11,7 +11,7 @@ import errno
import re
from lnst.Common.TestsCommon import TestGeneric
from lnst.Common.ShellProcess import ShellProcess
-from lnst.Common.Utils import std_deviation, is_installed
+from lnst.Common.Utils import std_deviation, is_installed, int_it
class Netperf(TestGeneric):
@@ -39,6 +39,7 @@ class Netperf(TestGeneric):
self._cpu_util = self.get_opt("cpu_util")
self._num_parallel = int(self.get_opt("num_parallel", default=1))
self._runs = self.get_opt("runs", default=1)
+ self._debug = int_it(self.get_opt("debug", default=0))
self._threshold = self._parse_threshold(self.get_opt("threshold"))
self._threshold_deviation = self._parse_threshold(
@@ -105,6 +106,9 @@ class Netperf(TestGeneric):
elif self._cpu_util.lower() == "remote":
cmd += " -C"
+ if self._debug > 0:
+ cmd += " -%s" % ('d' * self._debug)
+
if self._netperf_opts is not None:
"""
custom options for netperf
--
2.4.11
[PATCH] recipes: fix resetting of offloads in virtual_bridge_2_vlans_over_bond
by Jan Tluka
At the end of the test we turn all of the tested offloads back on. There was
a typo which caused only the last offload in the set to be reset.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
index 6b79dff..e2a9449 100644
--- a/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
+++ b/recipes/regression_tests/phase1/virtual_bridge_2_vlans_over_bond.py
@@ -378,7 +378,7 @@ for setting in offload_settings:
#reset offload states
dev_features = ""
for offload in offloads:
- dev_features = " %s %s" % (offload, "on")
+ dev_features += " %s %s" % (offload, "on")
h1.run("ethtool -K %s %s" % (h1_nic1.get_devname(), dev_features))
h1.run("ethtool -K %s %s" % (h1_nic2.get_devname(), dev_features))
h2.run("ethtool -K %s %s" % (h2_nic1.get_devname(), dev_features))
--
2.4.11
[PATCH] Netperf: fix confidence parsing when throughput measurement
is 0
by Jan Tluka
Netperf does not properly handle the confidence calculation when the measured
throughput is 0, e.g. when a firewall blocks the data stream. When this
happens, netperf reports -nan for the confidence level instead of a
reasonable number. This patch adds an exception check while parsing the
confidence levels.
Fixes #170.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
test_modules/Netperf.py | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index 7b9d7f7..5f00d54 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -223,7 +223,14 @@ class Netperf(TestGeneric):
def _parse_confidence_omni(self, output):
pattern_throughput_confid = "THROUGHPUT_CONFID=([-]?\d+\.\d+)"
pattern_confidence_level = "CONFIDENCE_LEVEL=(\d+)"
- throughput_confid = float(re.search(pattern_throughput_confid, output).group(1))
+ try:
+ throughput_confid = float(re.search(pattern_throughput_confid, output).group(1))
+ except AttributeError:
+ # when netperf measures throughput=0 it tries to divide by 0 and
+ # prints THROUGHPUT_CONFID=-nan
+ logging.warning("Could not parse THROUGHPUT_CONFID")
+ return (0, 0.0)
+
confidence_level = int(re.search(pattern_confidence_level, output).group(1))
real_confidence = (confidence_level, throughput_confid/2)
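A standalone demonstration of why the try/except is needed: the numeric
pattern does not match "-nan", so re.search() returns None and calling
.group(1) on it raises AttributeError:

    import re

    pattern = "THROUGHPUT_CONFID=([-]?\d+\.\d+)"
    m = re.search(pattern, "THROUGHPUT_CONFID=-nan")
    print(m)  # None - "-nan" is not matched by the numeric pattern
    # m.group(1) would raise AttributeError, which the patch now catches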
--
2.4.11