[PATCH] Python2 to Python3 conversion
by Jozef Urbanovsky
Main changes:
================================================
- Print function - PEP 3105[1]
- Dictionary design, iteration and the related built-in
functions - PEP 469[2], PEP 3106[3]
- Division operator - PEP 238[4]
- Interruption and automatic retry of system calls - PEP 475[5]
- socket.accept() and interrupts - Docs Python3 Socket[6]
- Universal newlines - PEP 278[7], PEP 3116[8], Docs Python3
Subprocess[9]
- Binary data and strings - Docs Python3 porting[10]
- Shebangs modified to invoke python3 through /usr/bin/env
- Fixed library imports to reflect modules renamed or reorganized in
the transition to python3:
- multiprocessing
- xmlrpc
- urllib
- thread
- pickle
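For illustration only (not part of the patch), the core language
differences listed above reduce to:
    d = {"a": 1}
    print("value: %d" % d["a"])    # print is a function (PEP 3105)
    for key, val in d.items():     # iteritems()/itervalues() are gone (PEP 469, 3106)
        print(key, val)
    print(7 // 2)                  # floor division; plain / is true division (PEP 238)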
InterfaceManager.py
================================================
- Byte string vs. unicode/ASCII string conflicts, fixed mostly by
using byte strings throughout the project
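e.g. (a sketch of the convention, not the actual project code):
    ifname = "eth0".encode("ascii")   # keep byte strings internally
    label = ifname.decode("ascii")    # decode only where text is needed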
NetTestSlave.py
================================================
- Signal 2 (SIGINT) - behavior modified since Python 3.5
- Due to the changed handling of interrupted system calls, signal
reception and handling in the main slave loop had to be reworked
- The way the interrupt is raised, caught and handled was restructured
to match the behavior of Python 3.5 and higher
- The run method is no longer terminated by the "finished" parameter;
instead an exception breaks the endless loop
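A minimal sketch of the resulting pattern (illustrative only; the handler
and the blocking call are placeholders, not the actual slave code):
    import signal

    class BreakLoop(Exception):
        pass

    def handler(signum, frame):
        raise BreakLoop()        # the exception, not a flag, ends the loop

    signal.signal(signal.SIGINT, handler)

    while True:                  # endless main loop, no "finished" parameter
        try:
            signal.pause()       # stand-in for the slave's blocking call; since
                                 # PEP 475 an interrupted call is retried unless
                                 # the handler raises
        except BreakLoop:
            break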
SecureSocket.py
================================================
- Byte string vs. unicode/ASCII string conflicts, fixed by using bytes
internally and encoding transmitted messages to ASCII
- Fixed multiple issues caused by changes to standard Python functions,
such as hex encoding, the ordinal value of characters and the length
of different types
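For example, with bytes handled internally (a simplified sketch, not the
exact SecureSocket code):
    data = b"payload"
    pad_length = 3
    pad_char = bytes([pad_length])             # replaces ("%02x" % n).decode("hex")
    padded = data + pad_length * pad_char
    assert padded[-1] == pad_length            # indexing bytes yields an int in Python 3
    prefix = str(len(padded)).encode("ascii")  # lengths sent as ASCII-encoded bytes
    message = prefix + b" " + padded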
NetTestCommand.py
================================================
- name_start_char is now a plain (unicode) str literal, so each \u
escape sequence has to be escaped twice so that the regex engine,
rather than the string literal, interprets it
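Reduced illustration of the doubled escaping (not the full character class):
    import re
    # Python 2: u"\u0370-\u037D" resolved the escapes in the unicode literal.
    # Python 3: the backslashes are doubled so the regex engine interprets
    # the \uXXXX escapes instead of the string literal parser.
    name_start_char = ":A-Z_a-z\\u0370-\\u037D"
    assert re.match("[%s]+$" % name_start_char, "abc", re.UNICODE)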
ExecCmd.py
================================================
- Popen is called with universal_newlines=True
- This provides an abstraction for reading text output with translated
newlines regardless of the system used and its default encoding
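i.e. (simplified; the command is only an example):
    import subprocess
    subp = subprocess.Popen("echo hello", shell=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True)  # text mode, translated newlines
    data_stdout, data_stderr = subp.communicate()     # str objects, not bytes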
XmlParser.py
================================================
- The etree.tostring function now generates a byte string by default
- Unicode output has to be requested explicitly through the encoding
parameter
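For example (standalone sketch):
    from lxml import etree
    elem = etree.fromstring("<a>text</a>")
    etree.tostring(elem)                      # b'<a>text</a>' - bytes by default
    etree.tostring(elem, encoding="unicode")  # '<a>text</a>'  - str when requested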
NmConfigDevice.py
================================================
- Conversion of a byte array to an integer value was changed; the
ord-based approach no longer yields the needed plain integer value
- The byte array has to be converted to an integer with int.from_bytes
- The reverse conversion does not need to account for a byte array, so
converting the IP address twice is redundant
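Sketch of the conversion (example address, not the project code):
    import socket
    packed = socket.inet_aton("192.168.122.2")       # 4-byte IPv4 representation
    value = int.from_bytes(packed, byteorder="big")  # replaces the ord-based conversion
    back = socket.inet_ntoa(value.to_bytes(4, byteorder="big"))
    assert back == "192.168.122.2"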
ShellProcess.py
================================================
- Decode the file stream to a unicode string instead of a byte string,
as os.read returns a byte string by default in Python 3
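i.e. (sketch):
    import os
    r, w = os.pipe()
    os.write(w, b"line\n")
    new_data = os.read(r, 1024).decode()  # os.read() returns bytes; decode to str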
Netperf.py
================================================
- Changed the evaluation of results to account for changes made to the
base types in Python 3
- Dictionaries can no longer be compared with integers directly; they
are compared by their respective lengths and values
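In Python 3 an ordering comparison between a dict and an int raises
TypeError, so the check has to be explicit, e.g. (field names are
illustrative, not the module's actual data):
    results = {"rate": 940.2}
    threshold = 1
    # Python 2 silently allowed `results > threshold`; Python 3 raises TypeError,
    # so compare the length and the value of interest explicitly:
    passed = len(results) > 0 and results["rate"] > threshold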
Machine.py
================================================
- The behavior of the list comprehension (dict items comparison) changed
in Python 3
- New function to parse the addresses queried from slaves - code adapted
from the next branch
- get_ip_addr returned a dictionary of IP address attributes - fixed to
return only the IP address portion by reading the addr key from the
dictionary
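Sketch of the changed return value (hypothetical data):
    ips = [{"addr": "192.168.122.10", "prefixlen": 24}]
    ip = ips[0].get("addr")  # only the address portion, not the whole dictionary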
================================================
[1] - PEP 3105
https://www.python.org/dev/peps/pep-3105/
[2] - PEP 469
https://www.python.org/dev/peps/pep-0469/
[3] - PEP 3106
https://www.python.org/dev/peps/pep-3106/
[4] - PEP 238
https://legacy.python.org/dev/peps/pep-0238/
[5] - PEP 475
https://www.python.org/dev/peps/pep-0475/
[6] - Docs Python3 Socket
https://docs.python.org/3/library/socket.html#socket.socket.accept
[7] - PEP 278
https://www.python.org/dev/peps/pep-0278/
[8] - PEP 3116
https://www.python.org/dev/peps/pep-3116/
[9] - Docs Python3 Subprocess
https://docs.python.org/3/library/subprocess.html
[10] - Docs Python3 Porting
https://docs.python.org/3/howto/pyporting.html#text-versus-binary-data
Signed-off-by: Jozef Urbanovsky <jurbanov(a)redhat.com>
---
lnst-ctl | 88 +++++++++++-----------
lnst-pool-wizard | 6 +-
lnst-slave | 20 ++---
lnst/Common/Colours.py | 12 +--
lnst/Common/Config.py | 8 +-
lnst/Common/ConnectionHandler.py | 2 +-
lnst/Common/ExecCmd.py | 5 +-
lnst/Common/LoggingHandler.py | 4 +-
lnst/Common/Logs.py | 6 +-
lnst/Common/NetTestCommand.py | 18 ++---
lnst/Common/NetUtils.py | 2 +-
lnst/Common/Path.py | 5 +-
lnst/Common/ProcessManager.py | 14 ++--
lnst/Common/ResourceCache.py | 4 +-
lnst/Common/SecureSocket.py | 71 +++++++++--------
lnst/Common/ShellProcess.py | 2 +-
lnst/Common/Utils.py | 4 +-
lnst/Controller/Machine.py | 30 +++++---
lnst/Controller/NetTestController.py | 36 ++++-----
lnst/Controller/NetTestResultSerializer.py | 8 +-
lnst/Controller/RecipeParser.py | 16 ++--
lnst/Controller/SlavePool.py | 48 ++++++------
lnst/Controller/Task.py | 16 ++--
lnst/Controller/Wizard.py | 36 ++++-----
lnst/Controller/XmlParser.py | 4 +-
lnst/Controller/XmlProcessing.py | 12 +--
lnst/Controller/XmlTemplates.py | 6 +-
lnst/Slave/InterfaceManager.py | 40 +++++-----
lnst/Slave/NetConfigDevice.py | 10 +--
lnst/Slave/NetTestSlave.py | 66 ++++++++--------
lnst/Slave/NmConfigDevice.py | 5 +-
setup.py | 4 +-
test_modules/Multicast.py | 2 +-
test_modules/Netperf.py | 2 +-
test_modules/TRexClient.py | 4 +-
35 files changed, 317 insertions(+), 299 deletions(-)
diff --git a/lnst-ctl b/lnst-ctl
index d61353d..ed13760 100755
--- a/lnst-ctl
+++ b/lnst-ctl
@@ -1,4 +1,4 @@
-#! /usr/bin/env python2
+#! /usr/bin/env python3
"""
Net test controller
@@ -34,41 +34,41 @@ def usage(retval=0):
"""
Print usage of this app
"""
- print "Usage: %s [OPTIONS...] ACTION [RECIPES...]" % sys.argv[0]
- print ""
- print "ACTION = [ run | config_only | deconfigure | match_setup | " \
- "list_pools]"
- print ""
- print "OPTIONS"
- print " -A, --override-alias name=value define top-level alias that " \
- "will override any other definitions in the recipe"
- print " -a, --define-alias name=value define top-level alias"
- print " -c, --config=FILE load additional config file"
- print " -C, --config-override=FILE reset config defaults and load " \
- "the following config file"
- print " -d, --debug emit debugging messages"
- print " --dump-config dumps the join of all loaded " \
- "configuration files on stdout and exits"
- print " -v, --verbose verbose version of list_pools " \
- "command"
- print " -h, --help print this message"
- print " -m, --no-colours disable coloured terminal output"
- print " -o, --disable-pool-checks don't check the availability of " \
- "machines in the pool"
- print " -p, --packet-capture capture and log all ongoing " \
- "network communication during the test"
- print " --pools=NAME[,...] restricts which pools to use "\
- "for matching, value can be a comma separated list of values or"
- print " --pools=PATH a single path to a pool directory"
- print " -r, --reduce-sync reduces resource synchronization "\
- "for python tasks, see documentation"
- print " -s, --xslt-url=URL URL to a XSLT document that will "\
+ print("Usage: %s [OPTIONS...] ACTION [RECIPES...]" % sys.argv[0])
+ print("")
+ print("ACTION = [ run | config_only | deconfigure | match_setup | " \
+ "list_pools]")
+ print("")
+ print("OPTIONS")
+ print(" -A, --override-alias name=value define top-level alias that " \
+ "will override any other definitions in the recipe")
+ print(" -a, --define-alias name=value define top-level alias")
+ print(" -c, --config=FILE load additional config file")
+ print(" -C, --config-override=FILE reset config defaults and load " \
+ "the following config file")
+ print(" -d, --debug emit debugging messages")
+ print(" --dump-config dumps the join of all loaded " \
+ "configuration files on stdout and exits")
+ print(" -v, --verbose verbose version of list_pools " \
+ "command")
+ print(" -h, --help print this message")
+ print(" -m, --no-colours disable coloured terminal output")
+ print(" -o, --disable-pool-checks don't check the availability of " \
+ "machines in the pool")
+ print(" -p, --packet-capture capture and log all ongoing " \
+ "network communication during the test")
+ print(" --pools=NAME[,...] restricts which pools to use "\
+ "for matching, value can be a comma separated list of values or")
+ print(" --pools=PATH a single path to a pool directory")
+ print(" -r, --reduce-sync reduces resource synchronization "\
+ "for python tasks, see documentation")
+ print(" -s, --xslt-url=URL URL to a XSLT document that will "\
"be used when transforming the xml result file, only useful "\
- "when -t is used as well"
- print " -t, --html=FILE generate a formatted result html"
- print " -u, --multi-match run each recipe with every "\
- "pool match possible"
- print " -x, --result=FILE file to write xml_result"
+ "when -t is used as well")
+ print(" -t, --html=FILE generate a formatted result html")
+ print(" -u, --multi-match run each recipe with every "\
+ "pool match possible")
+ print(" -x, --result=FILE file to write xml_result")
sys.exit(retval)
def list_pools(restrict_pools, verbose):
@@ -90,12 +90,12 @@ def list_pools(restrict_pools, verbose):
out = ""
# iterate over all pools
sp_pools = sp.get_pools()
- for pool_name, pool_content in sp_pools.iteritems():
+ for pool_name, pool_content in sp_pools.items():
out += "Pool: %s (%s)\n" % (pool_name, pools[pool_name])
# verbose output
if verbose:
# iterate over all slave machine cfgs
- for filename, pool in pool_content.iteritems():
+ for filename, pool in pool_content.items():
# print in human-readable format
out += 3*" " + filename + ".xml\n"
out += 5*" " + "params:\n"
@@ -113,9 +113,9 @@ def list_pools(restrict_pools, verbose):
out += "\n"
# print wihout newlines on the end of string
if verbose:
- print out[:-2]
+ print(out[:-2])
else:
- print out[:-1]
+ print(out[:-1])
def store_alias(alias_def, aliases_dict):
@@ -263,7 +263,7 @@ def main():
]
)
except getopt.GetoptError as err:
- print str(err)
+ print(str(err))
usage(RETVAL_ERR)
lnst_config.controller_init()
@@ -308,16 +308,16 @@ def main():
usage(RETVAL_PASS)
elif opt in ("-c", "--config"):
if not os.path.isfile(arg):
- print "File '%s' doesn't exist!" % arg
+ print("File '%s' doesn't exist!" % arg)
usage(RETVAL_ERR)
else:
lnst_config.load_config(arg)
elif opt in ("-C", "--config-override"):
if not os.path.isfile(arg):
- print "File '%s' doesn't exist!" % arg
+ print("File '%s' doesn't exist!" % arg)
usage(RETVAL_ERR)
else:
- print >> sys.stderr, "Reloading config defaults!"
+ print("Reloading config defaults!", file=sys.stderr)
lnst_config.controller_init()
lnst_config.load_config(arg)
elif opt in ("-x", "--result"):
@@ -351,7 +351,7 @@ def main():
lnst_config.set_option("environment", "xslt_url", xslt_url)
if dump_config:
- print lnst_config.dump_config()
+ print(lnst_config.dump_config())
return RETVAL_PASS
if coloured_output:
diff --git a/lnst-pool-wizard b/lnst-pool-wizard
index ea2849b..f10635e 100755
--- a/lnst-pool-wizard
+++ b/lnst-pool-wizard
@@ -1,4 +1,4 @@
-#! /usr/bin/env python2
+#! /usr/bin/env python3
"""
Machine pool wizard
@@ -19,7 +19,7 @@ RETVAL_ERR = 1
def help(retval=0):
- print "Usage:\n"\
+ print("Usage:\n"\
" lnst-pool-wizard [mode] [hostname[:port]]\n"\
"\n"\
"Modes:\n"\
@@ -34,7 +34,7 @@ def help(retval=0):
" lnst-pool-wizard hostname1:1234 hostname2\n"\
" lnst-pool-wizard --noninteractive 192.168.122.2\n"\
" lnst-pool-wizard -n 192.168.122.2:8888 192.168.122.4\n"\
- " lnst-pool-wizard -p \".pool/\" -n 192.168.1.1:8877 192.168.122.4"
+ " lnst-pool-wizard -p \".pool/\" -n 192.168.1.1:8877 192.168.122.4")
sys.exit(retval)
diff --git a/lnst-slave b/lnst-slave
index ebb4cd3..0f77873 100755
--- a/lnst-slave
+++ b/lnst-slave
@@ -1,4 +1,4 @@
-#! /usr/bin/env python2
+#! /usr/bin/env python3
"""
Net test slave
@@ -25,14 +25,14 @@ def usage():
"""
Print usage of this app
"""
- print "Usage: %s [OPTION...]" % sys.argv[0]
- print ""
- print " -d, --debug emit debugging messages"
- print " -h, --help print this message"
- print " -e, --daemonize go to background after init"
- print " -i, --pidfile file to write daemonized process pid"
- print " -m, --no-colours disable coloured terminal output"
- print " -p, --port xmlrpc port to listen on"
+ print("Usage: %s [OPTION...]" % sys.argv[0])
+ print("")
+ print(" -d, --debug emit debugging messages")
+ print(" -h, --help print this message")
+ print(" -e, --daemonize go to background after init")
+ print(" -i, --pidfile file to write daemonized process pid")
+ print(" -m, --no-colours disable coloured terminal output")
+ print(" -p, --port xmlrpc port to listen on")
sys.exit()
def main():
@@ -46,7 +46,7 @@ def main():
["debug", "help", "daemonize", "pidfile=", "port=", "no-colours"]
)[0]
except getopt.GetoptError as err:
- print str(err)
+ print(str(err))
usage()
sys.exit()
diff --git a/lnst/Common/Colours.py b/lnst/Common/Colours.py
index e57794e..a00e2d2 100644
--- a/lnst/Common/Colours.py
+++ b/lnst/Common/Colours.py
@@ -52,7 +52,7 @@ def name_to_fg_colour(name):
""" Convert name to foreground colour code.
Returns None if the colour name isn't supported. """
- if not COLOURS.has_key(name):
+ if name not in COLOURS:
return None
return COLOURS[name]
@@ -61,7 +61,7 @@ def name_to_bg_colour(name):
""" Convert name to background color code.
Returns None if the colour name isn't supported. """
- if not COLOURS.has_key(name):
+ if name not in COLOURS:
return None
return COLOURS[name] + 10
@@ -136,7 +136,7 @@ def decorate_string(string, fg_colour=None, bg_colour=None, bold=False):
raise Exception(msg)
else:
# Standard definition
- if colour_def in COLOURS.keys():
+ if colour_def in list(COLOURS.keys()):
if fg:
colour = name_to_fg_colour(colour_def)
else:
@@ -165,7 +165,7 @@ def strip_colours(text):
def get_preset_conf(preset_name):
preset = PRESETS[preset_name]
- return map(lambda s: "default" if s == None else str(s), preset)
+ return ["default" if s == None else str(s) for s in preset]
def load_presets_from_config(lnst_config):
for preset_name in PRESETS:
@@ -178,12 +178,12 @@ def load_presets_from_config(lnst_config):
if fg == "default":
fg = None
- elif not re.match(extended_re, fg) and fg not in COLOURS.keys():
+ elif not re.match(extended_re, fg) and fg not in list(COLOURS.keys()):
raise Exception("Colour '%s' not supported" % fg)
if bg == "default":
bg = None
- elif not re.match(extended_re, bg) and bg not in COLOURS.keys():
+ elif not re.match(extended_re, bg) and bg not in list(COLOURS.keys()):
raise Exception("Colour '%s' not supported" % bg)
PRESETS[preset_name] = [fg, bg, bool_it(bf)]
diff --git a/lnst/Common/Config.py b/lnst/Common/Config.py
index d47f3e2..3aa2820 100644
--- a/lnst/Common/Config.py
+++ b/lnst/Common/Config.py
@@ -221,7 +221,7 @@ class Config():
raise ConfigError(msg)
res = {}
- for opt_name, opt in self._options[section].items():
+ for opt_name, opt in list(self._options[section].items()):
res[opt_name] = opt["value"]
return res
@@ -283,7 +283,7 @@ class Config():
'''Parse and load the config file'''
exp_path = os.path.expanduser(path)
abs_path = os.path.abspath(exp_path)
- print >> sys.stderr, "Loading config file '%s'" % abs_path
+ print("Loading config file '%s'" % abs_path, file=sys.stderr)
sections = self._parse_file(abs_path)
self.handleSections(sections, abs_path)
@@ -338,7 +338,7 @@ class Config():
def get_pools(self):
pools = {}
- for pool_name, pool in self._options["pools"].items():
+ for pool_name, pool in list(self._options["pools"].items()):
pools[pool_name] = pool["value"]
return pools
@@ -349,7 +349,7 @@ class Config():
return None
def _find_option_by_name(self, section, opt_name):
- for option in section.itervalues():
+ for option in section.values():
if option["name"] == opt_name:
return option
return None
diff --git a/lnst/Common/ConnectionHandler.py b/lnst/Common/ConnectionHandler.py
index 5d3170a..3856917 100644
--- a/lnst/Common/ConnectionHandler.py
+++ b/lnst/Common/ConnectionHandler.py
@@ -13,7 +13,7 @@ olichtne(a)redhat.com (Ondrej Lichtner)
import select
import socket
-from _multiprocessing import Connection
+from multiprocessing.connection import Connection
from pyroute2 import IPRSocket
from lnst.Common.SecureSocket import SecureSocket, SecSocketException
diff --git a/lnst/Common/ExecCmd.py b/lnst/Common/ExecCmd.py
index 24715ae..5327f11 100644
--- a/lnst/Common/ExecCmd.py
+++ b/lnst/Common/ExecCmd.py
@@ -12,6 +12,7 @@ jpirko(a)redhat.com (Jiri Pirko)
import logging
import subprocess
+import sys
class ExecCmdFail(Exception):
_cmd = None
@@ -55,9 +56,11 @@ def exec_cmd(cmd, die_on_err=True, log_outputs=True, report_stderr=False, json=F
cmd = cmd.rstrip(" ")
logging.debug("Executing: \"%s\"" % cmd)
subp = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
- stderr=subprocess.PIPE, close_fds=True)
+ stderr=subprocess.PIPE, close_fds=True, universal_newlines=True)
(data_stdout, data_stderr) = subp.communicate()
+ #data_stdout = data_stdout.decode(sys.getfilesystemencoding())
+ #data_stderr = data_stderr.decode(sys.getfilesystemencoding())
'''
When we should not die on error, do not print anything and let
the caller to decide what to do.
diff --git a/lnst/Common/LoggingHandler.py b/lnst/Common/LoggingHandler.py
index a236922..6acc9bc 100644
--- a/lnst/Common/LoggingHandler.py
+++ b/lnst/Common/LoggingHandler.py
@@ -16,7 +16,7 @@ olichtne(a)redhat.com (Ondrej Lichtner)
import pickle
import logging
-import xmlrpclib
+import xmlrpc.client
from lnst.Common.ConnectionHandler import send_data
class LogBuffer(logging.Handler):
@@ -39,7 +39,7 @@ class LogBuffer(logging.Handler):
d['args'] = None
d['exc_info'] = None
s = pickle.dumps(d, 1)
- return xmlrpclib.Binary(s)
+ return xmlrpc.client.Binary(s)
def add_buffer(self, buf):
for i in buf:
diff --git a/lnst/Common/Logs.py b/lnst/Common/Logs.py
index 36e0580..77c0f14 100644
--- a/lnst/Common/Logs.py
+++ b/lnst/Common/Logs.py
@@ -106,7 +106,7 @@ class LoggingCtl:
logger = logging.getLogger('')
for i in list(logger.handlers):
logger.removeHandler(i)
- for key, logger in logging.Logger.manager.loggerDict.iteritems():
+ for key, logger in list(logging.Logger.manager.loggerDict.items()):
if type(logger) != type(logging.Logger):
continue
for i in list(logger.handlers):
@@ -218,7 +218,7 @@ class LoggingCtl:
logger = logging.getLogger()
logger.addHandler(self.transmit_handler)
- for k in self.slaves.keys():
+ for k in list(self.slaves.keys()):
self.remove_slave(k)
def cancel_connection(self):
@@ -230,7 +230,7 @@ class LoggingCtl:
def disable_logging(self):
self.cancel_connection()
- for s in self.slaves.keys():
+ for s in list(self.slaves.keys()):
self.remove_slave(s)
self.unset_recipe()
diff --git a/lnst/Common/NetTestCommand.py b/lnst/Common/NetTestCommand.py
index b894363..79b5893 100644
--- a/lnst/Common/NetTestCommand.py
+++ b/lnst/Common/NetTestCommand.py
@@ -325,10 +325,10 @@ class NetTestCommandGeneric(object):
formatted_data = ""
if res_data:
max_key_len = 0
- for key in res_data.keys():
+ for key in list(res_data.keys()):
if len(key) > max_key_len:
max_key_len = len(key)
- for key, value in res_data.iteritems():
+ for key, value in list(res_data.items()):
if type(value) == dict:
formatted_data += level*4*" " + str(key) + ":\n"
formatted_data += self.format_res_data(value, level+1)
@@ -347,13 +347,13 @@ class NetTestCommandGeneric(object):
return formatted_data
def _check_res_data(self, res_data):
- name_start_char = u":A-Z_a-z\xC0-\xD6\xD8-\xF6\xF8-\u02FF"\
- u"\u0370-\u037D\u037F-\u1FFF\u200C-\u200D"\
- u"\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF"\
- u"\uF900-\uFDCF\uFDF0-\uFFFD\U00010000-\U000EFFFF"
- name_char = name_start_char + u"\-\.0-9\xB7\u0300-\u036F\u203F-\u2040"
- name = u"[%s]([%s])*$" % (name_start_char, name_char)
- char_data = u"[^<&]*"
+ name_start_char = ":A-Z_a-z\xC0-\xD6\xD8-\xF6\xF8-\\u02FF"\
+ "\\u0370-\\u037D\\u037F-\\u1FFF\\u200C-\\u200D"\
+ "\\u2070-\\u218F\\u2C00-\\u2FEF\\u3001-\\uD7FF"\
+ "\\uF900-\\uFDCF\\uFDF0-\\uFFFD\\U00010000-\\U000EFFFF"
+ name_char = name_start_char + "\-\.0-9\xB7\\u0300-\\u036F\\u203F-\\u2040"
+ name = "[%s]([%s])*$" % (name_start_char, name_char)
+ char_data = "[^<&]*"
if isinstance(res_data, dict):
for key in res_data:
if not re.match(name, key, re.UNICODE):
diff --git a/lnst/Common/NetUtils.py b/lnst/Common/NetUtils.py
index 9fd4732..42f377b 100644
--- a/lnst/Common/NetUtils.py
+++ b/lnst/Common/NetUtils.py
@@ -136,7 +136,7 @@ class MacPool(AddressPool):
return bs
def _byte_string_to_addr(self, byte_string):
- return ':'.join(map(lambda x: "%02x" % x, byte_string))
+ return ':'.join(["%02x" % x for x in byte_string])
class IpPool(AddressPool):
diff --git a/lnst/Common/Path.py b/lnst/Common/Path.py
index fe7f709..e61b8eb 100644
--- a/lnst/Common/Path.py
+++ b/lnst/Common/Path.py
@@ -11,8 +11,9 @@ jtluka(a)redhat.com (Jan Tluka)
"""
import os
-from urlparse import urljoin
-from urllib2 import urlopen, HTTPError
+from urllib.parse import urljoin
+from urllib.request import urlopen
+from urllib.error import HTTPError
from tempfile import NamedTemporaryFile
def get_path_class(root, path):
diff --git a/lnst/Common/ProcessManager.py b/lnst/Common/ProcessManager.py
index 1995d15..564b49e 100644
--- a/lnst/Common/ProcessManager.py
+++ b/lnst/Common/ProcessManager.py
@@ -9,18 +9,18 @@ published by the Free Software Foundation; see COPYING for details.
__autor__ = """
jzupka(a)redhat.com (Jiri Zupka)
"""
-import os, signal, thread, logging
+import os, signal, _thread, logging
class ProcessManager:
class SubProcess:
def __init__(self, pid, handler):
self.pid = pid
- self.lock = thread.allocate_lock()
+ self.lock = _thread.allocate_lock()
self.lock.acquire()
self.handler = handler
self.enabled = True
self.status = None
- thread.start_new_thread(self.waitpid, (self.pid, self.lock,
+ _thread.start_new_thread(self.waitpid, (self.pid, self.lock,
self.handler))
def isAlive(self):
@@ -45,12 +45,12 @@ class ProcessManager:
logging.error(''.join(traceback.format_exception(type, value, tb)))
os.kill(os.getpid(), signal.SIGTERM)
else:
- print "Process pid %s exit with exitcode %s" % (pid, status)
+ print("Process pid %s exit with exitcode %s" % (pid, status))
ProcessManager.lock.release()
- thread.exit()
+ _thread.exit()
pids = {}
- lock = thread.allocate_lock()
+ lock = _thread.allocate_lock()
std_waitpid = None
@classmethod
@@ -86,7 +86,7 @@ class ProcessManager:
return pid, status
-lock = thread.allocate_lock()
+lock = _thread.allocate_lock()
lock.acquire()
if os.waitpid != ProcessManager.waitpid:
ProcessManager.std_waitpid = os.waitpid
diff --git a/lnst/Common/ResourceCache.py b/lnst/Common/ResourceCache.py
index 98558a2..092f974 100644
--- a/lnst/Common/ResourceCache.py
+++ b/lnst/Common/ResourceCache.py
@@ -65,7 +65,7 @@ class ResourceCache(object):
header = "# hash " \
"last_used type name path\n"
f.write(header)
- for entry_hash, entry in self._entries.iteritems():
+ for entry_hash, entry in list(self._entries.items()):
values = (entry_hash, entry["last_used"], entry["type"],
entry["name"], entry["path"])
line = "%s %d %s %s %s\n" % values
@@ -137,7 +137,7 @@ class ResourceCache(object):
rm = []
now = time.time()
- for entry_hash, entry in self._entries.iteritems():
+ for entry_hash, entry in list(self._entries.items()):
if entry["last_used"] <= (now - self._expiration_period):
rm.append(entry_hash)
diff --git a/lnst/Common/SecureSocket.py b/lnst/Common/SecureSocket.py
index b856238..349c413 100644
--- a/lnst/Common/SecureSocket.py
+++ b/lnst/Common/SecureSocket.py
@@ -16,7 +16,7 @@ olichtne(a)redhat.com (Ondrej Lichtner)
"""
import os
-import cPickle
+import pickle
import hashlib
import hmac
from lnst.Common.Utils import not_imported
@@ -43,11 +43,11 @@ def bit_length(i):
except AttributeError:
return len(bin(i)) - 2
-DH_GROUP["q"] = (DH_GROUP["p"]-1)/2
-DH_GROUP["q_size"] = bit_length(DH_GROUP["q"])/8
+DH_GROUP["q"] = (DH_GROUP["p"]-1)//2
+DH_GROUP["q_size"] = bit_length(DH_GROUP["q"])//8
if bit_length(DH_GROUP["q"])%8:
DH_GROUP["q_size"] += 1
-DH_GROUP["p_size"] = bit_length(DH_GROUP["p"])/8
+DH_GROUP["p_size"] = bit_length(DH_GROUP["p"])//8
if bit_length(DH_GROUP["p"])%8:
DH_GROUP["p_size"] += 1
@@ -63,11 +63,11 @@ SRP_GROUP = {"p": int("0xAC6BDB41324A9A9BF166DE5E1389582FAF72B6651987EE07FC"
"DE236D525F54759B65E372FCD68EF20FA7111F9E4AFF73", 16),
"g": 2}
-SRP_GROUP["q"] = (SRP_GROUP["p"]-1)/2
-SRP_GROUP["q_size"] = bit_length(SRP_GROUP["q"])/8
+SRP_GROUP["q"] = (SRP_GROUP["p"]-1)//2
+SRP_GROUP["q_size"] = bit_length(SRP_GROUP["q"])//8
if bit_length(SRP_GROUP["q"])%8:
SRP_GROUP["q_size"] += 1
-SRP_GROUP["p_size"] = bit_length(SRP_GROUP["p"])/8
+SRP_GROUP["p_size"] = bit_length(SRP_GROUP["p"])//8
if bit_length(SRP_GROUP["p"])%8:
SRP_GROUP["p_size"] += 1
@@ -150,14 +150,14 @@ class SecureSocket(object):
"seq_num": 0}
def send_msg(self, msg):
- pickled_msg = cPickle.dumps(msg)
+ pickled_msg = pickle.dumps(msg)
return self.send(pickled_msg)
def recv_msg(self):
pickled_msg = self.recv()
- if pickled_msg == "":
+ if pickled_msg == b"":
raise SecSocketException("Disconnected")
- msg = cPickle.loads(pickled_msg)
+ msg = pickle.loads(pickled_msg)
return msg
def _add_mac_sign(self, data):
@@ -165,22 +165,26 @@ class SecureSocket(object):
return data
cryptography_imports()
- msg = str(self._current_write_spec["seq_num"]) + str(len(data)) + data
+ msg = (bytes(str(self._current_write_spec["seq_num"]).encode('ascii'))
+ + bytes(str(len(data)).encode('ascii'))
+ + data)
signature = hmac.new(self._current_write_spec["mac_key"],
msg,
hashlib.sha256)
signed_msg = {"data": data,
"signature": signature.digest()}
- return cPickle.dumps(signed_msg)
+ return pickle.dumps(signed_msg)
def _del_mac_sign(self, signed_data):
if not self._current_read_spec["mac_key"]:
return signed_data
cryptography_imports()
- signed_msg = cPickle.loads(signed_data)
+ signed_msg = pickle.loads(signed_data)
data = signed_msg["data"]
- msg = str(self._current_read_spec["seq_num"]) + str(len(data)) + data
+ msg = (bytes(str(self._current_read_spec["seq_num"]).encode('ascii'))
+ + bytes(str(len(data)).encode('ascii'))
+ + data)
signature = hmac.new(self._current_read_spec["mac_key"],
msg,
@@ -195,9 +199,9 @@ class SecureSocket(object):
return data
cryptography_imports()
- block_size = algorithms.AES.block_size/8
+ block_size = algorithms.AES.block_size//8
pad_length = block_size - (len(data) % block_size)
- pad_char = ("%02x" % pad_length).decode("hex")
+ pad_char = bytes([pad_length])
padding = pad_length * pad_char
padded_data = data+padding
@@ -208,9 +212,9 @@ class SecureSocket(object):
return data
cryptography_imports()
- pad_length = int(data[-1].encode("hex"), 16)
+ pad_length = ord(data[-1])
for char in data[-pad_length]:
- if int(char.encode("hex"), 16) != pad_length:
+ if ord(char) != pad_length:
return None
return data[:-pad_length]
@@ -220,7 +224,7 @@ class SecureSocket(object):
return data
cryptography_imports()
- iv = os.urandom(algorithms.AES.block_size/8)
+ iv = os.urandom(algorithms.AES.block_size//8)
mode = modes.CBC(iv)
key = self._current_write_spec["enc_key"]
cipher = Cipher(algorithms.AES(key), mode, default_backend())
@@ -231,14 +235,14 @@ class SecureSocket(object):
encrypted_msg = {"iv": iv,
"enc_data": encrypted_data}
- return cPickle.dumps(encrypted_msg)
+ return pickle.dumps(encrypted_msg)
def _del_encrypt(self, data):
if not self._current_read_spec["enc_key"]:
return data
cryptography_imports()
- encrypted_msg = cPickle.loads(data)
+ encrypted_msg = pickle.loads(data)
encrypted_data = encrypted_msg["enc_data"]
iv = encrypted_msg["iv"]
@@ -276,27 +280,28 @@ class SecureSocket(object):
def send(self, data):
protected_data = self._protect_data(data)
- transmit_data = str(len(protected_data)) + " " + protected_data
+ transmit_data = bytes(str(len(protected_data)).encode('ascii')) + b" " + protected_data
return self._socket.sendall(transmit_data)
def recv(self):
- length = ""
+ length = b""
while True:
c = self._socket.recv(1)
- if c == ' ':
- length = int(length)
+
+ if c == b' ':
+ length = int(length.decode('ascii'))
break
- elif c == "":
- return ""
+ elif c == b"":
+ return b""
else:
length += c
- data = ""
+ data = b""
while len(data) < length:
c = self._socket.recv(length - len(data))
- if c == "":
- return ""
+ if c == b"":
+ return b""
else:
data += c
@@ -307,7 +312,7 @@ class SecureSocket(object):
def _handle_internal(self, orig_msg):
try:
- msg = cPickle.loads(orig_msg)
+ msg = pickle.loads(orig_msg)
except:
return orig_msg
if "type" in msg and msg["type"] == "change_cipher_spec":
@@ -348,7 +353,7 @@ class SecureSocket(object):
def p_SHA256(self, secret, seed, length):
prev_a = seed
- result = ""
+ result = b""
while len(result) < length:
a = hmac.new(secret, msg=prev_a, digestmod=hashlib.sha256)
prev_a = a.digest()
@@ -372,7 +377,7 @@ class SecureSocket(object):
raise SecSocketException("Socket without a role!")
cryptography_imports()
- aes_keysize = max(algorithms.AES.key_sizes)/8
+ aes_keysize = max(algorithms.AES.key_sizes)//8
mac_keysize = hashlib.sha256().block_size
prf_seq = self.PRF(self._master_secret,
diff --git a/lnst/Common/ShellProcess.py b/lnst/Common/ShellProcess.py
index a4928ed..45dedff 100644
--- a/lnst/Common/ShellProcess.py
+++ b/lnst/Common/ShellProcess.py
@@ -229,7 +229,7 @@ class ShellProcess:
except:
return data
if r and (r[0][1] & select.POLLIN):
- new_data = os.read(fd, 1024)
+ new_data = os.read(fd, 1024).decode()
if not new_data:
return data
data += new_data
diff --git a/lnst/Common/Utils.py b/lnst/Common/Utils.py
index d6d6c57..4369278 100644
--- a/lnst/Common/Utils.py
+++ b/lnst/Common/Utils.py
@@ -197,7 +197,7 @@ def get_module_tools(module_path):
return tools
def recursive_dict_update(original, update):
- for key, value in update.iteritems():
+ for key, value in list(update.items()):
if isinstance(value, collections.Mapping):
r = recursive_dict_update(original.get(key, {}), value)
original[key] = r
@@ -240,7 +240,7 @@ def list_to_dot(original_list, prefix="", key=""):
def dict_to_dot(original_dict, prefix=""):
return_list = []
- for key, value in original_dict.iteritems():
+ for key, value in list(original_dict.items()):
if isinstance(value, collections.Mapping):
sub_list = dict_to_dot(value, prefix + key + '.')
return_list.extend(sub_list)
diff --git a/lnst/Controller/Machine.py b/lnst/Controller/Machine.py
index b24eb65..2379505 100644
--- a/lnst/Controller/Machine.py
+++ b/lnst/Controller/Machine.py
@@ -17,7 +17,7 @@ import os
import tempfile
import signal
from time import sleep
-from xmlrpclib import Binary
+from xmlrpc.client import Binary
from functools import wraps
from lnst.Common.Config import lnst_config
from lnst.Common.NetUtils import normalize_hwaddr
@@ -144,7 +144,7 @@ class Machine(object):
del self._device_database[update_msg["if_index"]]
def dev_db_get_name(self, dev_name):
- for if_index, dev in self._device_database.iteritems():
+ for if_index, dev in self._device_database.items():
if dev.get_name() == dev_name:
return dev
return None
@@ -308,7 +308,7 @@ class Machine(object):
self._slave_desc = slave_desc
devices = self._rpc_call("get_devices")
- for if_index, dev in devices.items():
+ for if_index, dev in list(devices.items()):
self._device_database[if_index] = Device(dev, self)
for iface in self._interfaces:
@@ -552,8 +552,8 @@ class Machine(object):
def sync_resources(self, required):
self._rpc_call("clear_resource_table")
- for res_type, resources in required.iteritems():
- for res_name, res in resources.iteritems():
+ for res_type, resources in required.items():
+ for res_name, res in resources.items():
has_resource = self._rpc_call("has_resource", res["hash"])
if not has_resource:
msg = "Transfering %s %s to machine %s" % \
@@ -889,7 +889,7 @@ class Interface(object):
"network_label": self._network,
"type": self._type,
"addresses": self._addresses,
- "slaves": self._slaves.keys(),
+ "slaves": list(self._slaves.keys()),
"options": self._options,
"slave_options": self._slave_options,
"master": None,
@@ -1422,14 +1422,24 @@ class Device(object):
@pre_call_decorate
def get_ip_addrs(self, selector={}):
- return [ip["addr"]
- for ip in self._ip_addrs
- if selector.items() <= ip.items()]
+ result = []
+ for addr in self._ip_addrs:
+ match = True
+ for sel_item, value in selector.items():
+ try:
+ if addr.get(sel_item, None) != value:
+ match = False
+ break
+ except:
+ match = False
+ if match:
+ result.append(addr)
+ return result
@pre_call_decorate
def get_ip_addr(self, num, selector={}):
ips = self.get_ip_addrs(selector)
- return ips[num]
+ return ips[num].get('addr')
@pre_call_decorate
def get_ifi_type(self):
diff --git a/lnst/Controller/NetTestController.py b/lnst/Controller/NetTestController.py
index 860f727..66a5cc2 100644
--- a/lnst/Controller/NetTestController.py
+++ b/lnst/Controller/NetTestController.py
@@ -15,7 +15,7 @@ import logging
import socket
import os
import re
-import cPickle
+import pickle
import imp
import copy
import sys
@@ -194,7 +194,7 @@ class NetTestController:
recipe = self._recipe
machines = self._machines
- for m_id in machines.keys():
+ for m_id in list(machines.keys()):
self._prepare_machine(m_id, resource_sync)
for machine_xml_data in recipe["machines"]:
@@ -243,9 +243,9 @@ class NetTestController:
logging.info("Pool match description:")
if sp.is_setup_virtual():
logging.info(" Setup is using virtual machines.")
- for m_id, m in sorted(match["machines"].iteritems()):
+ for m_id, m in sorted(match["machines"].items()):
logging.info(" host \"%s\" uses \"%s\"" % (m_id, m["target"]))
- for if_id, match in m["interfaces"].iteritems():
+ for if_id, match in m["interfaces"].items():
pool_id = match["target"]
logging.info(" interface \"%s\" matched to \"%s\"" %\
(if_id, pool_id))
@@ -521,7 +521,7 @@ class NetTestController:
if self._machines == None:
return
- for machine_id, machine in self._machines.iteritems():
+ for machine_id, machine in self._machines.items():
if machine.is_configured():
try:
machine.cleanup(deconfigure)
@@ -536,7 +536,7 @@ class NetTestController:
# remove dynamically created bridges
if deconfigure:
- for bridge in self._network_bridges.itervalues():
+ for bridge in self._network_bridges.values():
bridge.cleanup()
self._network_bridges = {}
@@ -544,7 +544,7 @@ class NetTestController:
#saves current virtual configuration to a file, after pickling it
config_data = dict()
machines = config_data["machines"] = {}
- for m in self._machines.itervalues():
+ for m in self._machines.values():
machine = machines[m.get_hostname()] = dict()
if m.get_libvirt_domain() != None:
@@ -560,12 +560,12 @@ class NetTestController:
machine["interfaces"].append(hwaddr)
config_data["bridges"] = bridges = []
- for bridge in self._network_bridges.itervalues():
+ for bridge in self._network_bridges.values():
bridges.append(bridge.get_name())
with open("/tmp/.lnst_machine_conf", "wb") as f:
os.fchmod(f.fileno(), 0o600)
- pickled_data = cPickle.dump(config_data, f)
+ pickled_data = pickle.dump(config_data, f)
@classmethod
def remove_saved_machine_config(cls):
@@ -573,7 +573,7 @@ class NetTestController:
cfg = None
try:
with open("/tmp/.lnst_machine_conf", "rb") as f:
- cfg = cPickle.load(f)
+ cfg = pickle.load(f)
except:
logging.info("No previous configuration found.")
return
@@ -581,7 +581,7 @@ class NetTestController:
if cfg:
logging.info("Cleaning up leftover configuration from previous "\
"config_only run.")
- for hostname, machine in cfg["machines"].iteritems():
+ for hostname, machine in cfg["machines"].items():
port = lnst_config.get_option("environment", "rpcport")
if test_tcp_connection(hostname, port):
s = socket.create_connection((hostname, port))
@@ -689,7 +689,7 @@ class NetTestController:
overall_res["err_msg"] = "Command exception raised."
break
- for machine in self._machines.itervalues():
+ for machine in self._machines.values():
machine.restore_system_config()
# task failed, check if we should quit_on_fail
@@ -790,13 +790,13 @@ class NetTestController:
def _start_packet_capture(self):
logging.info("Starting packet capture")
- for machine_id, machine in self._machines.iteritems():
+ for machine_id, machine in self._machines.items():
capture_files = machine.start_packet_capture()
self._remote_capture_files[machine_id] = capture_files
def _stop_packet_capture(self):
logging.info("Stopping packet capture")
- for machine_id, machine in self._machines.iteritems():
+ for machine_id, machine in self._machines.items():
machine.stop_packet_capture()
# TODO: Move this function to logging
@@ -804,7 +804,7 @@ class NetTestController:
logging_root = self._log_ctl.get_recipe_log_path()
logging_root = os.path.abspath(logging_root)
logging.info("Retrieving capture files from slaves")
- for machine_id, machine in self._machines.iteritems():
+ for machine_id, machine in self._machines.items():
slave_logging_dir = os.path.join(logging_root, machine_id + "/")
try:
os.mkdir(slave_logging_dir)
@@ -815,7 +815,7 @@ class NetTestController:
raise NetTestError(msg)
capture_files = self._remote_capture_files[machine_id]
- for if_id, remote_path in capture_files.iteritems():
+ for if_id, remote_path in capture_files.items():
filename = "%s.pcap" % if_id
local_path = os.path.join(slave_logging_dir, filename)
machine.copy_file_from_machine(remote_path, local_path)
@@ -896,11 +896,11 @@ class MessageDispatcher(ConnectionHandler):
def wait_for_result(self, machine_id):
wait = True
while wait:
- connected_slaves = self._connection_mapping.keys()
+ connected_slaves = list(self._connection_mapping.keys())
messages = self.check_connections()
- remaining_slaves = self._connection_mapping.keys()
+ remaining_slaves = list(self._connection_mapping.keys())
for msg in messages:
if msg[1]["type"] == "result" and msg[0] == machine_id:
diff --git a/lnst/Controller/NetTestResultSerializer.py b/lnst/Controller/NetTestResultSerializer.py
index e4fe086..7439c64 100644
--- a/lnst/Controller/NetTestResultSerializer.py
+++ b/lnst/Controller/NetTestResultSerializer.py
@@ -94,10 +94,10 @@ class NetTestResultSerializer:
"Setup is using virtual machines.",
""))
- for m_id, m in sorted(match["machines"].iteritems()):
+ for m_id, m in sorted(match["machines"].items()):
output_pairs.append((4*" " + "host \"%s\" uses \"%s\"" %\
(m_id, m["target"]), ""))
- for if_id, pool_if in m["interfaces"].iteritems():
+ for if_id, pool_if in m["interfaces"].items():
pool_id = pool_if["target"]
if "driver" in pool_if:
driver = pool_if["driver"]
@@ -227,12 +227,12 @@ class NetTestResultSerializer:
else:
match_el.setAttribute("virtual", "false")
- for m_id, m in match["machines"].iteritems():
+ for m_id, m in match["machines"].items():
m_el = doc.createElement("m_match")
m_el.setAttribute("host_id", str(m_id))
m_el.setAttribute("pool_id", str(m["target"]))
- for if_id, pool_id in m["interfaces"].iteritems():
+ for if_id, pool_id in m["interfaces"].items():
if_el = doc.createElement("if_match")
if_el.setAttribute("if_id", str(if_id))
if_el.setAttribute("pool_if_id", str(pool_id))
diff --git a/lnst/Controller/RecipeParser.py b/lnst/Controller/RecipeParser.py
index 742c144..d7f70ce 100644
--- a/lnst/Controller/RecipeParser.py
+++ b/lnst/Controller/RecipeParser.py
@@ -386,16 +386,16 @@ class RecipeParser(XmlParser):
distribution = False
valid_distributions = ["normal", "uniform", "pareto", "paretonormal"]
for opt in options:
- if "time" in opt.values():
+ if "time" in list(opt.values()):
valid = True
- elif "distribution" in opt.values():
+ elif "distribution" in list(opt.values()):
if opt["value"] not in valid_distributions:
raise RecipeError("netem: invalid distribution type", netem_tag)
else:
distribution = True
- elif "jitter" in opt.values():
+ elif "jitter" in list(opt.values()):
jitter = True
- elif "correlation" in opt.values():
+ elif "correlation" in list(opt.values()):
correlation = True
if not jitter:
if correlation or distribution:
@@ -404,22 +404,22 @@ class RecipeParser(XmlParser):
raise RecipeError("netem: time option is mandatory for <delay>", netem_tag)
elif netem_op == "loss":
for opt in options:
- if "percent" in opt.values():
+ if "percent" in list(opt.values()):
return
raise RecipeError("netem: percent option is mandatory for <loss>", netem_tag)
elif netem_op == "duplication":
for opt in options:
- if "percent" in opt.values():
+ if "percent" in list(opt.values()):
return
raise RecipeError("netem: percent option is mandatory for <duplication>", netem_tag)
elif netem_op == "corrupt":
for opt in options:
- if "percent" in opt.values():
+ if "percent" in list(opt.values()):
return
raise RecipeError("netem: percent option is mandatory for <corrupt>", netem_tag)
elif netem_op == "reordering":
for opt in options:
- if "percent" in opt.values():
+ if "percent" in list(opt.values()):
return
raise RecipeError("netem: percent option is mandatory for <reordering>", netem_tag)
diff --git a/lnst/Controller/SlavePool.py b/lnst/Controller/SlavePool.py
index 5069f50..8166816 100644
--- a/lnst/Controller/SlavePool.py
+++ b/lnst/Controller/SlavePool.py
@@ -49,7 +49,7 @@ class SlavePool:
self._mreqs = None
logging.info("Checking machine pool availability.")
- for pool_name, pool_dir in pools.items():
+ for pool_name, pool_dir in list(pools.items()):
self._pools[pool_name] = {}
self.add_dir(pool_name, pool_dir)
if len(self._pools[pool_name]) == 0:
@@ -85,13 +85,13 @@ class SlavePool:
dir_path))
max_len = 0
- for m_id in pool.keys():
+ for m_id in list(pool.keys()):
if len(m_id) > max_len:
max_len = len(m_id)
if self._pool_checks:
check_sockets = {}
- for m_id, m in sorted(pool.iteritems()):
+ for m_id, m in sorted(pool.items()):
hostname = m["params"]["hostname"]
if "rpc_port" in m["params"]:
port = m["params"]["rpc_port"]
@@ -123,7 +123,7 @@ class SlavePool:
check_sockets[s] = m_id
while len(check_sockets) > 0:
- rl, wl, el = select.select([], check_sockets.keys(), [])
+ rl, wl, el = select.select([], list(check_sockets.keys()), [])
for s in wl:
err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
m_id = check_sockets[s]
@@ -137,7 +137,7 @@ class SlavePool:
s.close()
del check_sockets[s]
else:
- for m_id in pool.keys():
+ for m_id in list(pool.keys()):
pool[m_id]["available"] = True
for m_id in sorted(list(pool.keys())):
@@ -178,7 +178,7 @@ class SlavePool:
# Check if there isn't any machine with the same
# hostname or libvirt_domain already in the pool
- for pm_id, m in pool.iteritems():
+ for pm_id, m in pool.items():
pm = m["params"]
rm = machine_spec["params"]
if pm["hostname"] == rm["hostname"]:
@@ -228,7 +228,7 @@ class SlavePool:
raise SlaveMachineError(msg, iface)
if_hwaddr = iface_spec["params"]["hwaddr"]
- hwaddr_dups = [ k for k, v in machine_spec["interfaces"].iteritems()\
+ hwaddr_dups = [ k for k, v in machine_spec["interfaces"].items()\
if v["params"]["hwaddr"] == if_hwaddr ]
if len(hwaddr_dups) > 0:
msg = "Duplicate MAC address %s for interface '%s' and '%s'."\
@@ -348,7 +348,7 @@ class SlavePool:
used = []
if_map = self._map["machines"][tm_id]["interfaces"]
- for t_if, p_if in if_map.iteritems():
+ for t_if, p_if in if_map.items():
pool_id = p_if["target"]
used.append(pool_id)
if_data = pm["interfaces"][pool_id]
@@ -356,12 +356,12 @@ class SlavePool:
iface = machine.new_static_interface(t_if, "eth")
iface.set_hwaddr(if_data["params"]["hwaddr"])
- for t_net, p_net in self._map["networks"].iteritems():
+ for t_net, p_net in self._map["networks"].items():
if pm["interfaces"][pool_id]["network"] == p_net:
iface.set_network(t_net)
break
- for if_id, if_data in pm["interfaces"].iteritems():
+ for if_id, if_data in pm["interfaces"].items():
if if_id not in used:
iface = machine.new_unused_interface("eth")
iface.set_hwaddr(if_data["params"]["hwaddr"])
@@ -384,13 +384,13 @@ class SlavePool:
pm["security"])
# make all the existing unused
- for if_id, if_data in pm["interfaces"].iteritems():
+ for if_id, if_data in pm["interfaces"].items():
iface = machine.new_unused_interface("eth")
iface.set_hwaddr(if_data["params"]["hwaddr"])
iface.set_network(None)
# add all the other devices
- for if_id, if_data in tm["interfaces"].iteritems():
+ for if_id, if_data in tm["interfaces"].items():
iface = machine.new_virtual_interface(if_id, "eth")
iface.set_network(if_data["network"])
if "hwaddr" in if_data["params"]:
@@ -425,10 +425,10 @@ class SetupMapper(object):
def set_virtual(self, virt_value):
self._virtual_matching = virt_value
- for m_id, m in self._mreqs.iteritems():
- for if_id, interface in m["interfaces"].iteritems():
+ for m_id, m in self._mreqs.items():
+ for if_id, interface in m["interfaces"].items():
if "params" in interface:
- for name, val in interface["params"].iteritems():
+ for name, val in interface["params"].items():
if name not in ["hwaddr", "driver"]:
msg = "Dynamically created interfaces "\
"only support the 'hwaddr' and 'driver' "\
@@ -443,7 +443,7 @@ class SetupMapper(object):
def reset_match_state(self):
self._net_label_mapping = {}
self._machine_stack = []
- self._unmatched_req_machines = sorted(self._mreqs.keys(), reverse=True)
+ self._unmatched_req_machines = sorted(list(self._mreqs.keys()), reverse=True)
self._pool_stack = list(self._pools.keys())
if len(self._pool_stack) > 0:
@@ -451,7 +451,7 @@ class SetupMapper(object):
self._pool = self._pools[self._pool_name]
self._unmatched_pool_machines = []
- for p_id, p_machine in sorted(self._pool.iteritems(), reverse=True):
+ for p_id, p_machine in sorted(iter(self._pool.items()), reverse=True):
if self._virtual_matching:
if "libvirt_domain" in p_machine["params"]:
self._unmatched_pool_machines.append(p_id)
@@ -492,7 +492,7 @@ class SetupMapper(object):
#map compatible pool machine
stack_top["current_match"] = pool_m_id
stack_top["unmatched_pool_ifs"] = \
- sorted(self._pool[pool_m_id]["interfaces"].keys(),
+ sorted(list(self._pool[pool_m_id]["interfaces"].keys()),
reverse=True)
self._unmatched_pool_machines.remove(pool_m_id)
break
@@ -516,7 +516,7 @@ class SetupMapper(object):
self._pool_name)
self._unmatched_pool_machines = []
- for p_id, p_machine in sorted(self._pool.iteritems(), reverse=True):
+ for p_id, p_machine in sorted(iter(self._pool.items()), reverse=True):
if self._virtual_matching:
if "libvirt_domain" in p_machine["params"]:
self._unmatched_pool_machines.append(p_id)
@@ -591,7 +591,7 @@ class SetupMapper(object):
machine_match["if_stack"] = []
machine = self._mreqs[machine_match["m_id"]]
- machine_match["unmatched_ifs"] = sorted(machine["interfaces"].keys(),
+ machine_match["unmatched_ifs"] = sorted(list(machine["interfaces"].keys()),
reverse=True)
machine_match["unmatched_pool_ifs"] = []
@@ -621,7 +621,7 @@ class SetupMapper(object):
def _check_machine_compatibility(self, req_id, pool_id):
req_machine = self._mreqs[req_id]
pool_machine = self._pool[pool_id]
- for param, value in req_machine["params"].iteritems():
+ for param, value in req_machine["params"].items():
# skip empty parameters
if len(value) == 0:
continue
@@ -632,14 +632,14 @@ class SetupMapper(object):
def _check_interface_compatibility(self, req_if, pool_if):
label_mapping = self._net_label_mapping
- for req_label, mapping in label_mapping.iteritems():
+ for req_label, mapping in label_mapping.items():
if req_label == req_if["network"] and\
mapping[0] != pool_if["network"]:
return False
if mapping[0] == pool_if["network"] and\
req_label != req_if["network"]:
return False
- for param, value in req_if["params"].iteritems():
+ for param, value in req_if["params"].items():
# skip empty parameters
if len(value) == 0:
continue
@@ -652,7 +652,7 @@ class SetupMapper(object):
mapping = {"machines": {}, "networks": {}, "virtual": False,
"pool_name": self._pool_name}
- for req_label, label_map in self._net_label_mapping.iteritems():
+ for req_label, label_map in self._net_label_mapping.items():
mapping["networks"][req_label] = label_map[0]
for machine in self._machine_stack:
diff --git a/lnst/Controller/Task.py b/lnst/Controller/Task.py
index bc7670f..e2b98d0 100644
--- a/lnst/Controller/Task.py
+++ b/lnst/Controller/Task.py
@@ -45,7 +45,7 @@ class ControllerAPI(object):
self._perf_repo_api = PerfRepoAPI()
self._hosts = {}
- for host_id, host in hosts.iteritems():
+ for host_id, host in hosts.items():
self._hosts[host_id] = HostAPI(self, host_id, host)
def _run_command(self, command):
@@ -161,20 +161,20 @@ class ControllerAPI(object):
def get_configuration(self):
machines = self._ctl._machines
configuration = {}
- for m_id, m in machines.items():
+ for m_id, m in list(machines.items()):
configuration["machine_"+m_id] = m.get_configuration()
return configuration
def get_mapping(self):
match = self._ctl.get_pool_match()
mapping = []
- for m_id, m in match["machines"].iteritems():
+ for m_id, m in match["machines"].items():
machine = {}
machine["id"] = m_id
machine["pool_id"] = m["target"]
machine["hostname"] = m["hostname"]
machine["interface"] = []
- for i_id, i in m["interfaces"].iteritems():
+ for i_id, i in m["interfaces"].items():
interface = {}
interface["id"] = i_id
interface["pool_id"] = i["target"]
@@ -253,7 +253,7 @@ class HostAPI(object):
bg_id = None
cmd["netns"] = None
- for arg, argval in kwargs.iteritems():
+ for arg, argval in kwargs.items():
if arg == "bg" and argval == True:
self._bg_id_seq += 1
cmd["bg_id"] = bg_id = self._bg_id_seq
@@ -864,7 +864,7 @@ class ModuleAPI(object):
self._name = module_name
self._opts = {}
- for opt, val in options.iteritems():
+ for opt, val in options.items():
self._opts[opt] = []
if type(val) == list:
for v in val:
@@ -877,7 +877,7 @@ class ModuleAPI(object):
def set_options(self, options):
self._opts = {}
- for opt, val in options.iteritems():
+ for opt, val in options.items():
self._opts[opt] = []
if type(val) == list:
for v in val:
@@ -886,7 +886,7 @@ class ModuleAPI(object):
self._opts[opt].append({"value": str(val)})
def update_options(self, options):
- for opt, val in options.iteritems():
+ for opt, val in options.items():
self._opts[opt] = []
if type(val) == list:
for v in val:
diff --git a/lnst/Controller/Wizard.py b/lnst/Controller/Wizard.py
index dc82c21..85fb88b 100644
--- a/lnst/Controller/Wizard.py
+++ b/lnst/Controller/Wizard.py
@@ -106,7 +106,7 @@ class Wizard:
rv = self._check_path(pool_dir)
if rv == PATH_IS_DIR_ACCESSIBLE:
- print("Pool directory set to '%s'" % pool_dir)
+ print(("Pool directory set to '%s'" % pool_dir))
elif rv == PATH_DOES_NOT_EXIST:
sys.stderr.write("Path '%s' does not exist\n" % pool_dir)
pool_dir = self._create_dir(pool_dir)
@@ -124,7 +124,7 @@ class Wizard:
return
for host in hostlist:
- print("Processing host '%s'" % host)
+ print(("Processing host '%s'" % host))
hostname, port = self._parse_host(host)
if hostname == -1:
continue
@@ -222,7 +222,7 @@ class Wizard:
"""
while True:
if pool_dir is None:
- pool_dir = raw_input("Enter path to a pool directory "
+ pool_dir = input("Enter path to a pool directory "
"(default: '%s'): " % DefaultPoolDir)
if pool_dir == "":
pool_dir = DefaultPoolDir
@@ -230,7 +230,7 @@ class Wizard:
pool_dir = os.path.expanduser(pool_dir)
rv = self._check_path(pool_dir)
if rv == PATH_IS_DIR_ACCESSIBLE:
- print("Pool directory set to '%s'" % pool_dir)
+ print(("Pool directory set to '%s'" % pool_dir))
return pool_dir
elif rv == PATH_DOES_NOT_EXIST:
sys.stderr.write("Path '%s' does not exist\n"
@@ -270,7 +270,7 @@ class Wizard:
"""
try:
mkdir_p(pool_dir)
- print("Dir '%s' has been created" % pool_dir)
+ print(("Dir '%s' has been created" % pool_dir))
return pool_dir
except:
sys.stderr.write("Failed creating dir\n")
@@ -316,7 +316,7 @@ class Wizard:
if mode == "interactive":
msg = "Do you want to add interface '%s' (%s) to the "\
"recipe? [Y/n]: " % (iface["name"], iface["hwaddr"])
- answer = raw_input(msg)
+ answer = input(msg)
if mode == "noninteractive" or answer.lower() == "y"\
or answer == "":
interfaces_added += 1
@@ -359,7 +359,7 @@ class Wizard:
pubkey_el.appendChild(pubkey_text)
if self._write_to_file(pool_dir, filename, doc):
- print("File '%s/%s' successfuly created." % (pool_dir, filename))
+ print(("File '%s/%s' successfuly created." % (pool_dir, filename)))
else:
sys.stderr.write("File '%s/%s' could not be opened "
"or data written.\n" % (pool_dir, filename))
@@ -431,7 +431,7 @@ class Wizard:
""" Queries user for adding next machine
@return True if user wants to add another machine, False otherwise
"""
- answer = raw_input("Do you want to add another machine? [Y/n]: ")
+ answer = input("Do you want to add another machine? [Y/n]: ")
if answer.lower() == "y" or answer == "":
return True
else:
@@ -441,7 +441,7 @@ class Wizard:
""" Queries user for creating specified directory
@return True if user wants to create the directory, False otherwise
"""
- answer = raw_input("Create dir '%s'? [Y/n]: " % pool_dir)
+ answer = input("Create dir '%s'? [Y/n]: " % pool_dir)
if answer.lower() == 'y' or answer == "":
return True
else:
@@ -452,7 +452,7 @@ class Wizard:
@hostname Hostname of the machine which is used as default filename
@return Name of the file with .xml extension
"""
- output_file = raw_input("Enter the name of the output .xml file "
+ output_file = input("Enter the name of the output .xml file "
"(without .xml, default is '%s.xml'): "
% hostname)
if output_file == "":
@@ -465,7 +465,7 @@ class Wizard:
@return Valid (is translatable to an IP address) hostname
"""
while True:
- hostname = raw_input("Enter hostname: ")
+ hostname = input("Enter hostname: ")
if hostname == "":
sys.stderr.write("No hostname entered\n")
continue
@@ -485,7 +485,7 @@ class Wizard:
string representing hostname of the host
"""
while True:
- libvirt_domain = raw_input("Enter libvirt domain "
+ libvirt_domain = input("Enter libvirt domain "
"of virtual host: ")
if libvirt_domain == "":
sys.stderr.write("No domain entered\n")
@@ -524,7 +524,7 @@ class Wizard:
@return Integer representing port
"""
while True:
- port = raw_input("Enter port (default: %d): " % DefaultRPCPort)
+ port = input("Enter port (default: %d): " % DefaultRPCPort)
if port == "":
return DefaultRPCPort
else:
@@ -539,7 +539,7 @@ class Wizard:
@return Dictionary with the security parameters
"""
while True:
- auth_type = raw_input("Enter authentication type (default: none): ")
+ auth_type = input("Enter authentication type (default: none): ")
if auth_type == "":
auth_type = "none"
elif auth_type not in ["none", "no-auth", "password",
@@ -555,7 +555,7 @@ class Wizard:
return {"auth_type": "ssh"}
elif auth_type == "password":
while True:
- password = raw_input("Enter password: ")
+ password = input("Enter password: ")
if password == "":
sys.stderr.write("Invalid password.")
continue
@@ -564,19 +564,19 @@ class Wizard:
"auth_passwd": password}
elif auth_type == "pubkey":
while True:
- identity = raw_input("Enter identity: ")
+ identity = input("Enter identity: ")
if identity == "":
sys.stderr.write("Invalid identity.")
continue
break
while True:
- privkey = raw_input("Enter path to Ctl private key: ")
+ privkey = input("Enter path to Ctl private key: ")
if privkey == "" or not os.path.isfile(privkey):
sys.stderr.write("Invalid path to private key.")
continue
break
while True:
- srv_pubkey_path = raw_input("Enter path to Slave public key: ")
+ srv_pubkey_path = input("Enter path to Slave public key: ")
if srv_pubkey_path == "" or not os.path.isfile(srv_pubkey_path):
sys.stderr.write("Invalid path to public key.")
continue
diff --git a/lnst/Controller/XmlParser.py b/lnst/Controller/XmlParser.py
index 355b5e8..cbe3569 100644
--- a/lnst/Controller/XmlParser.py
+++ b/lnst/Controller/XmlParser.py
@@ -15,7 +15,7 @@ import re
import sys
import copy
from lxml import etree
-from urllib2 import urlopen
+from urllib.request import urlopen
from lnst.Common.Config import lnst_config
from lnst.Controller.XmlTemplates import XmlTemplates
from lnst.Controller.XmlProcessing import XmlProcessingError
@@ -124,7 +124,7 @@ class XmlParser(object):
return self._template_proc.expand_functions(text)
def _get_content(self, element):
- text = etree.tostring(element, method="text").strip()
+ text = etree.tostring(element, method="text", encoding="unicode").strip()
return self._template_proc.expand_functions(text)
def _expand_xinclude(self, elem, base_url=""):
diff --git a/lnst/Controller/XmlProcessing.py b/lnst/Controller/XmlProcessing.py
index b80c3a3..8e2bb00 100644
--- a/lnst/Controller/XmlProcessing.py
+++ b/lnst/Controller/XmlProcessing.py
@@ -75,8 +75,8 @@ class XmlDataIterator:
def __iter__(self):
return self
- def next(self):
- n = self._iterator.next()
+ def __next__(self):
+ n = next(self._iterator)
# For normal iterators
if type(n) == XmlTemplateString:
@@ -165,20 +165,20 @@ class XmlData(dict):
return XmlDataIterator(it)
def iteritems(self):
- it = super(XmlData, self).iteritems()
+ it = iter(super(XmlData, self).items())
return XmlDataIterator(it)
def iterkeys(self):
- it = super(XmlData, self).iterkeys()
+ it = iter(super(XmlData, self).keys())
return XmlDataIterator(it)
def itervalues(self):
- it = super(XmlData, self).itervalues()
+ it = iter(super(XmlData, self).values())
return XmlDataIterator(it)
def to_dict(self):
new_dict = dict()
- for key, value in self.iteritems():
+ for key, value in self.items():
if isinstance(value, XmlData):
new_val = value.to_dict()
elif isinstance(value, XmlCollection):
diff --git a/lnst/Controller/XmlTemplates.py b/lnst/Controller/XmlTemplates.py
index b1bf9a1..4068e85 100644
--- a/lnst/Controller/XmlTemplates.py
+++ b/lnst/Controller/XmlTemplates.py
@@ -239,7 +239,7 @@ class XmlTemplates:
"""
defs = {}
for level in self._definitions:
- for name, val in level.iteritems():
+ for name, val in level.items():
defs[name] = val
return defs
@@ -255,7 +255,7 @@ class XmlTemplates:
def set_aliases(self, defined, overriden):
""" Set aliases defined or overriden from CLI """
- for name, value in defined.iteritems():
+ for name, value in defined.items():
self.define_alias(name, value)
self._overriden_aliases = overriden
@@ -325,7 +325,7 @@ class XmlTemplates:
if element.tail != None:
element.tail = self.expand_aliases(element.tail)
- for name, value in element.attrib.iteritems():
+ for name, value in element.attrib.items():
element.set(name, self.expand_aliases(value))
if element.tag == "define":
diff --git a/lnst/Slave/InterfaceManager.py b/lnst/Slave/InterfaceManager.py
index a96ddd1..47c1e2a 100644
--- a/lnst/Slave/InterfaceManager.py
+++ b/lnst/Slave/InterfaceManager.py
@@ -79,7 +79,7 @@ class InterfaceManager(object):
self._id_mapping = {}
def get_id_by_if_index(self, if_index):
- for if_id, index in self._id_mapping.iteritems():
+ for if_id, index in self._id_mapping.items():
if if_index == index:
return if_id
return None
@@ -97,12 +97,12 @@ class InterfaceManager(object):
return self._nl_socket
def rescan_devices(self):
- devices_to_remove = self._devices.keys()
+ devices_to_remove = list(self._devices.keys())
devs = scan_netdevs()
for dev in devs:
if dev['index'] not in self._devices:
device = None
- for if_id, d in self._tmp_mapping.items():
+ for if_id, d in list(self._tmp_mapping.items()):
d_cfg = d.get_conf_dict()
if d_cfg["name"] == dev["name"]:
device = d
@@ -134,7 +134,7 @@ class InterfaceManager(object):
del self._devices[i]
self._dl_manager.rescan_ports()
- for device in self._devices.values():
+ for device in list(self._devices.values()):
dl_port = self._dl_manager.get_port(device.get_name())
device.set_devlink(dl_port)
@@ -143,7 +143,7 @@ class InterfaceManager(object):
self._handle_netlink_msg(msg)
self._dl_manager.rescan_ports()
- for device in self._devices.values():
+ for device in list(self._devices.values()):
dl_port = self._dl_manager.get_port(device.get_name())
device.set_devlink(dl_port)
@@ -152,14 +152,14 @@ class InterfaceManager(object):
if msg['index'] in self._devices:
update_msg = self._devices[msg['index']].update_netlink(msg)
if update_msg != None:
- for if_id, if_index in self._id_mapping.iteritems():
+ for if_id, if_index in self._id_mapping.items():
if if_index == msg['index']:
update_msg["if_id"] = if_id
break
self._server_handler.send_data_to_ctl(update_msg)
elif msg['header']['type'] == RTM_NEWLINK:
dev = None
- for if_id, d in self._tmp_mapping.items():
+ for if_id, d in list(self._tmp_mapping.items()):
d_cfg = d.get_conf_dict()
if d_cfg["name"] == msg.get_attr("IFLA_IFNAME"):
dev = d
@@ -172,7 +172,7 @@ class InterfaceManager(object):
self._devices[msg['index']] = dev
if update_msg != None:
- for if_id, if_index in self._id_mapping.iteritems():
+ for if_id, if_index in self._id_mapping.items():
if if_index == msg['index']:
update_msg["if_id"] = if_id
break
@@ -202,7 +202,7 @@ class InterfaceManager(object):
def get_mapped_devices(self):
ret = {}
- for if_id, if_index in self._id_mapping.iteritems():
+ for if_id, if_index in self._id_mapping.items():
ret[if_id] = self._devices[if_index]
for if_id in self._tmp_mapping:
ret[if_id] = self._tmp_mapping[if_id]
@@ -215,20 +215,20 @@ class InterfaceManager(object):
return None
def get_devices(self):
- return self._devices.values()
+ return list(self._devices.values())
def get_device_by_hwaddr(self, hwaddr):
- for dev in self._devices.values():
+ for dev in list(self._devices.values()):
if dev.get_hwaddr() == hwaddr:
return dev
return None
def get_device_by_params(self, params):
matched = None
- for dev in self._devices.values():
+ for dev in list(self._devices.values()):
matched = dev
dev_data = dev.get_if_data()
- for key, value in params.iteritems():
+ for key, value in params.items():
if key not in dev_data or dev_data[key] != value:
matched = None
break
@@ -239,7 +239,7 @@ class InterfaceManager(object):
return matched
def deconfigure_all(self):
- for dev in self._devices.itervalues():
+ for dev in self._devices.values():
dev.clear_configuration()
def create_device_from_config(self, if_id, config):
@@ -288,10 +288,10 @@ class InterfaceManager(object):
def _is_name_used(self, name):
self.rescan_devices()
- for device in self._devices.itervalues():
+ for device in self._devices.values():
if name == device.get_name():
return True
- for device in self._tmp_mapping.itervalues():
+ for device in self._tmp_mapping.values():
if name == device.get_name():
return True
return False
@@ -487,7 +487,7 @@ class Device(object):
def find_addrs(self, addr_spec):
ret = []
for addr in self._ip_addrs:
- if addr_spec.items() <= addr.items():
+ if list(addr_spec.items()) <= list(addr.items()):
ret.append(addr)
return ret
@@ -703,7 +703,7 @@ class Device(object):
if (line.split()[0] == 'vf'):
break
if (line.split()[0] == "RX:"):
- rx_stats = map(int, lines.next().split())
+ rx_stats = list(map(int, next(lines).split()))
stats.update({"rx_bytes" : rx_stats[0],
"rx_packets": rx_stats[1],
"rx_errors" : rx_stats[2],
@@ -711,7 +711,7 @@ class Device(object):
"rx_overrun": rx_stats[4],
"rx_mcast" : rx_stats[5]})
if (line.split()[0] == "TX:"):
- tx_stats = map(int, lines.next().split())
+ tx_stats = list(map(int, next(lines).split()))
stats.update({"tx_bytes" : tx_stats[0],
"tx_packets": tx_stats[1],
"tx_errors" : tx_stats[2],
@@ -742,7 +742,7 @@ class Device(object):
stats_data[i] = stats_data[i].replace("K", "000")
stats_data[i] = stats_data[i].replace("M", "000000")
- stats_data = map(int, stats_data)
+ stats_data = list(map(int, stats_data))
stats["rx_packets"] = stats_data[0]
stats["tx_packets"] = stats_data[2]
stats["rx_bytes"] = stats_data[4]
diff --git a/lnst/Slave/NetConfigDevice.py b/lnst/Slave/NetConfigDevice.py
index 03efd40..dec992f 100644
--- a/lnst/Slave/NetConfigDevice.py
+++ b/lnst/Slave/NetConfigDevice.py
@@ -585,7 +585,7 @@ class NetConfigDeviceOvsBridge(NetConfigDeviceGeneric):
br_name = self._dev_config["name"]
bond_ports = []
- for bond in self._dev_config["ovs_conf"]["bonds"].itervalues():
+ for bond in self._dev_config["ovs_conf"]["bonds"].values():
for slave_id in bond["slaves"]:
bond_ports.append(slave_id)
@@ -600,7 +600,7 @@ class NetConfigDeviceOvsBridge(NetConfigDeviceGeneric):
options += " %s=%s" % (opt[0], opt[1])
vlan_tags = []
- for tag, vlan in vlans.iteritems():
+ for tag, vlan in vlans.items():
if slave_id in vlan["slaves"]:
vlan_tags.append(tag)
if len(vlan_tags) == 0:
@@ -620,7 +620,7 @@ class NetConfigDeviceOvsBridge(NetConfigDeviceGeneric):
br_name = self._dev_config["name"]
bond_ports = []
- for bond in self._dev_config["ovs_conf"]["bonds"].itervalues():
+ for bond in self._dev_config["ovs_conf"]["bonds"].values():
for slave_id in bond["slaves"]:
bond_ports.append(slave_id)
@@ -687,7 +687,7 @@ class NetConfigDeviceOvsBridge(NetConfigDeviceGeneric):
br_name = self._dev_config["name"]
bonds = self._dev_config["ovs_conf"]["bonds"]
- for bond_id, bond in bonds.iteritems():
+ for bond_id, bond in bonds.items():
ifaces = ""
for slave_id in bond["slaves"]:
slave_dev = self._if_manager.get_mapped_device(slave_id)
@@ -703,7 +703,7 @@ class NetConfigDeviceOvsBridge(NetConfigDeviceGeneric):
br_name = self._dev_config["name"]
bonds = self._dev_config["ovs_conf"]["bonds"]
- for bond_id, bond in bonds.iteritems():
+ for bond_id, bond in bonds.items():
exec_cmd("ovs-vsctl del-port %s %s" % (br_name, bond_id))
def _add_flow_entries(self):
diff --git a/lnst/Slave/NetTestSlave.py b/lnst/Slave/NetTestSlave.py
index 662faaf..7c61d23 100644
--- a/lnst/Slave/NetTestSlave.py
+++ b/lnst/Slave/NetTestSlave.py
@@ -22,7 +22,7 @@ import multiprocessing
import re
import struct
from time import sleep, time
-from xmlrpclib import Binary
+from xmlrpc.client import Binary
from tempfile import NamedTemporaryFile
from lnst.Common.Logs import log_exc_traceback
from lnst.Common.PacketCapture import PacketCapture
@@ -181,7 +181,7 @@ class SlaveMethods:
dev_data = dev.get_if_data()
entry = {"name": dev.get_name(),
"hwaddr": dev.get_hwaddr()}
- for key, value in params.iteritems():
+ for key, value in params.items():
if key not in dev_data or dev_data[key] != value:
entry = None
break
@@ -405,7 +405,7 @@ class SlaveMethods:
raise Exception("Can't start packet capture, tcpdump not available")
files = {}
- for if_id, dev in self._if_manager.get_mapped_devices().iteritems():
+ for if_id, dev in self._if_manager.get_mapped_devices().items():
if dev.get_netns() != None:
continue
dev_name = dev.get_name()
@@ -431,7 +431,7 @@ class SlaveMethods:
if self._packet_captures == None:
return True
- for if_index, pcap in self._packet_captures.iteritems():
+ for if_index, pcap in self._packet_captures.items():
pcap.stop()
self._packet_captures.clear()
@@ -439,7 +439,7 @@ class SlaveMethods:
return True
def _remove_capture_files(self):
- for key, name in self._capture_files.iteritems():
+ for key, name in self._capture_files.items():
logging.debug("Removing temporary packet capture file %s", name)
os.unlink(name)
@@ -462,7 +462,7 @@ class SlaveMethods:
def restore_system_config(self):
logging.info("Restoring system configuration")
- for option, values in self._system_config.iteritems():
+ for option, values in self._system_config.items():
try:
cmd_str = "echo \"%s\" >%s" % (values["initial_val"], option)
(stdout, stderr) = exec_cmd(cmd_str)
@@ -538,7 +538,7 @@ class SlaveMethods:
logging.info("Performing machine cleanup.")
self._command_context.cleanup()
- for mroute_soc in self.mroute_sockets.values():
+ for mroute_soc in list(self.mroute_sockets.values()):
mroute_soc.close()
del mroute_soc
self.mroute_sockets = {}
@@ -546,7 +546,7 @@ class SlaveMethods:
self.restore_system_config()
devs = self._if_manager.get_mapped_devices()
- for if_id, dev in devs.iteritems():
+ for if_id, dev in devs.items():
peer = dev.get_peer()
if peer == None:
dev.clear_configuration()
@@ -557,7 +557,7 @@ class SlaveMethods:
self._if_manager.deconfigure_all()
- for netns in self._net_namespaces.keys():
+ for netns in list(self._net_namespaces.keys()):
self.del_namespace(netns)
self._net_namespaces = {}
@@ -641,11 +641,11 @@ class SlaveMethods:
return False
def reset_file_transfers(self):
- for file_handle in self._copy_targets.itervalues():
+ for file_handle in self._copy_targets.values():
file_handle.close()
self._copy_targets = {}
- for file_handle in self._copy_sources.itervalues():
+ for file_handle in self._copy_sources.values():
file_handle.close()
self._copy_sources = {}
@@ -1086,7 +1086,7 @@ class SlaveMethods:
return True
def mroute_operation(self, op_type, op, table_id):
- if not self.mroute_sockets.has_key(table_id):
+ if table_id not in self.mroute_sockets:
logging.error("mroute %s table was not init", table_id)
return False
try:
@@ -1098,7 +1098,7 @@ class SlaveMethods:
def mroute_init(self, table_id):
logging.debug("Initializing mroute socket")
- if not self.mroute_sockets.has_key(table_id):
+ if table_id not in self.mroute_sockets:
self.mroute_sockets[table_id] = socket.socket(socket.AF_INET,
socket.SOCK_RAW,
socket.IPPROTO_IGMP)
@@ -1157,7 +1157,7 @@ class SlaveMethods:
(source, group, str(out_vifs)))
ttls = [0] * MROUTE.MAX_VIF
- for vif, ttl in out_vifs.items():
+ for vif, ttl in list(out_vifs.items()):
if vif >= MROUTE.MAX_VIF:
logging.error("ilegal VIF was asked")
return False
@@ -1183,7 +1183,7 @@ class SlaveMethods:
return self.mroute_operation(op_type, mfc_struct, table_id)
def mroute_get_notif(self, table_id):
- if not self.mroute_sockets.has_key(table_id):
+ if table_id not in self.mroute_sockets:
logging.error("mroute table %s was not init", table_id)
return False
try:
@@ -1340,7 +1340,7 @@ class ServerHandler(ConnectionHandler):
self._netns_con_mapping = {}
def update_connections(self, connections):
- for key, connection in connections.iteritems():
+ for key, connection in connections.items():
self.remove_connection_by_id(key)
self.add_connection(key, connection)
@@ -1392,21 +1392,23 @@ class NetTestSlave:
self._if_manager.get_nl_socket())
def run(self):
- while not self._finished:
- if self._server_handler.get_ctl_sock() == None:
- self._log_ctl.cancel_connection()
- try:
- logging.info("Waiting for connection.")
- self._server_handler.accept_connection()
- except (socket.error, SecSocketException):
- continue
- self._log_ctl.set_connection(
- self._server_handler.get_ctl_sock())
-
- msgs = self._server_handler.get_messages()
-
- for msg in msgs:
- self._process_msg(msg[1])
+ while True:
+ try:
+ if self._server_handler.get_ctl_sock() is None:
+ self._log_ctl.cancel_connection()
+ try:
+ logging.info("Waiting for connection.")
+ self._server_handler.accept_connection()
+ except (socket.error, SecSocketException):
+ continue
+ self._log_ctl.set_connection(self._server_handler.get_ctl_sock())
+
+ msgs = self._server_handler.get_messages()
+
+ for msg in msgs:
+ self._process_msg(msg[1])
+ except:
+ break
self._methods.machine_cleanup()
@@ -1507,7 +1509,7 @@ class NetTestSlave:
def _signal_die_handler(self, signum, frame):
logging.info("Caught signal %d -> dying" % signum)
- self._finished = True
+ raise Exception("Received interrupt to system call")
def _parent_resend_signal_handler(self, signum, frame):
logging.info("Caught signal %d -> resending to parent" % signum)
diff --git a/lnst/Slave/NmConfigDevice.py b/lnst/Slave/NmConfigDevice.py
index 29c2c75..74b4f85 100644
--- a/lnst/Slave/NmConfigDevice.py
+++ b/lnst/Slave/NmConfigDevice.py
@@ -218,10 +218,7 @@ class NmConfigDeviceGeneric(object):
except:
#IPv6 conversion into a 16 byte array
tmp = socket.inet_pton(socket.AF_INET6, ip)
- ip = []
- for i in tmp:
- ip.append(ord(i))
- ip = dbus.Array(ip, signature='y')
+ ip = dbus.Array(tmp, signature='y')
def_gateway = dbus.Array([0]*16, signature='y')
ipv6s.append(tuple([ip,
dbus.UInt32(mask),
diff --git a/setup.py b/setup.py
index cc5d6f4..ebb05c4 100755
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python2
+#!/usr/bin/env python3
"""
Install script for lnst
@@ -34,7 +34,7 @@ def process_template(template_path, values):
t = open(template_path, "r")
f = open(file_path, "w")
template = t.read()
- for var, value in values.iteritems():
+ for var, value in values.items():
template = template.replace("@%s@" % var, value)
f.write(template)
f.close()
diff --git a/test_modules/Multicast.py b/test_modules/Multicast.py
index b366c9b..44a003c 100644
--- a/test_modules/Multicast.py
+++ b/test_modules/Multicast.py
@@ -61,7 +61,7 @@ class Multicast(TestGeneric):
cmd = "./{0} ".format(setup)
- for optname, optval in opts.iteritems():
+ for optname, optval in list(opts.items()):
if optval != None:
cmd += "--{0} \"{1}\" ".format(optname, optval)
diff --git a/test_modules/Netperf.py b/test_modules/Netperf.py
index 00692e5..7e9fbd0 100644
--- a/test_modules/Netperf.py
+++ b/test_modules/Netperf.py
@@ -487,7 +487,7 @@ class Netperf(TestGeneric):
results.append(result)
rates.append(results[-1]["rate"])
- if results > 1:
+ if len(results) > 1:
res_data["results"] = results
if len(rates) > 0:
diff --git a/test_modules/TRexClient.py b/test_modules/TRexClient.py
index f3ed2ee..60416e8 100644
--- a/test_modules/TRexClient.py
+++ b/test_modules/TRexClient.py
@@ -19,7 +19,7 @@ class TRexClient(TestGeneric):
super(TRexClient, self).__init__(command)
self._trex_path = self.get_mopt("trex_path")
- self._ports = map(int, self.get_multi_mopt("ports"))
+ self._ports = list(map(int, self.get_multi_mopt("ports")))
self._src_macs = []
self._dst_macs = []
@@ -42,7 +42,7 @@ class TRexClient(TestGeneric):
for res in results:
new_results.append({})
new_res = new_results[-1]
- for key, data in res.items():
+ for key, data in list(res.items()):
if key in self._ports:
new_res["port_"+str(key)] = data
else:
--
2.17.1
5 years
[PATCH-next 00/13] lnst.Recipes.ENRT: add phase1 ported recipes
by csfakian@redhat.com
From: Christos Sfakianakis <csfakian(a)redhat.com>
There is a 1:1 correspondence between these recipes and the old
regression_tests/phase1/ xml files.
Christos Sfakianakis (13):
lnst.Recipes.ENRT: add ActiveBackupBondRecipe
lnst.Recipes.ENRT: add ActiveBackupDoubleBondRecipe
lnst.Recipes.ENRT: add RoundRobinBondRecipe
lnst.Recipes.ENRT: add RoundRobinDoubleBondRecipe
lnst.Recipes.ENRT: add PingFloodRecipe
lnst.Recipes.ENRT: add VirtualBridgeVlanInGuestRecipe
lnst.Recipes.ENRT: add VirtualBridgeVlanInGuestMirroredRecipe
lnst.Recipes.ENRT: add VirtualBridgeVlanInHostMirroredRecipe
lnst.Recipes.ENRT: add VirtualBridgeVlanInHostRecipe
lnst.Recipes.ENRT: add Vlans3OverActiveBackupBondRecipe
lnst.Recipes.ENRT: add Vlans3OverRoundRobinBondRecipe
lnst.Recipes.ENRT: add Vlans3Recipe
lnst.Recipes.ENRT: add VirtualBridge2VlansOverBondRecipe
lnst/Recipes/ENRT/ActiveBackupBondRecipe.py | 68 ++++++++++
.../ENRT/ActiveBackupDoubleBondRecipe.py | 67 ++++++++++
lnst/Recipes/ENRT/PingFloodRecipe.py | 33 +++++
lnst/Recipes/ENRT/RoundRobinBondRecipe.py | 68 ++++++++++
.../ENRT/RoundRobinDoubleBondRecipe.py | 67 ++++++++++
.../ENRT/VirtualBridge2VlansOverBondRecipe.py | 125 ++++++++++++++++++
.../VirtualBridgeVlanInGuestMirroredRecipe.py | 107 +++++++++++++++
.../ENRT/VirtualBridgeVlanInGuestRecipe.py | 90 +++++++++++++
.../VirtualBridgeVlanInHostMirroredRecipe.py | 105 +++++++++++++++
.../ENRT/VirtualBridgeVlanInHostRecipe.py | 88 ++++++++++++
.../ENRT/Vlans3OverActiveBackupBondRecipe.py | 92 +++++++++++++
.../ENRT/Vlans3OverRoundRobinBondRecipe.py | 92 +++++++++++++
lnst/Recipes/ENRT/Vlans3Recipe.py | 85 ++++++++++++
13 files changed, 1087 insertions(+)
create mode 100644 lnst/Recipes/ENRT/ActiveBackupBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/ActiveBackupDoubleBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/PingFloodRecipe.py
create mode 100644 lnst/Recipes/ENRT/RoundRobinBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/RoundRobinDoubleBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/VirtualBridge2VlansOverBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/VirtualBridgeVlanInGuestMirroredRecipe.py
create mode 100644 lnst/Recipes/ENRT/VirtualBridgeVlanInGuestRecipe.py
create mode 100644 lnst/Recipes/ENRT/VirtualBridgeVlanInHostMirroredRecipe.py
create mode 100644 lnst/Recipes/ENRT/VirtualBridgeVlanInHostRecipe.py
create mode 100644 lnst/Recipes/ENRT/Vlans3OverActiveBackupBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/Vlans3OverRoundRobinBondRecipe.py
create mode 100644 lnst/Recipes/ENRT/Vlans3Recipe.py
--
2.17.1
5 years, 1 month
[PATCH-next 00/40] porting of OvS DPDK PvP recipe
by olichtne@redhat.com
From: Ondrej Lichtner <olichtne(a)redhat.com>
What follows is a big patch set that is the result of my work to port
the phase3/ovs_dpdk_pvp recipe to python. The port supports almost
everything that the old recipe did (except for result evaluation) and
includes a significant refactoring of the code.
Instead of working with a hackish ssh tunnel to manipulate the guest,
the patch set introduces a new feature that allows the tester to
connect to an LNST Slave during test execution. This significantly
improves working with the guest, since we can now safely wrap the
testpmd process in a test module and have a comfortable interface for
it. It also removes the recipe's dependency on the python paramiko
library.
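As an illustration of the idea only (this is not the lnst.Tests.TestPMD
module added later in the series), a long-running interactive tool such
as testpmd can be wrapped in a small test-module-style class once the
controller can reach a slave running inside the guest. The command line
and the "start"/"stop"/"quit" prompt commands are assumptions made for
this sketch:

import subprocess

class InteractiveProcessWrapper(object):
    """Minimal sketch of a test-module-style wrapper around an
    interactive command line tool such as testpmd."""

    def __init__(self, cmd):
        # cmd is the full command line as a list, e.g. a testpmd
        # invocation with its EAL options (illustrative only)
        self._cmd = cmd
        self._proc = None

    def start(self):
        # launch the tool and keep its stdin open for commands
        self._proc = subprocess.Popen(self._cmd,
                                      stdin=subprocess.PIPE,
                                      stdout=subprocess.PIPE,
                                      universal_newlines=True)
        self._send("start")

    def _send(self, command):
        # write one command to the interactive prompt
        self._proc.stdin.write(command + "\n")
        self._proc.stdin.flush()

    def stop(self):
        # stop forwarding, quit the tool and wait for it to exit
        self._send("stop")
        self._send("quit")
        self._proc.wait()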
Another significant improvement is a better way of wrapping the TRex
generator in test modules that avoids the tmux session we used
previously; this means that the recipe no longer has that dependency.
Finally, the patch set adds a new feature: arbitrary classes from the
lnst.RecipeCommon package can be synchronized to any slave and used
there. This gives the tester a very nice interface for extending the
slave functionality. In the OvS_DPDK_PvP recipe this is used to
interface with libvirt on the slave machine, which means the controller
can now run on any machine and work with libvirt on any other slave,
instead of the confusing forced binding we've had before.
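As a rough illustration of what such a synchronized helper might look
like (this is not the LibvirtControl class added by this series), here
is a minimal sketch using the standard libvirt-python bindings, with
the connection URI and domain names as placeholders:

import libvirt

class LibvirtControlSketch(object):
    """Minimal sketch of a libvirt helper that could be synchronized
    to and instantiated on a slave."""

    def __init__(self, uri="qemu:///system"):
        # open a connection to the libvirt daemon on the slave
        self._conn = libvirt.open(uri)

    def vm_start(self, name):
        # boot a defined but inactive domain
        self._conn.lookupByName(name).create()

    def vm_shutdown(self, name):
        # request a clean shutdown of the domain
        self._conn.lookupByName(name).shutdown()

    def vm_xml_desc(self, name):
        # return the domain's libvirt XML description
        return self._conn.lookupByName(name).XMLDesc(0)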
On top of these new features, the patch set also includes other smaller
features, a lot of bug fixes and refactoring of some classes.
-Ondrej
Ondrej Lichtner (40):
lnst.Common.DeviceError: define the DeviceReadOnly exception
lnst.Devices.RemoteDevice: add caching capability
lnst.Slave.NetTestSlave: refactor names of dev_*attr methods
lnst.Device.Device: improve cleanup data storage
lnst.Device.Device: raise DeviceDeleted exception on netlink updates
lnst.Device.Device: add bus_info property
lnst.Slave.Job: catch all exceptions from Test Modules
lnst.Slave.Job: cleanup kill only running jobs
move lnst.Common.TestModule to lnst.Tests.BaseTestModule, add
wait_for_interrupt
lnst.Common.Parameters: allow deletions for the Parameters class
lnst.Controller.MachineMapper: sort interfaces in machine descriptions
lnst.Controller.Machine: add mapped boolean
lnst.Controller.MessageDispatcher: refactor wait_* methods
lnst.Controller.Machine: expose init_connection as public method
lnst.Controller.SlavePoolManager: enable machines without interfaces
lnst.Controller.Machine: split set_recipe into prepare_machine and
start_recipe
lnst.Controller.Machine: move VirtualDevice cleanup to Controller
lnst.Controller.Machine: refactor sending classes to Slaves
lnst.Slave.NetTestSlave: track dynamic classes by module name as well
lnst.Slave.NetTestSlave: create the dynamic RecipeCommon module
lnst.Slave.NetTestSlave: support objects from dynamically received
classes
add lnst.Controller.SlaveObject
add lnst.Controller.RecipeControl
lnst.Controller.Host: expose the map_device api to the tester
lnst.Controller.Requirements: add RecipeParam class
lnst.Controller.RunSummaryFormatter: change format for list items
lnst.Controller.Machine: small refactoring
lnst.Slave.InterfaceManager: fix deleted device handling
lnst.RecipeCommon.PerfResult: fix standard deviation calculation
lnst.RecipeCommon.PerfResult: override std_deviation of PerfInterval
and add string descrition
lnst.RecipeCommon.Ping: add parameters to PingConf
lnst.RecipeCommon.Perf: minor refactoring
setup.py: use setuptools instead of distutils and improve package
management
add lnst.RecipeCommon.LibvirtControl
add lnst.Tests.TRex
add lnst.Tests.TestPMD
add lnst.RecipeCommon.TRexMeasurementTool
add lnst.Recipes.ENRT.OvS_DPDK_PvP
lnst.Recipes.ENRT.BaseEnrtRecipe: fix indentation
lnst.Slave.InterfaceManager: disable bulk mode after device creation
lnst/Common/DeviceError.py | 3 +
lnst/Common/Parameters.py | 3 +
lnst/Controller/Controller.py | 41 +-
lnst/Controller/Host.py | 8 +-
lnst/Controller/Job.py | 2 +-
lnst/Controller/Machine.py | 159 +++----
lnst/Controller/MachineMapper.py | 2 +-
lnst/Controller/MessageDispatcher.py | 116 +++--
lnst/Controller/Recipe.py | 13 +-
lnst/Controller/RecipeControl.py | 64 +++
lnst/Controller/Requirements.py | 47 ++-
lnst/Controller/RunSummaryFormatter.py | 14 +-
lnst/Controller/SlaveObject.py | 41 ++
lnst/Controller/SlavePoolManager.py | 7 +-
lnst/Controller/__init__.py | 2 +-
lnst/Devices/Device.py | 48 ++-
lnst/Devices/RemoteDevice.py | 36 +-
lnst/RecipeCommon/LibvirtControl.py | 41 ++
lnst/RecipeCommon/Perf.py | 90 ++--
lnst/RecipeCommon/PerfResult.py | 10 +-
lnst/RecipeCommon/Ping.py | 35 +-
lnst/RecipeCommon/TRexMeasurementTool.py | 87 ++++
lnst/Recipes/ENRT/BaseEnrtRecipe.py | 14 +-
lnst/Recipes/ENRT/OvS_DPDK_PvP.py | 399 ++++++++++++++++++
lnst/Slave/InterfaceManager.py | 14 +-
lnst/Slave/Job.py | 6 +-
lnst/Slave/NetTestSlave.py | 81 +++-
.../TestModule.py => Tests/BaseTestModule.py} | 20 +
lnst/Tests/Iperf.py | 2 +-
lnst/Tests/Netperf.py | 3 +-
lnst/Tests/Ping.py | 2 +-
lnst/Tests/TRex.py | 159 +++++++
lnst/Tests/TestPMD.py | 47 +++
setup.py | 6 +-
34 files changed, 1356 insertions(+), 266 deletions(-)
create mode 100644 lnst/Controller/RecipeControl.py
create mode 100644 lnst/Controller/SlaveObject.py
create mode 100644 lnst/RecipeCommon/LibvirtControl.py
create mode 100644 lnst/RecipeCommon/TRexMeasurementTool.py
create mode 100644 lnst/Recipes/ENRT/OvS_DPDK_PvP.py
rename lnst/{Common/TestModule.py => Tests/BaseTestModule.py} (85%)
create mode 100644 lnst/Tests/TRex.py
create mode 100644 lnst/Tests/TestPMD.py
--
2.17.0
5 years, 1 month
[PATCH] recipes: short_lived_connections: add official_result alias
by Jan Tluka
The test now takes alias official_result. If set to yes, the perfrepo hash
will be added to the saved test execution.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
recipes/regression_tests/phase3/short_lived_connections.py | 5 +++--
recipes/regression_tests/phase3/short_lived_connections.xml | 1 +
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/recipes/regression_tests/phase3/short_lived_connections.py b/recipes/regression_tests/phase3/short_lived_connections.py
index 68ecfca..b2d78dd 100644
--- a/recipes/regression_tests/phase3/short_lived_connections.py
+++ b/recipes/regression_tests/phase3/short_lived_connections.py
@@ -38,6 +38,7 @@ nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
nperf_debug = ctl.get_alias("nperf_debug")
nperf_max_dev = ctl.get_alias("nperf_max_dev")
pr_user_comment = ctl.get_alias("perfrepo_comment")
+official_result = bool_it(ctl.get_alias("official_result"))
adaptive_coalescing_off = bool_it(ctl.get_alias("adaptive_coalescing_off"))
m1_testiface = m1.get_interface("testiface")
@@ -152,7 +153,7 @@ for size in ["1K,1K", "5K,5K", "7K,7K", "10K,10K", "12K,12K"]:
netperf_result_template(result_tcp_rr, tcp_rr_res_data, test_type="RR")
result_tcp_rr.set_comment(pr_comment)
- perf_api.save_result(result_tcp_rr)
+ perf_api.save_result(result_tcp_rr, official_result)
# prepare PerfRepo result for tcp_crr
result_tcp_crr = perf_api.new_result("tcp_crr_id",
@@ -175,7 +176,7 @@ for size in ["1K,1K", "5K,5K", "7K,7K", "10K,10K", "12K,12K"]:
netperf_result_template(result_tcp_crr, tcp_crr_res_data, test_type="RR")
result_tcp_crr.set_comment(pr_comment)
- perf_api.save_result(result_tcp_crr)
+ perf_api.save_result(result_tcp_crr, official_result)
srv_proc.intr()
diff --git a/recipes/regression_tests/phase3/short_lived_connections.xml b/recipes/regression_tests/phase3/short_lived_connections.xml
index e879a6f..9bb43da 100644
--- a/recipes/regression_tests/phase3/short_lived_connections.xml
+++ b/recipes/regression_tests/phase3/short_lived_connections.xml
@@ -12,6 +12,7 @@
<alias name="mapping_file" value="short_lived_connections.mapping" />
<alias name="net" value="192.168.101" />
<alias name="driver" value="ixgbe" />
+ <alias name="official_result" value="no" />
<alias name="adaptive_coalescing_off" value="no"/>
</define>
<network>
--
2.14.4
5 years, 1 month
[PATCH] recipes: short_lived_connections: add coalescing tunable
by Jan Tluka
The test now takes alias adaptive_coalescing_off. If set to yes, the adaptive
coalescing will be turned off on the test devices while performing the tests.
Signed-off-by: Jan Tluka <jtluka(a)redhat.com>
---
.../phase3/short_lived_connections.py | 21 +++++++++++++++++++++
.../phase3/short_lived_connections.xml | 1 +
2 files changed, 22 insertions(+)
diff --git a/recipes/regression_tests/phase3/short_lived_connections.py b/recipes/regression_tests/phase3/short_lived_connections.py
index 6ed38c6..68ecfca 100644
--- a/recipes/regression_tests/phase3/short_lived_connections.py
+++ b/recipes/regression_tests/phase3/short_lived_connections.py
@@ -1,3 +1,4 @@
+from lnst.Common.Utils import bool_it
from lnst.Controller.Task import ctl
from lnst.Controller.PerfRepoUtils import netperf_baseline_template
from lnst.Controller.PerfRepoUtils import netperf_result_template
@@ -37,6 +38,7 @@ nperf_num_parallel = int(ctl.get_alias("nperf_num_parallel"))
nperf_debug = ctl.get_alias("nperf_debug")
nperf_max_dev = ctl.get_alias("nperf_max_dev")
pr_user_comment = ctl.get_alias("perfrepo_comment")
+adaptive_coalescing_off = bool_it(ctl.get_alias("adaptive_coalescing_off"))
m1_testiface = m1.get_interface("testiface")
m2_testiface = m2.get_interface("testiface")
@@ -46,6 +48,20 @@ m2_testiface.set_mtu(mtu)
pr_comment = generate_perfrepo_comment([m1, m2], pr_user_comment)
+if adaptive_coalescing_off:
+ coalesce_status = ctl.get_module('Custom')
+
+ for d in [ m1_testiface, m2_testiface ]:
+ # disable any interrupt coalescing settings
+ cdata = d.save_coalesce()
+ cdata['use_adaptive_tx_coalesce'] = 0
+ cdata['use_adaptive_rx_coalesce'] = 0
+ if not d.set_coalesce(cdata):
+ coalesce_status.set_options({'fail': True,
+ 'msg': "Failed to set coalesce options"\
+ " on device %s" % d.get_devname()})
+ d.get_host().run(coalesce_status)
+
if netdev_cpupin:
m1.run("service irqbalance stop")
m2.run("service irqbalance stop")
@@ -166,3 +182,8 @@ for size in ["1K,1K", "5K,5K", "7K,7K", "10K,10K", "12K,12K"]:
if netdev_cpupin:
m1.run("service irqbalance start")
m2.run("service irqbalance start")
+
+if adaptive_coalescing_off:
+ for d in [ m1_testiface, m2_testiface ]:
+ # restore any interrupt coalescing settings
+ d.restore_coalesce()
diff --git a/recipes/regression_tests/phase3/short_lived_connections.xml b/recipes/regression_tests/phase3/short_lived_connections.xml
index 289205e..e879a6f 100644
--- a/recipes/regression_tests/phase3/short_lived_connections.xml
+++ b/recipes/regression_tests/phase3/short_lived_connections.xml
@@ -12,6 +12,7 @@
<alias name="mapping_file" value="short_lived_connections.mapping" />
<alias name="net" value="192.168.101" />
<alias name="driver" value="ixgbe" />
+ <alias name="adaptive_coalescing_off" value="no"/>
</define>
<network>
<host id="machine1">
--
2.14.4
5 years, 1 month