On Fri, Feb 05, 2021 at 02:18:29PM +0100, Jan Tluka wrote:
This patchset refactors the IperfFlowMeasurement and
IperfFlowMeasurementGenerator classes to allow parallel iperf testing.

The current parallel implementation, enabled by the perf_parallel_streams
recipe parameter, works correctly, but its limitation is that a single
iperf process creates all of the connections, so that process (and all of
its connections) is handled by a single CPU at a time. In our internal
testing this resulted in highly variable CPU utilization numbers.
This patchset extends the IperfFlowMeasurementGenerator with additional
recipe parameters, perf_parallel_processes and
perf_parallel_processes_cpus, to support parallel iperf testing.
The patchset also updates DevInterruptHWConfigMixin, which is required
for this test scenario to produce reproducible results.
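For illustration, the difference from perf_parallel_streams is that each
stream here gets its own client process pinned to a CPU. A minimal sketch
of that idea, using taskset and iperf3 directly rather than the actual
LNST job API (the function name and parameters are hypothetical):

```python
def iperf_client_cmds(server_ip, processes, cpus, base_port=5201):
    """Build one pinned iperf3 client command per requested process.

    Each process is bound to a CPU via taskset (cycling through the
    given CPU list) and targets its own server port, so the load is
    spread across cores instead of landing on a single CPU.
    """
    cmds = []
    for i in range(processes):
        cpu = cpus[i % len(cpus)]
        cmds.append(["taskset", "-c", str(cpu),
                     "iperf3", "-c", server_ip, "-p", str(base_port + i)])
    return cmds
```

Each command list could then be started with subprocess.Popen, giving
one independently schedulable iperf process per stream.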
Jan Tluka (8):
Perf.Measurements.BaseFlowMeasurement.NetworkFlowTest: change flow to
contain a list of server/client jobs
Perf.Measurements.IperfFlowMeasurement: adapt to changes of
NetworkFlowTest
TRexFlowMeasurement: adapt to changes of NetworkFlowTest
Recipes.ENRT.MeasurementGenerators.IperfMeasurementGenerator: add
perf_parallel_processes parameter
Perf.Measurements.IperfFlowMeasurement: use parallel_perf_processes
parameter
Recipes.ENRT.MeasurementGenerators.IperfMeasurementGenerator: add
parallel_perf_processes_cpus parameter
Perf.Measurements.IperfFlowMeasurement: use parallel_processes_cpus
parameter
Recipes.ENRT.ConfigMixins.DevInterruptHWConfigMixin: change
dev_intr_cpu to dev_intr_cpus
.../Perf/Measurements/BaseFlowMeasurement.py | 28 +++++--
.../Perf/Measurements/IperfFlowMeasurement.py | 82 +++++++++++++------
.../Perf/Measurements/TRexFlowMeasurement.py | 31 ++++---
.../ConfigMixins/DevInterruptHWConfigMixin.py | 38 +++++----
.../IperfMeasurementGenerator.py | 19 +++++
5 files changed, 138 insertions(+), 60 deletions(-)
--
2.26.2
We discussed this over a video conference meeting and agreed on
investigating some additional ideas. I'm only including the main points
here instead of the full discussion, as the email would be too long:
* instead of creating multiple server/client jobs per NetworkFlowTest,
  generate multiple parallel Flows to test (each translating into a
  NetworkFlowTest with a single server+client pair)
* differentiate the parallel flows by adding a source/destination port
  (completing the traditional 5-tuple that identifies a flow)
* extend the recipe parameter for CPU pinning to allow a list of CPU
  cores, and potentially add a parameter that defines the policy for how
  these CPUs are used during generation of Flow combinations. If a
  single CPU is specified and a single Flow is generated, functionality
  stays as it is now; if multiple CPUs are provided and multiple Flows
  are generated, we can for example assign CPUs to the flows in a
  round-robin fashion
* extend the "cpupin" property of a Flow to accept a list of integers
  (multiple CPUs that can be used by the specific flow); this would add
  the possibility of an additional policy for the previous point: run
  all flows on all provided CPUs
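The round-robin policy above could be sketched roughly as follows
(assuming a plain list of already-generated flows rather than the actual
LNST generator code; names are illustrative only):

```python
from itertools import cycle

def assign_cpus_round_robin(flows, cpus):
    """Pair each generated flow with one CPU, cycling through the
    available CPUs when there are more flows than CPUs.

    With a single CPU and a single flow this degenerates to the
    current behavior (one flow pinned to one CPU).
    """
    cpu_cycle = cycle(cpus)
    return [(flow, next(cpu_cycle)) for flow in flows]
```

The "all flows on all CPUs" policy from the last point would instead
pair every flow with the full CPU list.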
-Ondrej