I'm currently working on a system to run rpminspect on every build coming out of Koji, but it will likely branch out and replace Taskotron in the not-so-distant future. As part of this, I have some questions about the gating system and the other CI systems which feed it.
How does the gating system handle builds with missing results? As much as I'd like to say that the new system will be perfect, I know better, and I hope that other folks do as well. When internal issues lead to results the gating system is looking for going missing, how is that handled?
Is there anything that is looking for these missing results currently, or is it only addressed if/when brought up by packagers?
How are requests for re-running tests handled?
For the existing pipelines, how are test triggers cached or jobs re-scheduled when there's downtime for the testing system?
Thanks,
Tim
On Mon, Jul 15, 2019 at 10:03:04AM -0600, Tim Flink wrote:
> I'm currently working on a system to run rpminspect on every build coming out of Koji, but it will likely branch out and replace Taskotron in the not-so-distant future. As part of this, I have some questions about the gating system and the other CI systems which feed it.
>
> How does the gating system handle builds with missing results? As much as I'd like to say that the new system will be perfect, I know better, and I hope that other folks do as well. When internal issues lead to results the gating system is looking for going missing, how is that handled?
Missing results will block the update from going through.
> Is there anything that is looking for these missing results currently, or is it only addressed if/when brought up by packagers?
Nothing looks for them actively, but a cron job that reports updates created more than, say, 6h ago and still pending their CI status should be fairly easy to set up.
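Such a cron job could be a short script along these lines. This is only a sketch: the `/updates/` endpoint URL and the `alias`/`date_submitted` field names are assumptions based on Bodhi's public JSON API and should be verified against the live API before relying on them.

```python
#!/usr/bin/env python3
"""Cron-job sketch: report Bodhi updates still pending their CI status.

The BODHI_URL endpoint and the 'alias'/'date_submitted' JSON fields are
assumptions about Bodhi's public API -- check them before deploying.
"""
import json
from datetime import datetime, timedelta
from urllib.request import urlopen

# Assumed endpoint returning pending updates as JSON.
BODHI_URL = "https://bodhi.fedoraproject.org/updates/?status=pending"
MAX_AGE = timedelta(hours=6)


def overdue(updates, now, max_age=MAX_AGE):
    """Return aliases of updates submitted more than max_age ago.

    Each update is a dict with 'alias' and 'date_submitted'
    ("YYYY-MM-DD HH:MM:SS"), mirroring the assumed Bodhi fields.
    """
    result = []
    for up in updates:
        submitted = datetime.strptime(up["date_submitted"], "%Y-%m-%d %H:%M:%S")
        if now - submitted > max_age:
            result.append(up["alias"])
    return result


def main():
    # Fetch the current list of pending updates and print the overdue ones;
    # in a real deployment this report would go to email or IRC instead.
    with urlopen(BODHI_URL) as resp:
        data = json.load(resp)
    for alias in overdue(data.get("updates", []), datetime.utcnow()):
        print(f"still pending CI after 6h: {alias}")


if __name__ == "__main__":
    main()
```

Keeping the age check in a standalone `overdue()` function makes the report logic easy to test without touching the network.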
> How are requests for re-running tests handled?
I've started a private thread on this question. The current suggestion is to have certain key words, added in a comment, re-trigger the tests, but I have had no response yet on which key words or what comment content would be required for the pipeline to trigger.
Pierre
On Tue, Jul 16, 2019 at 9:55 AM Pierre-Yves Chibon <pingou@pingoured.fr> wrote:
> > How are requests for re-running tests handled?
>
> I've started a private thread on this question. The current suggestion is to have certain key words, added in a comment, re-trigger the tests, but I have had no response yet on which key words or what comment content would be required for the pipeline to trigger.
I believe Tim was talking about tests triggered by Koji builds, i.e., there's no comment field anywhere.