On 3/13/19 2:55 PM, Ken Dreyer wrote:
I have some ideas about Koji development. I didn't want to throw a
bunch of ideas up in the air without any code, but at the same time I
did want to at least get the topics out there.
Please let me know what you think!
I've got a few ideas inline.
== API alternative to XML-RPC ==
From time to time I hear complaints about the XML parts of Koji. It's
true that this is showing its age, but XML-RPC is a pretty mature
solution with broad client support in a lot of languages that matter.
Nevertheless I sometimes hear REST offered as a solution. I've worked
with a couple services that added a REST API in addition to the
original XML-RPC API, and unfortunately one of the biggest barriers to
completely transitioning is all the dependencies. Koji's ecosystem is
growing more and more as Koji's architecture becomes more modular and
pluggable, and REST would "break the world". For some of the projects
that tried to transition, I suspect the projects themselves will die
before they drop XML-RPC support.
Moreover, there are some things that have no easy analog with a plain
HTTP API:
- Koji has a "list-api" RPC that automatically provides a list of all
calls the hub provides. This is extremely useful when developing code
and services that interact with Koji. There's nothing simple that
gives us this same functionality out of the box.
- Koji has multi-call support, allowing us to send multiple RPCs over
a single HTTP request. This is critical to operating Koji at scale.
Doing requests serially (or even parallelizing them on the client)
is incredibly slow compared to the performance of multicall
operations. Given Kojihub's single "large box" hub architecture, it's
important to avoid hammering it with more requests.
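For what it's worth, multicall is plain XML-RPC (`system.multicall`), and Python's stdlib speaks it on both ends. A minimal self-contained sketch, with hypothetical `increment`/`multiply` methods standing in for real hub calls:

```python
import threading
from xmlrpc.client import MultiCall, ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Toy server standing in for the hub; register system.multicall support.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_multicall_functions()
server.register_function(lambda x: x + 1, "increment")
server.register_function(lambda a, b: a * b, "multiply")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:%d/" % server.server_address[1])
multi = MultiCall(proxy)  # queue calls on the client side...
multi.increment(41)
multi.multiply(6, 7)
results = list(multi())   # ...then send them all in ONE HTTP request
server.shutdown()
print(results)            # [42, 42]
```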
It's the "XML" that's bad in "XML-RPC", and I am wondering if
could be a good solution. I have not played around with it. There is
slow progress towards developing GSSAPI authentication for this at
I've been a big fan of AMQP for a while.
In theory there could be a 'list-api' queue that various
plugins/builders/hubs/kojira/cats listen on and respond to with their
specific API information. In theory this could make the
self-documenting elements more expressive.
For very large systems a distributed queue could be a big win.
I'd love a more heavily event-driven environment, and I think it
would blend well here too.
== Cheetah -> Jinja ==
Cheetah is essentially dead upstream and there is a lot of support
behind the Jinja2 project.
I could have sworn that I saw some patch from Tomas about this where
he was experimenting with converting over, but maybe I am imagining this.
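For anyone who hasn't used it, here's a tiny sketch of what Jinja2 templates look like (this assumes the jinja2 package is installed; the template text and values are made up):

```python
from jinja2 import Environment

# Jinja2 templates are plain strings with {{ ... }} substitutions and
# {% ... %} control structures; autoescaping guards HTML output.
env = Environment(autoescape=True)
template = env.from_string(
    "<h1>Build {{ nvr }}</h1>\n"
    "{% for log in logs %}<li>{{ log }}</li>{% endfor %}"
)
html = template.render(nvr="bash-5.1-1.fc35", logs=["build.log", "root.log"])
print(html)
```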
== SQLAlchemy ==
I expect an ORM would help with developer velocity and avoiding SQL
injection in a lot of areas. Koji has its own "history" helper methods
to record an audit trail for some changes in the database.
I've had some good experience on a small project using sqlacodegen
to reverse-engineer a pre-existing schema into a series of SQLAlchemy
model classes.
I think a SQLAlchemy transition could be: 1) swap out the psycopg
connection code to use SQLAlchemy connections instead, passing all
raw SQL through the SQLAlchemy connection, then 2) use sqlacodegen to
migrate to rich models over time.
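A rough sketch of step 1, using an in-memory SQLite database in place of Koji's PostgreSQL and a made-up `tag` table, just to show raw SQL flowing through a SQLAlchemy connection with bound parameters:

```python
from sqlalchemy import create_engine, text

# SQLite in-memory stands in for Koji's real PostgreSQL URL here.
engine = create_engine("sqlite:///:memory:")

with engine.begin() as conn:  # transaction commits on success
    conn.execute(text("CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT)"))
    # Bound parameters (:name) instead of string interpolation help
    # avoid SQL injection.
    conn.execute(text("INSERT INTO tag (name) VALUES (:name)"),
                 {"name": "f40-build"})
    names = [row[0] for row in conn.execute(text("SELECT name FROM tag"))]
print(names)  # ['f40-build']
```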
== pytest ==
Currently the Koji tests use Python's unittest framework; pytest
would give us advanced features and cut out a lot of the boilerplate.
pytest is able to execute unittest's tests, so that would help with
the transition instead of having to cut everything over all at once.
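A small illustration of the boilerplate difference (stdlib-only; pytest simply collects the plain function and rewrites the bare assert so failures still show both sides):

```python
import unittest

# unittest style: a class, self, and assert* method names are required.
class TestNVR(unittest.TestCase):
    def test_split(self):
        self.assertEqual("bash-5.1-1".rsplit("-", 2), ["bash", "5.1", "1"])

# pytest style: a plain function and a bare assert do the same job.
def test_split():
    assert "bash-5.1-1".rsplit("-", 2) == ["bash", "5.1", "1"]
```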
== Dynamic builders ==
If Koji's task queue grows beyond what the static list of builders can
handle, there's no way to "burst" to a cloud environment to
dynamically add and remove builder capacity.
I have been brainstorming some kind of an "orchestrator" that can
create the necessary builder credentials and authorize the builders
into the hub. It would need the ability to add and remove Kerberos
principals for each builder's FQDN, or maybe not?
Maybe this could be implemented as an OpenShift operator.
== Event-driven architecture ==
Currently Koji polls a lot. This puts pressure on the hub to
continuously answer all the poll requests from the CLI, web interface,
kojid, etc. Big environments have to tune kojid's sleep time to
use longer timeouts, which means kojid picks up new builds slowly.
In other projects celery with rabbitmq has been a great combination
for dispatching jobs to workers. I think celery could be a good choice
for Koji as well.
Having a more heavily event-based workflow may also make building
plugins easier. With a message queue that I could open up to some
external systems (pulp? lorax-composer? fedmsg?) this could get neat.
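As an in-process stand-in for what a broker-backed dispatcher (celery + rabbitmq) would do across machines, here's the polling-vs-event contrast with a plain queue (the shutdown sentinel and task name are made up):

```python
import queue
import threading

tasks = queue.Queue()
done = []

def worker():
    # Event-driven: block until a task arrives instead of waking up on
    # a sleep-time interval to ask the hub for work. None is our
    # hypothetical shutdown signal.
    while True:
        task = tasks.get()
        if task is None:
            break
        done.append("built %s" % task)

t = threading.Thread(target=worker)
t.start()
tasks.put("bash-5.1-1.fc35")  # the "event" published by the hub
tasks.put(None)
t.join()
print(done)  # ['built bash-5.1-1.fc35']
```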
== Stronger checksums ==
While I was working on content generators, I found Koji relies on md5
in several areas. This hash is cryptographically broken, and we'll
need a stronger one.
It would be ideal to have a tool that can scan every existing build
archive, calculate the new hash values, and add the new hash values
into the database.
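Such a scan tool might be built around a helper like this sketch, which streams each file once and computes the old and new digests together (the function name and chunk size are my own):

```python
import hashlib

def file_digests(path, chunk_size=1024 * 1024):
    """Stream a build archive once, computing old and new digests together."""
    md5 = hashlib.md5()        # kept only for backward compatibility
    sha256 = hashlib.sha256()  # the stronger replacement
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return {"md5": md5.hexdigest(), "sha256": sha256.hexdigest()}
```

A migration tool could then walk the package tree with os.walk and UPDATE the corresponding database rows with the new values.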
== Longer GPG key IDs ==
Koji currently stores short key IDs. This has ramifications for Pungi,
productmd, and probably lots more, because they all get these key
values from Koji.
There is a website that explains the problem with these short key
IDs, and I'm surprised we don't already have attacks on Red Hat's
keys in this area.
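For context, a v4 OpenPGP key ID is just a truncation of the 160-bit fingerprint, which is why the short form is so cheap to collide. A quick illustration with a made-up fingerprint:

```python
# A hypothetical 40-hex-digit (160-bit) OpenPGP v4 fingerprint, not a real key.
fpr = "0123456789ABCDEF0123456789ABCDEF01234567"

short_id = fpr[-8:]   # 32-bit "short" key ID: deliberate collisions are cheap
long_id = fpr[-16:]   # 64-bit "long" key ID: the minimum worth storing
print(short_id, long_id)  # 01234567 89ABCDEF01234567
```

Storing at least the long form (or, better, the full fingerprint) would remove the ambiguity for consumers like Pungi and productmd.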
== Storing builds in object storage (S3) ==
Koji assumes a sizable NFS architecture, and in many environments
object storage like S3 is more attractive and scalable.
There are a couple open-source implementations of S3's API, like Ceph.
Maybe S3 buckets could be another "volume" type for the Koji hub. I
haven't looked in depth at what this would mean for how Koji
manipulates builds (eg with createrepo).
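One possible shape for that, sketched with a hypothetical key-mapping helper; the packages/<name>/<version>/<release>/ layout mirrors what I understand of Koji's on-disk topdir convention, but both the layout and the helper name are assumptions:

```python
def s3_key(build, relpath):
    """Map a build file to a hypothetical S3 object key mirroring Koji's
    /mnt/koji/packages/<name>/<version>/<release>/... on-disk layout."""
    return "packages/{name}/{version}/{release}/{rel}".format(
        rel=relpath, **build)

build = {"name": "bash", "version": "5.1", "release": "1.fc35"}
key = s3_key(build, "x86_64/bash-5.1-1.fc35.x86_64.rpm")
print(key)  # packages/bash/5.1/1.fc35/x86_64/bash-5.1-1.fc35.x86_64.rpm
```

Tools that rewrite repodata (createrepo et al.) would still need local or mounted access, which is the part I haven't thought through.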
I've wondered for a while about possibly using the Pulp project to
store the various build artifacts. Pulp 3 will be backed by
PostgreSQL. I've not thought of a good way to correlate build logs
with RPMs, though...
Fermi National Accelerator Laboratory