On 9/30/24 12:24, Adam Williamson wrote:
On Mon, 2024-09-30 at 11:18 -0400, Daniel Walsh wrote:
RamaLama is an open source competitor to Ollama. The goal is to make using AI models as simple as Podman or Docker, while supporting any AI model registry: Hugging Face, Ollama, as well as OCI registries (quay.io, Docker Hub, Artifactory, ...).
It uses either Podman or Docker under the hood to run your AI models in containers, but it can also run the models natively on the host.
We are looking for contributors of any kind, but could really use help getting it packaged for Fedora, PyPI, and Homebrew for Macs.
We have set up a Discord room for discussions on RamaLama: https://t.co/wdJ2KWJ9de
The code is all written in Python.
Join the initiative to make running Open Source AI Models simple and boring.
Having a quick look at it... I assume for packaging purposes we should avoid that yoiks-inducing `install.py` like the plague? Is the setup.py file sufficient to install it properly in a normal way? On the face of it, it doesn't look like it would be, but maybe I'm missing something. Given that we're in the 2020s, why doesn't it have a pyproject.toml?
Thanks!
install.py is just for installing from a web site.
make install
should do everything required to set up the install. The current rpm/python-ramalama.spec
uses a combination of %pyproject_install and make install to put files in place:
%install
%pyproject_install
%pyproject_save_files %{pypi_name}
%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install-shortnames
%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install-docs
%{__make} DESTDIR=%{buildroot} PREFIX=%{_prefix} install-completions
I'm not sure you can do everything in a Python setup.py. Shortnames, man pages (install-docs), and command-completion files are handled separately.
pyproject.toml is new to me.
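For context, pyproject.toml is the standardized (PEP 517/518) way to declare a project's build backend and metadata, and it is what the Fedora %pyproject macros expect. A minimal sketch for a setuptools-based project might look like this (all values, including the entry point, are illustrative assumptions, not taken from the RamaLama repo):

```toml
# Hypothetical minimal pyproject.toml; metadata values are illustrative
# assumptions, not copied from the RamaLama repository.
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "ramalama"
version = "0.0.0"
description = "Run AI models in containers with Podman or Docker"
requires-python = ">=3.8"

[project.scripts]
# Assumed entry point; the real module path may differ.
ramalama = "ramalama.cli:main"
```

With that in place, `python -m build` and the RPM %pyproject macros work without a setup.py at all.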
Lokesh worked on some of this, but with limited time.