On Mon, 2024-09-30 at 09:24 -0700, Adam Williamson wrote:
On Mon, 2024-09-30 at 11:18 -0400, Daniel Walsh wrote:
RamaLama is an open source competitor to Ollama. The goal is to make using AI models as simple as using Podman or Docker, while supporting any AI model registry: HuggingFace, Ollama, as well as OCI registries (quay.io, Docker Hub, Artifactory ...).
It uses either Podman or Docker under the hood to run your AI models in containers, but it can also run models natively on the host.
We are looking for contributions of any kind, but we could really use some help getting it packaged for Fedora, PyPI, and Homebrew for macOS.
We have set up a Discord room for discussions on RamaLama: https://t.co/wdJ2KWJ9de
The code is all written in Python.
Join the initiative to make running open source AI models simple and boring.
Having a quick look at it... I assume for packaging purposes we should avoid that yoiks-inducing `install.py` like the plague? Is the setup.py file sufficient to install it properly in a normal way? On the face of it it doesn't look like it would be, but maybe I'm missing something. Given that we're in the 2020s, why doesn't it have a pyproject.toml?
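For reference, I'd expect a minimal pyproject.toml for a pure-Python project like this to look roughly like the sketch below. The metadata values and the entry-point module are my guesses, not anything upstream actually ships:

    [build-system]
    requires = ["setuptools>=61"]
    build-backend = "setuptools.build_meta"

    [project]
    name = "ramalama"
    version = "0.1.0"                # placeholder version
    description = "Make running AI models as simple as Podman or Docker"
    requires-python = ">=3.8"

    [project.scripts]
    # hypothetical entry point; pip would generate the `ramalama`
    # launcher from this instead of a hand-installed ramalama.py
    ramalama = "ramalama.cli:main"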
Thanks!
Erf... and then ramalama.py goes to the trouble of adding the non-standard directory where the Python lib was installed to sys.path before importing it:
https://github.com/containers/ramalama/blob/main/ramalama.py#L10-L15
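Paraphrasing from memory rather than quoting the exact lines, the pattern boils down to something like this (the probed directories here are illustrative, not necessarily the ones in the repo):

    import os
    import sys

    # Probe a few hard-coded install locations and prepend the first
    # one that exists to sys.path, so the library becomes importable.
    for d in ("/opt/homebrew/share/ramalama",
              "/usr/local/share/ramalama",
              "/usr/share/ramalama"):
        if os.path.exists(d):
            sys.path.insert(0, d)
            break

    import ramalama  # only importable thanks to the path hack above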
Why all this? Why not just set it up as a perfectly normal Python lib+script project and let all the infrastructure the Python world has been building for decades handle installing it on various platforms? Is there something I'm missing here, or should I send a PR?
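For comparison, with a standard pyproject.toml-based layout the whole install story on every platform collapses to the usual tooling (generic commands, nothing RamaLama-specific today), and pip generates the console script by itself:

    python -m build     # needs the 'build' package; produces sdist + wheel
    pip install .       # or: pip install dist/ramalama-*.whl
    ramalama --help     # launcher generated by pip from the entry point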
Is it because this was written for macOS originally? But surely there's a standard way to install a normal Python project on macOS that doesn't require a custom install script?!