The RITlug eboard announced the semesterly FOSS Family Dinner below:
Please fill out the form linked above so we know roughly how many people
to expect and don't overwhelm the restaurant.
Hope to see you next Thursday! More details in the link above.
Justin W. Flory
Hi folks, I wanted to extend this invitation to the FOSS mailing list.
Come join me on Monday at 1pm to learn about containers and
supercomputers! Details below.
-------- Forwarded Message --------
Subject: Scientific Computing Group Meeting on Monday 11/18: Talk by
Justin Flory
Date: Fri, 15 Nov 2019 16:41:27 -0500
From: Daniel Wysocki
Our next Scientific Computing Group meeting is scheduled for this
Monday, Nov 18th, at the usual time (1pm) and place (Orange Hall 1350).
The talk is by RIT's Justin Flory, a 5th year undergraduate student in
Networking and Systems Administration. He recently worked on the IT end
of a supercomputing facility, and researched the performance of
different "containers", which he will be talking about here. For those
who don't know, containers are a way of packaging up an already
configured computing environment, in a way that makes it portable and
reproducible. They're becoming heavily used in scientific computing,
both because scientists like reproducibility, and because it means you
can avoid the headaches of installing complicated programs, and just
grab a container that's already been set up for you.
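To make the packaging idea concrete, here is a minimal, hypothetical Dockerfile for a small Python-based scientific environment (the file names and pinned versions are illustrative, not from the talk):

```dockerfile
# Hypothetical example: capture an already-configured computing
# environment so anyone can rebuild it with a single `docker build`.
FROM python:3.8-slim

# Pin exact library versions so the environment is reproducible.
RUN pip install --no-cache-dir numpy==1.17.4 scipy==1.3.3

# Copy in the analysis code that was developed and tested here.
COPY analysis.py /app/analysis.py
WORKDIR /app

CMD ["python", "analysis.py"]
```

Anyone who builds and runs this image gets the same libraries at the same versions, which is exactly the reproducibility scientists are after.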
Talk details are below, hope to see you there!
*Title:* Docker containers + supercomputers = ??
How do you deliver research software into research computing
environments? Or supercomputers? Docker containers and related products
are revolutionizing IT infrastructure, but where do containers belong in
supercomputing / High-Performance Computing (HPC) infrastructure, on
large distributed computing grids? In a world of proprietary hardware
and drivers, large-scale distributed systems, and emphasis on bare-metal
performance, are containers just another virtualization fad that will
pass supercomputing by? Guess again.
This session explores different container runtimes built for
supercomputing / HPC environments, what they offer, and some
benefits and costs of implementing them alongside HPC job scheduling
software. This session will be useful for you if you work with HPC
infrastructure, you write code intended to run on parallel systems,
or you are interested in what the future of HPC technology might hold.
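As a rough sketch of what "containers alongside HPC job scheduling software" can look like, here is a hypothetical Slurm batch script that runs a containerized workload through Singularity, a container runtime built for HPC clusters (the image and script names are made up for illustration):

```
#!/bin/bash
#SBATCH --job-name=container-demo
#SBATCH --nodes=1
#SBATCH --time=00:10:00

# The scheduler (Slurm) allocates the node; Singularity then runs the
# workload inside a prebuilt container image, unprivileged.
singularity exec analysis.sif python analysis.py
```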
Hey all, I've heard that we are in possession of a large number of OLPCs.
I've been told about the project to get a modern distro working on them
(such as https://github.com/sugarlabs/sugar-live-build) and I'd like to get
involved. First and foremost, I hear several of the units are broken, and
I'd like to take a look at them. Does anybody know the whereabouts of
these units?