NAT traversal
by Renato Figueiredo
It's great to see this project getting off the ground!
Our group is very interested in this idea. We have been working with wide-area VM appliance Condor pools for a couple of years and have developed a peer-to-peer virtual network (IP-over-P2P, or IPOP) that supports decentralized NAT traversal with techniques including hole-punching and proxying. We've found this very useful for facilitating the deployment of ad hoc wide-area Condor pools/flocks where nodes are increasingly behind NATs. The IPOP code is also open source and runs at user level (though it currently uses a tap device), so I thought this would be of interest to the list. Here are some pointers for more information (and software):
http://grid-appliance.org
http://ipop-project.org
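For list members who haven't seen the technique, here is a minimal, generic sketch of UDP hole punching in Python. To be clear, this is not IPOP's actual code, and it assumes the peers have already exchanged their public (IP, port) pairs out of band, e.g. via a rendezvous server:

import socket
import time

def punch_hole(local_port, peer_addr, attempts=10, timeout=1.0):
    """Repeatedly send datagrams to the peer's public address so both
    NATs create mappings; return the socket once the peer is heard."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", local_port))
    sock.settimeout(timeout)
    for _ in range(attempts):
        # Outbound packet opens a mapping in our own NAT.
        sock.sendto(b"punch", peer_addr)
        try:
            data, addr = sock.recvfrom(1024)
            if addr[0] == peer_addr[0]:
                sock.sendto(b"ack", peer_addr)  # confirm to the peer
                return sock  # hole established; use this socket for traffic
        except socket.timeout:
            time.sleep(0.5)  # peer's packet may not have arrived yet; retry
    # Symmetric NATs change mappings per destination, so punching can fail;
    # that is where a proxying fallback (as in IPOP) comes in.
    raise RuntimeError("hole punching failed; fall back to proxying")

With cone-type NATs on both sides this usually succeeds; the proxying fallback covers the rest.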
We're starting a collaboration with the Condor group on a particular application of this virtual infrastructure for computer architecture simulation (http://archer-project.org); hopefully sharing our experience with this system can benefit Nightlife and vice versa.
Bests,
--rf
--
Dr. Renato J. Figueiredo
Associate Professor
ACIS Lab / Electrical and Computer Engineering
University of Florida
http://byron.acis.ufl.edu
ph: 352-392-6430
Nightlife - Power Grid Simulations, Fusion Reactor
by Brown, Henry, DoIT
fedora-nightlife-list(a)redhat.com,
Power grids are now modelled and monitored for energy utilization with EMS/SCADA systems.
http://www.sgi.com/pdfs/2320.pdf
Unfortunately they are expensive.
A public domain modelling tool for power grids is Modelica:
http://www.modelica.org/
http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/10559/33412/0158338...
However, a public domain EMS/SCADA system to control power grids is not yet available.
http://www.hitachi.com/rev/1998/revoct98/r4_110.pdf
A grid computing architecture may make EMS/SCADA simulations more widely available.
http://www.ida.liu.se/~kajny/papers/gps-sims2004-kajny.pdf
Nightlife may be able to pioneer a public domain tool for power grid monitoring/modelling.
By using Modelica to build grid models locally, a decentralized power grid may be possible.
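To make the idea concrete, here is a toy merit-order dispatch sketch in Python of the kind of local grid model a Modelica implementation would formalize with proper continuous dynamics. All generator names and numbers below are invented for illustration:

# Hypothetical local generators: (name, capacity in MW, marginal cost $/MWh)
GENERATORS = [
    ("wind_farm",   20.0,  0.0),  # renewables dispatch first (no fuel cost)
    ("solar_array", 10.0,  0.0),
    ("gas_turbine", 50.0, 60.0),  # local peaker unit
]

def dispatch(demand_mw):
    """Meet demand with the cheapest local generation first (merit order)."""
    schedule, remaining = {}, demand_mw
    for name, capacity, cost in sorted(GENERATORS, key=lambda g: g[2]):
        used = min(capacity, remaining)
        schedule[name] = used
        remaining -= used
    if remaining > 0:
        # The local grid cannot cover the load; a centralized grid
        # would import power over transmission lines at this point.
        schedule["unserved"] = remaining
    return schedule

print(dispatch(65.0))  # e.g. an evening peak of 65 MW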
Using these tools might allow displays of power grids on sandtables, etc. for public use.
http://www.sandtable.org/
Public displays of electric utilization and CO2 production may educate customers on electric demand and global warming.
A Nightlife power grid simulator may allow utilities and power co-ops to compete using new technology (renewables, intelligent metering, etc.). Linux server farms exacerbate the demand problem by creating electric load for the cooling of data centers that host web services (Google, Microsoft, etc.).
http://www.thegreengrid.org/home
Could Nightlife be used to model power grids during peak load periods?
Plug-in electric vehicles will charge at night and could compete with air conditioners for generating capacity.
This would cause coal-burning plants to run at full capacity.
Without adequate modelling, more coal plants will be built.
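As a rough illustration of why such modelling matters, here is a toy 24-hour load-curve sketch in Python; the base load, air conditioning, and EV figures are all invented:

BASE_MW = 40.0

def air_conditioning(hour):
    """AC load peaks in the afternoon heat."""
    return 30.0 if 12 <= hour < 20 else 5.0

def ev_charging(hour, ev_fleet_mw):
    """Plug-in vehicles charging overnight add a second, nighttime peak."""
    return ev_fleet_mw if (hour >= 22 or hour < 6) else 0.0

for hour in range(24):
    load = BASE_MW + air_conditioning(hour) + ev_charging(hour, ev_fleet_mw=25.0)
    print(f"{hour:02d}:00  {load:5.1f} MW")

A planner sizing only for the old afternoon peak would miss the new night peak and cover it by running coal plants flat out.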
The power grids in CA, CO, AZ, and NM are centralized around coal.
http://www.cpluhna.nau.edu/Change/power_generation.htm
http://www.bizjournals.com/albuquerque/stories/2003/04/07/story2.html
Centralized power grids are based on fossil fuel economics. Power grids were created to distribute power over long distances to share costs, improve reliability, and reduce energy costs.
Fossil fuels are burned at great distance from cities, and high-voltage transmission lines move the power, losing 10-20% to resistance.
Utility grids have largely been built piecemeal by local utilities. These grids interconnect to meet FERC reliability guidelines.
Centralized power has many problems. Corruption scandals in the 2001 California power market led to the replacement of Governor Gray Davis. Southwest electric utility cartels inflated prices by shutting down plants on hot days.
http://scratch.mit.edu/projects/RangerRick/11273
Another scenario is the decentralization of power production using the pre-existing gas turbine model. These peak-load power plants can deliver power locally to areas requiring it. Gradual replacement of large power grids with small local grids running clean fusion systems is one possible path. Other alternatives such as solar or wind power would need to compete for these local markets.
http://scratch.mit.edu/projects/GeneMachine/34906
Recent articles have ignored power grid management problems.
This board game shows how power grids work and is used with students.
http://www.amazon.com/Power-Grid-Rio-Grande-Games/dp/B0007YDBLE
Game Description
The object of the game is to supply the most cities with power. Players must acquire raw materials such as coal, oil, garbage, uranium, and fusion to power their plants (except for the highly valuable renewable wind/solar plants), making it a constant struggle to upgrade plants for maximum efficiency while retaining enough wealth to quickly expand the network to get the cheapest routes.
Grid Security
Decentralized power grids would be more difficult to destabilize. Redundancy and reliability would increase with local power production. Efficiency gains and a reduction in carbon dioxide would improve utilities' public relations with customers. This could provide the US with a stable, clean, and secure power grid.
Biggest Power Grid Problem
Right-of-way problems in many power grids have forced utilities to decentralize systems. Very few new power lines will be built in the US due to these legal problems. Example: in Santa Fe, NM, the local utility (PNM) wanted to gain right of way along the rail trail to build another high-voltage line to sell coal plant power. Due to local resistance, PNM instead placed a gas turbine on Richards Ave. to balance loads in the Santa Fe area.
A utility power control center is used to manage a multi-state power grid.
Power operators monitor changing power requirements and adjust power plants accordingly.
A series of smaller power grids in cities powered locally by fusion power plants could eventually replace the larger grids.
The fossil cost model would be replaced by a fusion cost model.
A series of distributed small fusion reactors could come online as peak load was required.
Utility control rooms would be replaced by automated small grid systems balancing loads locally.
http://scratch.mit.edu/projects/GeneMachine/77493
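A sketch of what such an automated small-grid controller might do, in Python; the unit size and reserve margin are invented, and this is not any real EMS product:

import math

UNIT_MW = 10.0          # capacity of one hypothetical local unit
SPINNING_RESERVE = 0.2  # keep 20% headroom above current load

def units_needed(load_mw):
    """Number of local units that must be online for load plus reserve."""
    return math.ceil(load_mw * (1 + SPINNING_RESERVE) / UNIT_MW)

online = 0
for load in [35.0, 52.0, 78.0, 61.0, 20.0]:  # sampled local demand in MW
    target = units_needed(load)
    if target != online:
        print(f"load {load:5.1f} MW: adjusting {online} -> {target} units")
        online = target

Each small grid would run a loop like this locally, with no human control room in between.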
Modelling Fusion Reactions with Nightlife
The rate-limiting bottleneck in reaching fusion energy breakeven is plasma stability in magnetic confinement reactors. Plasma interactions may be anticipated by using models of magnetic flux in Polywell reactors.
http://www.talk-polywell.org/bb/viewtopic.php?t=203&highlight=
Fusion reactors use magnets to contain and control plasmas.
These magnets must be monitored and managed in their own small-scale power grid.
Power inflow and outflow must be tracked and managed to reach electric power breakeven.
A Modelica fusion power simulation running on Nightlife could be one example.
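A minimal sketch of that breakeven bookkeeping in Python, with hypothetical telemetry numbers:

def engineering_gain(p_out_mw, p_in_mw):
    """Q_e > 1 means the reactor produces more electricity than it consumes."""
    return p_out_mw / p_in_mw if p_in_mw > 0 else float("inf")

# Hypothetical telemetry: (magnet/heating input MW, electrical output MW)
samples = [(12.0, 3.0), (12.5, 9.0), (11.8, 13.1)]
for p_in, p_out in samples:
    q = engineering_gain(p_out, p_in)
    status = "BREAKEVEN" if q >= 1.0 else "below breakeven"
    print(f"P_in={p_in:5.1f} MW  P_out={p_out:5.1f} MW  Q_e={q:4.2f}  {status}")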
Decentralized power may be realized through small-scale EMS/SCADA systems running on low-cost distributed Linux systems. Could Nightlife be an example?
References:
SCADA references:
http://en.wikipedia.org/wiki/SCADA
http://spectrum.ieee.org/jan05/2407
http://science-community.sciam.com/thread.jspa?threadID=300005637&message...
Polywell Fusion references:
http://www.emc2fusion.org/
http://en.wikipedia.org/wiki/Polywell
http://www.santafenewmexican.com/SantaFeNorthernNM/Robert_Bussard__1928_2...
http://www.talk-polywell.org/bb/
Virtual Polywell Simulation
http://www.mare.ee/indrek/ephi/
http://www.talk-polywell.org/bb/viewtopic.php?t=203&highlight=
Henry Brown
henry.brown(a)state.nm.us
cell 795-3680
office 505 827-2509
here's an example of NightLife's utility....
by Keith Laidig
In response to Jeff Spaleta's recent query about the sorts of projects that could make use of NightLife, I'd like to offer the research in our group as an example of the type of work that could readily use such a resource.
Short version:
Our public protein structure prediction server, Robetta, relies upon farming our computationally intensive steps out to NCSA's clusters, via CONDOR, to provide timely access to our group's methods for the general academic community. But we find that our resources are stretched thin, and at times we are unable to provide researchers the quick response that allows their research efforts to proceed.
If we had access to more computing power, even that available from modest periods of inactivity, we could put that power to work on many pressing issues in biomedical research, such as HIV/AIDS vaccine design, improvement of existing drugs, design of new ones, and creation of new methods to harness biology for problems such as carbon sequestration.
Overly-long version:
I run the various computing infrastructures for David Baker's computational biophysics group at the University of Washington, http://www.bakerlab.org. The group's primary computational focus is the de novo prediction of the 3-D structure of proteins from the linear sequence of amino acids in a given protein chain. The algorithm under constant development here, Rosetta, is an embarrassingly parallel, Monte Carlo application that requires significant amounts of CPU time to discover the "best" protein structures in a statistically significant fashion. This approach has enjoyed modest success over the past few years.
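For readers unfamiliar with the pattern, here is a schematic Python sketch of embarrassingly parallel Monte Carlo search; the "conformation" and scoring function below are stand-ins, not Rosetta's actual energy model:

import random
from multiprocessing import Pool

def fold_once(seed):
    """One independent trajectory: random search for a low-'energy' state."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(10_000):
        candidate = rng.uniform(-100.0, 100.0)  # stand-in for a conformation
        energy = candidate * candidate          # stand-in scoring function
        best = min(best, energy)
    return best

if __name__ == "__main__":
    # Trajectories share nothing, so they scale to as many CPUs (or
    # donated grid nodes) as are available.
    with Pool() as pool:
        results = pool.map(fold_once, range(1000))
    print("best energy over all trajectories:", min(results))

Because the trajectories are independent, throughput scales with the number of CPUs, which is exactly why donated idle cycles help.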
The group's success has led to broad interest in the availability of the methods to the academic community. The code (which is freely available to academic researchers) is challenging to use correctly, and the post-production data crunching can be daunting. As a result we created a publicly available, automatic service, Robetta (http://www.robetta.org), roughly 4-5 years ago to allow anyone to use the methods. We've been victims of our own popularity: the server was soon awash in work that pushed wait times from a day or two to almost a year.
To gain more horsepower, we began a collaboration with NCSA and the CONDOR group to farm our work out to their systems via CONDOR, and that has proven quite successful at keeping wait times down to the range of "months".
I'd specifically like to point out that the CONDOR group has been VERY helpful with our CONDOR issues; their goal is your successful use of CONDOR, and they're good at it! We've been using CONDOR on our local infrastructure for ~8 years and are quite happy. The transition to CONDOR wasn't as challenging for the scientists as I feared, and its integration with Globus makes using remote resources straightforward.
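For list members who haven't driven CONDOR from a pipeline, here is a minimal, generic sketch (not Robetta's actual setup) of queueing a batch of independent jobs from Python; the worker binary name is hypothetical:

import subprocess

SUBMIT = """\
# Hypothetical worker binary; one independent trajectory per job.
universe   = vanilla
executable = fold_protein
arguments  = --seed $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = jobs.log
queue 1000
"""

with open("fold.sub", "w") as f:
    f.write(SUBMIT)

# condor_submit queues 1000 independent jobs; CONDOR then scavenges
# whatever idle machines it can reach (the local pool, flocked pools,
# or Globus-routed remote resources).
subprocess.run(["condor_submit", "fold.sub"], check=True)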
We have researchers from a wide variety of fields who use this service as an integral portion of their research effort, despite the somewhat slow turn-around time. But this summer brings the biennial, world-wide CASP contest (http://www.predictioncenter.org/casp8/; blind testing of methods, running May through August), and, by popular demand, the automatic service is turned toward the many "targets" of the contest, which some use as starting points for their work. This leaves many researchers waiting for their work to be addressed until the contest is completed.
If we could access more computing power, we would be able to keep the service working on the non-CASP-related work during these contests and improve turn-around time in general.
-KEL
--
+> Keith E. Laidig, Ph.D., M.S., M.Phil. laidig(a)u.washington.edu
+> HHMI Affiliate http://staff.washington.edu/laidig
HPC grid software stack
by Greg Newby
I'd like to mention a Grid computing software stack that's available & already reasonably integrated. I'll include a few other comments on HPC software stack adoption for researchers.
It's "Genesis II"
http://www.cs.virginia.edu/~vcgr/wiki/index.php/Genesis_II_Installation_G...
This has a mixture of licenses since the applications have different sources. I don't know whether any of the included applications are ineligible for FC inclusion based on their license. This is a fairly low-level set of packages for things like job scheduling.
I saw some mention of Condor as a basis for an FC HPC stack. This is not a bad idea at all, and builds on a functional code base.
A standards-based approach is taken by the Genesis II package [within the OGF, which is not an official standards organization like IETF & ISO, but is taking a community standards approach to grid computing & related technologies]. Genesis II incorporates various working & standards-compliant pieces into a partial HPC software stack.
On the theme of 'what will researchers use,' I had some discussion with Jeff about this. Getting researchers to move from their proprietary packages can be challenging. I think a different but highly relevant challenge is the need for an integrated & reasonably well-supported HPC software stack. Researchers would like to get hardware & an HPC stack from one location. Or, at least, get an HPC stack that is fully integrated & supported, not a bunch of independent pieces that they need to stitch together. That's a niche I think Nightlife has great potential to address.
-- Greg
Dr. Gregory Newby, Chief Scientist of the Arctic Region Supercomputing Center
Univ of Alaska Fairbanks-909 Koyukuk Dr-PO Box 756020-Fairbanks-AK 99775-6020
e: newby AT arsc.edu v: 907-450-8663 f: 907-450-8603 w: www.arsc.edu/~newby