OT: Cloud Computing is coming to ...
jd1008 at gmail.com
Wed Jul 21 14:46:47 UTC 2010
On 07/21/2010 12:55 AM, Les wrote:
> On Tue, 2010-07-20 at 20:48 -0600, Christopher A. Williams wrote:
>> On Wed, 2010-07-21 at 03:00 +0100, Marko Vojinovic wrote:
>>> On Tuesday, July 20, 2010 23:18:11 Phil Meyer wrote:
>>>> That is the whole point. Ideally this is how it works from a practical
>>>> point of view:
>>>> I am the Dean of Engineering and we need to run a massive simulation of
>>>> a type 5 tornado on a set of bridge types. This is not only for
>>>> instruction in a senior-level class, but also as our main graduate focus for
>>>> the next year.
>>>> We assess our own needs, and access the University 'cloud' website. I
>>>> order 256 2.6 GHZ or better CPUs, 192GB of RAM, and 2TB of disk space,
>>>> and specify Linux as my OS.
>>>> Thirty minutes later I get an email notification with the hostname, IP
>>>> Address, administrator login, and password for my new compute environment.
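The self-service request described above could be expressed as a simple structured spec submitted to a provisioning portal. This is only an illustrative sketch: the field names and the validation rules are invented here, not taken from any real cloud API.

```python
# Hypothetical self-service provisioning request for the scenario
# above. The schema and validate() rules are invented for
# illustration; a real portal would define its own.

def validate(spec):
    """Basic sanity checks a provisioning portal might run."""
    required = {"cpus", "cpu_ghz_min", "ram_gb", "disk_tb", "os"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if spec["cpus"] < 1 or spec["ram_gb"] < 1:
        raise ValueError("resource counts must be positive")
    return True

request = {
    "cpus": 256,          # 256 CPUs...
    "cpu_ghz_min": 2.6,   # ...at 2.6 GHz or better
    "ram_gb": 192,
    "disk_tb": 2,
    "os": "Linux",
}

assert validate(request)
```

Once validated, the portal would queue the request and, as in the scenario, mail back the hostname, address, and credentials when the environment is ready.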
>>> Nope. Thirty minutes later you will get an email saying that the simulation
>>> from the Dean of Engineering is using up a vast amount of resources, and that
>>> there is nowhere nearly enough RAM for what you want.
>> Nope - That's not true. Resource sharing in a cloud environment doesn't
>> work that way unless you're really bad at managing things. If you are,
>> go back to the 101 level course on managing cloud infrastructure. This
>> is the most basic of operations management for cloud.
>>> Then you get pissed off, call the Engineering Dean on the phone, and he tells
>>> you that his department invested more money for the cloud infrastructure than
>>> you, and that his simulation is more important than your database, from the
>>> scientific point of view. Then you call University Dean and through him order
>>> the cloud maintainers to reduce the resources for the simulation in order to
>>> accommodate for your database.
>>> Dean of Engineering then gets pissed off, and decides to withdraw his share of
>>> money and build his own cloud, to be used only by the Engineering department.
>>> Soon enough other departments do the same, and you end up with a dozen
>>> clouds at the University, and they don't interoperate, since people are fighting
>>> over resources and who invested more money and whose work is more important.
>>> This will of course defeat the purpose of having a cloud in the first place,
>>> since every department is going to invest into their own equipment, and then
>>> have it idling or thrown away after their projects are over.
>>> And actually, all this has already happened. I've seen it on a couple of
>>> Universities. It's just that they were "sharing" (i.e. fighting over) *cluster*
>>> resources, not *cloud* resources. But that's just a difference in terminology.
>> No - It's not just a terminology difference. There truly is a difference
>> between clusters and cloud environments. In cloud environments, you can
>> absolutely guarantee resource availability (CPU, RAM, disk, and network
>> resources) to a designated group of systems, and you can dynamically
>> scale the environment to efficiently meet the compute needs of all
>> parties. If anything, it makes capacity planning much simpler.
>> Traditional clusters simply do not, and cannot, do this.
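The distinction being claimed here is essentially admission control against per-tenant reservations: each group gets a guaranteed slice, so one tenant's burst cannot starve another. A minimal sketch of that model, with invented names and numbers (this is not any particular cloud product's API):

```python
# Minimal sketch of per-tenant resource guarantees: workloads are
# admitted only against the tenant's own reserved headroom, so
# the Engineering simulation cannot consume the database group's
# RAM. All names and figures are illustrative.

class Pool:
    def __init__(self, total_ram_gb, reservations):
        if sum(reservations.values()) > total_ram_gb:
            raise ValueError("reservations exceed pool capacity")
        self.free = dict(reservations)  # per-tenant remaining headroom

    def admit(self, tenant, ram_gb):
        """Admit a workload only within the tenant's guarantee."""
        if self.free.get(tenant, 0) >= ram_gb:
            self.free[tenant] -= ram_gb
            return True
        return False

pool = Pool(512, {"engineering": 192, "databases": 256})
assert pool.admit("engineering", 192)    # the simulation fits
assert pool.admit("databases", 128)      # the database is unaffected
assert not pool.admit("engineering", 1)  # engineering is capped at its share
```

A traditional batch cluster, by contrast, typically shares one best-effort queue, which is exactly where the fights over "whose job is more important" come from.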
>>> When people invest their money into something, they want to be "in charge" of
>>> it. And if they are supposed to share it with others, there is bound to be a
>>> lot of friction.
>>> I agree that this cloud environment can be useful in a commercial company
>>> where there is only one "money bag". But in a University environment, somehow I
>> I've faced this issue in more client engagements than I care to count.
>> It's invariably a red herring. The vast majority of people really just
>> want reasonable guarantees that they will actually have their
>> *realistic* expectations met. Maintaining control over one's own little
>> IT fiefdom for the sake of ego maintenance is something even commercial
>> organizations cannot afford these days. Let's not even start with
>> In my professional life, I deal with exactly these kinds of issues
>> constantly. While most folks just want to understand how things work,
>> the truly dogmatic objectors turn out with impressive consistency to all
>> be basically ego-driven control freaks, who are also not very adept at
>> this kind of technology - let alone IT infrastructure.
>> One short, but true story: Someone who fit this dogmatic category well
>> (his own coworkers would joke about him) once asked me about how much
>> storage we were allocating VMs. I said, "On the average we plan for 50GB
>> of disk per VM," - an industry norm.
>> Half-chuckling, he said, "Well, that's not enough for us."
>> "Really?" I replied. "How much disk do you need?"
>> Leaning back in his chair, he smugly answered, "We usually deal with
>> storage in a minimum of half-terabyte increments."
>> "Oh." I said. "I see. ...and how much of that half-terabyte are you
>> currently using?"
>> Someone from the other end of the table snickered (I guess they couldn't
>> help it) and said, "...About 20GB."
>> ...That pretty much ended the meeting. I thought to myself, so in
>> reality, this guy *wastes* about half a terabyte of disk at a time.
>> Over the life of that phase of the project, he never came close to using
>> 200GB (although he demanded much more repeatedly - he just couldn't
>> justify it), and when his disk usage spiked, people jokingly questioned
>> what part of his personal MP3 collection he was keeping out there - at
>> which point his storage usage started to go down.
>> We just made sure he always had the compute resources he really needed,
>> and also made sure that he was efficiently using what was given to him.
>> Think of it as an "Eat all of your food, children, or no dessert!" policy.
> I like my systems to be local. I program them, I explore them, I
> sometimes hack on them with software, hardware, or a combination. I
> occasionally take one of them offline and use it for a program dump, or
> just to mess with ethernet stuff without impacting my network.
> I have private files on my system, and lots of works in progress. I
> sometimes do customer work on my systems (if they permit it), and the
> data files, simulation files, pattern files (I test SoC devices) can
> consume up to 20G/device. They don't compress well, due to the size and
> variety of the data, and I often have device data for 10-12 devices on
> line at a time. I do not have that all the time, it is "burst work",
> and consumes tremendous amounts of disk space, sometimes for hours,
> sometimes for days and in a few cases weeks at a time.
> A 500G disk is about $200, lasts me an average of 3-5 years, with no
> other costs. The backup is a similar disk, via a plugin usb, firewire,
> or sometimes mounted. Total cost for supporting 5 years of data: $400. A
> cloud system where I use that much bandwidth and storage runs about
> $3500 for the equivalent usage, and some of the things I do would not be
> permitted by the cloud management for fear I would mess things up, and I
> occasionally do (have you written bug-free programs of any significant
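The cost comparison above can be checked with back-of-the-envelope arithmetic. The local figures are the poster's own; the per-GB-month cloud storage price is an assumed round figure for 2010-era services, used only to show the shape of the comparison:

```python
# Local disk vs. cloud storage over the same 5-year window.
# LOCAL figures come from the post; CLOUD_PER_GB_MONTH is an
# assumed round price, not a quote from any provider.

DISK_GB = 500
LOCAL_COST = 200 + 200          # primary disk + backup disk, USD
YEARS = 5

CLOUD_PER_GB_MONTH = 0.10       # assumed USD per GB-month
cloud_cost = DISK_GB * CLOUD_PER_GB_MONTH * 12 * YEARS

print(f"local: ${LOCAL_COST} over {YEARS} years")
print(f"cloud: ${cloud_cost:.0f} over {YEARS} years (storage alone)")
```

Even before bandwidth and any premium for non-standard usage, the recurring rental model lands in the thousands against a one-time $400 outlay, which is the poster's point.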
> Moreover you pointed out one of the real issues: month to month rental
> or lease or whatever you want to call it. And that is not counting the
> connection costs, storage premium if you are a non-standard user, or the
> lack of control, or the exposure of online data to search because it
> falls under different jurisdictions. In addition, for any given algorithm,
> the effective security of encryption on a shared server system is by design
> a class below that of the same algorithm on a private system, due to the
> resources available to attack it. IT folks love the idea of more control, less
> diversity in program support and all the other control they can
> exercise. The guys who make a real difference in IP are priced out of the
> equation, because they don't fit the profile of the "average user".
> In addition the down time becomes universal instead of private. In a
> time when we are threatened by terrorists, putting your whole
> organization's software and data in one big "basket in the cloud" seems
> like a recipe for disaster. And it is a disaster that would make the
> financial meltdown look tame. A single EMP weapon could disable or
> destroy multiple companies in a single region, a domino economic effect
> that could have catastrophic implications.
> I watched a very good company have a big breakdown when their old server
> system with dumb terminals went down. The costs, and impacts nearly put
> them out of business. If they had been smaller it would have.
> I have no doubt that companies will embrace the cloud. At least until
> it all comes crashing down around their ears. PCs became dominant
> precisely because centralized solutions were an inhibiting factor on
> business, and personal schedules. Some lessons are too soon forgotten.
> Les H
You forgot to mention the large management bureaucracy they will create
around cloud systems, which will cost 10 to 100 times the cost of the
machines. I do not call that a smart or efficient use of money.