OT: Cloud Computing is coming to ...

Christopher A. Williams chriswfedora at cawllc.com
Fri Jul 23 14:54:19 UTC 2010


On Thu, 2010-07-22 at 23:59 -0700, JD wrote:
> On 07/22/2010 10:40 PM, Christopher A. Williams wrote:
> > On Thu, 2010-07-22 at 23:31 -0600, Christopher A. Williams wrote:
> >> On Thu, 2010-07-22 at 22:14 -0700, JD wrote:
> >>>> I've personally deployed Tier 1 databases - Oracle RAC in all of its
> >>>> glory - on cloud infrastructure. Our RAC nodes outperformed their
> >>>> physical counterparts in benchmark testing. And they did it with better
> >>>> availability and redundancy built in.
> >>>>
> >>> I would be interested in seeing independently published benchmarking
> >>> results and not in-house benchmarking done by the devs of the installation.
> >> The benchmarking was done by the client, and is being independently
> >> verified. It will be eventually published for public consumption as well
> >> - most likely around the end of August or so. Remind me and I'll send
> >> you a copy.
> > Almost forgot. Have a look at:
> > http://www.vmware.com/solutions/business-critical-apps/exchange/
> >
> > ...It turns out that the current performance record for a little
> > enterprise application called Microsoft Exchange was set on VMware
> > vSphere, which just happens to be the leading platform supporting
> > cloud-based infrastructure today.
> >
> > Yes - Exchange runs faster on cloud infrastructure than it does
> > running directly on physical hardware. I've benchmarked similar
> > results using the Phoronix benchmark suite. Because of how things can
> > get optimized, such performance increases are actually fairly common.
> >
> Can you describe the hardware of the private cloud
> and the hardware of the non-cloud machine in the
> location where you were involved in the installation
> or configuration?
> I would like to  know what is being compared to what
> when claims of doubling the performance are made.

<snip...>

Good question! The link I gave includes a link to a white paper
describing the hardware environment in detail for the Exchange example.

As for the systems and testing I have done, the comparisons have always
been run on absolutely identical hardware. Most recently, a typical
server configuration might look something like this:

* Dual-socket, quad-core Nehalem processors
* 48 GB RAM
* 32 GB local disk (for booting either the local OS or the hypervisor,
as appropriate)
* Dual 10Gb NICs

In a rackmount server, this would be something like the HP ProLiant
DL380 G6. In a blade system, the HP BL490c running in an HP c7000
chassis with Flex-10 modules would be a good example. You can find
similar configurations from IBM, Dell, and even Cisco (with their UCS
platform).

Since the newest deployments I have done are all on a converged 10Gb
lossless Ethernet network, we had no need for a traditional Fibre
Channel SAN. Our storage protocol was either iSCSI or NFS, depending on
whether we wanted a block protocol or not. All application storage was
then accessed over that IP-based SAN.
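
(If you want a quick sanity check that a storage target is actually
answering on the right protocol over that network, a small Python
snippet along these lines does the job. The host name here is just a
placeholder I made up; 3260 and 2049 are simply the standard iSCSI and
NFS ports.)

import socket

STORAGE_HOST = "storage01.example.com"   # placeholder - use your target's name
PORTS = {"iSCSI": 3260, "NFS": 2049}     # well-known ports for each protocol

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except socket.error:
        return False

for proto, port in sorted(PORTS.items()):
    state = "reachable" if port_open(STORAGE_HOST, port) else "NOT reachable"
    print("%s (tcp/%d): %s" % (proto, port, state))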

Note: Doing this for large clients with sizable budgets also means I get
to play with the new toys a lot. :) But I've done something similar for
my church using the free versions of this stuff, with Fedora and its
iSCSI target providing my SAN. You could also use something like
FreeNAS, but I like Fedora (and this is the Fedora list :) ...). It all
works impressively well. Total acquisition cost for the hypervisor and
other supporting software: $0.00
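
For the curious, here is roughly what the Fedora iSCSI target side of
that looks like, sketched in Python around the tgtadm commands from
scsi-target-utils. The IQN and backing device are placeholders I made
up, not my actual config, so treat it as a starting point rather than a
recipe:

import subprocess

TID = "1"
IQN = "iqn.2010-07.com.example:fedora.san.disk1"   # placeholder IQN
BACKING_DEV = "/dev/vg_san/lv_iscsi"               # placeholder backing volume

def run(args):
    """Run a tgtadm command, echoing it first, and stop if it fails."""
    print("+ " + " ".join(args))
    subprocess.check_call(args)

# Create the target, attach the backing store as LUN 1, and allow all
# initiators (fine on a trusted lab network; lock this down for real use).
run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "target",
     "--tid", TID, "-T", IQN])
run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "logicalunit",
     "--tid", TID, "--lun", "1", "-b", BACKING_DEV])
run(["tgtadm", "--lld", "iscsi", "--op", "bind", "--mode", "target",
     "--tid", TID, "-I", "ALL"])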

So, given that scenario, we load one set of systems to run everything
directly on the physical hardware. We then load our cloud infrastructure
components (usually VMware vSphere, but most any Type 1 hypervisor would
behave similarly) on an identical number of identical servers. Then we
configure VMs on top of that virtual infrastructure platform, tune both
environments (physical and virtual) according to their respective best
practices, and start benchmarking.
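
To make the "start benchmarking" step concrete, the comparison really
just boils down to running the same workload the same number of times in
each environment and comparing the averages. Here's a stripped-down
Python sketch of that idea - the workload command is only a placeholder,
the real testing used full benchmark suites:

import subprocess
import time

# Placeholder workload - substitute whatever benchmark you actually run.
WORKLOAD = ["sysbench", "--test=cpu", "--cpu-max-prime=20000", "run"]
RUNS = 5

def mean_runtime(cmd, runs=RUNS):
    """Run cmd `runs` times and return the mean wall-clock time in seconds."""
    samples = []
    for _ in range(runs):
        with open("/dev/null", "w") as devnull:
            start = time.time()
            subprocess.check_call(cmd, stdout=devnull)
            samples.append(time.time() - start)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Run this same script on the physical host and on the identically
    # sized VM, then compare the two numbers.
    print("mean runtime over %d runs: %.2f s" % (RUNS, mean_runtime(WORKLOAD)))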

In most cases, the performance is similar, with a slight edge to one or
the other. But once you begin to scale, and if there is any kind of
workload where memory or disk cache efficiency is a factor, the virtual
systems generally take off and leave their physical counterparts behind.
The same can be said for CPU efficiency when scaling horizontally.

The reason for this is actually pretty simple. The kernel of a
general-purpose OS has to do a lot of different things. It's a jack of
all trades and master of relatively few, if any, of them. A Type 1
hypervisor replaces the general-purpose OS at the bottom layer, becoming
the final arbiter of who gets scheduled on the CPU when and how, as well
as handling memory allocation, etc. Because it is specialized to do
basically one thing, and one thing only, it is much smaller (VMware ESXi
has a 32 MB footprint) and much more efficient. Since it offloads these
duties from the general-purpose OS running on top of it and is simply
better at doing them, everything gets to run faster.
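
If you want to see the scheduling side of that for yourself, a crude but
telling experiment is to measure how far timer wakeups drift past what
you asked for, first on bare metal and then inside a VM on the same box.
This is just a toy illustration of mine, not one of the benchmarks from
the deployments above:

import time

INTERVAL = 0.010   # request a 10 ms sleep each iteration
SAMPLES = 1000

def mean_wakeup_drift(interval=INTERVAL, samples=SAMPLES):
    """Return the average overshoot (in seconds) past the requested sleep."""
    drift = []
    for _ in range(samples):
        start = time.time()
        time.sleep(interval)
        drift.append((time.time() - start) - interval)
    return sum(drift) / len(drift)

print("average wakeup overshoot: %.3f ms" % (mean_wakeup_drift() * 1000.0))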

Hope that makes sense.

-- 

=========================================
"In theory there is no difference between theory and practice.
In practice there is."

--Yogi Berra


