Parallel computing: Distributed systems vs multicore processors?


1

I was just wondering why there is a need to go through all the trouble of creating distributed systems for massively parallel processing when we could just build individual machines that support hundreds or thousands of cores/CPUs (or even GPGPUs) per machine?

So basically: why do parallel processing over a network of machines when it could instead be done at much lower cost, and much more reliably, on a single machine that supports numerous cores?

2012-04-03 23:20
by Jacob Griffin


2

I think it is simply cheaper. Those machines are available today; there is no need to invent something new.

The next problem is the complexity of the motherboard: imagine 10 CPUs on one board - that is a lot of interconnects! And if one of those CPUs dies, it could take down the whole machine.

You can of course write a program for a GPGPU, but it is not as easy as writing one for a CPU. There are many limitations, e.g. the cache per core is really small (if there is any at all), and you cannot communicate between cores (or you can, but it is very costly), etc.

Linking many computers together is more stable, more scalable, and cheaper, thanks to a long history of use.

2012-04-03 23:34
by Petr Újezdský


0

What Petr said. As you add cores to an individual machine, communication overhead increases. If memory is shared between the cores, then the locking needed to protect that shared memory, together with cache-coherency traffic, generates increasingly large overheads.

If you don't have shared memory, then effectively you're working with different machines, even if they're all in the same box.
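As a rough illustration, here is a minimal Go sketch (the counter workload is made up, and this is not a serious benchmark): when every core funnels through one lock, the work serialises, whereas giving each worker its own state avoids the contention entirely.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

const increments = 1_000_000

// sharedCounter has every goroutine funnel through one mutex:
// adding cores adds lock contention rather than speed.
func sharedCounter(workers int) time.Duration {
	var (
		mu      sync.Mutex
		counter int64
		wg      sync.WaitGroup
	)
	start := time.Now()
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < increments; i++ {
				mu.Lock()
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

// shardedCounter gives each goroutine its own slot, i.e. no shared
// state while working. (Adjacent slots can still sit on one cache
// line, so real code would pad them; that is the caching overhead
// mentioned above.)
func shardedCounter(workers int) time.Duration {
	counters := make([]int64, workers)
	var wg sync.WaitGroup
	start := time.Now()
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < increments; i++ {
				counters[id]++
			}
		}(w)
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	workers := runtime.NumCPU()
	fmt.Println("shared lock:", sharedCounter(workers))
	fmt.Println("per-worker :", shardedCounter(workers))
}
```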

Hence it's usually better to develop very large-scale apps without shared memory, and it's usually possible as well - although the communication overhead is often still large.
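To make that shared-nothing style concrete, here is a minimal sketch in Go: each worker owns its state and communicates only by message passing. The channels here stand in for the network links a distributed system would use, and the job/result types are made up for the example.

```go
package main

import "fmt"

// job and result are hypothetical types, just for illustration.
type job struct{ n int }
type result struct{ nSquared int }

// worker owns all of its state and talks to the rest of the program
// only through channels: no shared memory, no locks. The same shape
// maps onto processes exchanging messages across a network.
func worker(jobs <-chan job, results chan<- result) {
	for j := range jobs {
		results <- result{nSquared: j.n * j.n}
	}
}

func main() {
	jobs := make(chan job)
	results := make(chan result)

	for w := 0; w < 4; w++ {
		go worker(jobs, results)
	}

	// Feed the jobs, then close the channel so the workers terminate.
	go func() {
		for n := 1; n <= 8; n++ {
			jobs <- job{n: n}
		}
		close(jobs)
	}()

	// Collect exactly as many results as jobs were sent.
	for i := 0; i < 8; i++ {
		r := <-results
		fmt.Println(r.nSquared)
	}
}
```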

Given that this is the case, there's little point in building individual machines with very high core counts - though some do exist, e.g. NVIDIA Tesla...

2012-10-16 11:24
by Sideshow Bob