Manycore Processors: Everything You Know (about Parallel Programming) Is Wrong!

David Ungar is "an out-of-the-box thinker who enjoys the challenge of building computer software systems that work like magic and fit a user's mind like a glove." What follows is a summary of his keynote at SPLASH 2011 in November 2011.
At the end of the first decade of the new century, chips such as Tilera’s give us a glimpse of a future in which manycore microprocessors will become commonplace: every (non-hand-held) computer’s CPU chip will contain 1,000 fairly homogeneous cores. Such a system will not be programmed like the cloud, or even a cluster, because communication will be much faster relative to computation. Nor will it be programmed like today’s multicore processors, because the illusion of instant memory coherency will have been dispelled by both the physical limitations imposed by the 1,000-way fan-in to the memory system, and the comparatively long physical lengths of the inter- vs. intra-core connections. In the 1980s we changed our model of computation from static to dynamic, and when this future arrives we will have to change our model of computation yet again.
If we cannot skirt Amdahl’s Law, the last 900 cores will do us no good whatsoever. What does this mean? We cannot afford even tiny amounts of serialization. Locks?! Even lock-free algorithms will not be parallel enough. They rely on instructions that require communication and synchronization between cores’ caches. Just as we learned to embrace languages without static type checking, and with the ability to shoot ourselves in the foot, we will need to embrace a style of programming without any synchronization whatsoever.
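The arithmetic behind that claim is worth making concrete. A minimal sketch of Amdahl's Law in Python (the function name is mine, not Ungar's):

```python
# Amdahl's Law: speedup on n cores when a fraction s of the work is serial.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even 1% serialization caps 1,000 cores at roughly a 91x speedup,
# wasting the vast majority of the machine.
print(amdahl_speedup(0.01, 1000))   # ~91
print(amdahl_speedup(0.001, 1000))  # ~500
```

With a mere 0.1% of serial work, half the machine is still idle in the limit, which is why even tiny amounts of serialization are unaffordable at this scale.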
In our Renaissance project at IBM, Brussels, and Portland State, we are investigating what we call “anti-lock,” “race-and-repair,” or “end-to-end nondeterministic” computing. As part of this effort, we have built a Smalltalk system that runs on the 64-core Tilera chip, and have experimented with dynamic languages atop this system. When we give up synchronization, we of necessity give up determinism. There seems to be a fundamental tradeoff between determinism and performance, just as there once seemed to be a tradeoff between static checking and performance.
The obstacle we shall have to overcome, if we are to successfully program manycore systems, is our cherished assumption that we write programs that always get the exactly right answers. This assumption is deeply embedded in how we think about programming. The folks who build web search engines already understand this, but for the rest of us, to quote Firesign Theatre: Everything You Know Is Wrong!
This video, an interview with David Ungar of the IBM Renaissance Project on programming manycore computers and non-determinism, is mind-boggling and expands beyond the text above.


David Ungar likes anything creative, from poetry to starting a business. He says the moment we build a 1,000-core processor is like discovering a new space. He also says Smalltalk, originally developed i

Note: on February 7, 2014 I added the original SPLASH 2011 talk, "Everything You Know (About Parallel Programming) Is Wrong!: A Wild Screed About the Future".


serendipiddy said…
Does that mean we need to remove the edge that computers have over us? Or does it allow them to compete with us in originality and guessing, giving computers both human and logical strengths?
Re non-determinism and avoiding unnecessary serialization, Edsger Dijkstra was right... 50 years ago!

Press ON!

Ken 'classmaker' Ritchie
Mako said…
We'll need to get inspiration from organic brain processes since they are essentially manymanymanycore processors. It reminds me of an AI in the book "Society of the Mind". The AI could do all these amazing things like predicting the stock market in almost real time but it wasn't very good at standard number crunching (929 times 14.96 is one example in the book that kind of stumped the AI).
Alex said…
That makes no sense to me. How could you even tell the current market cap of a single stock if you couldn't multiply 929 * 14.96? Let alone predict many stocks. I understand that calculating the 2 millionth digit of pi is a different problem than predicting the stock market tomorrow, but you better damn well be able to do simple multiplication if you wanted to try tackling either problem.
Tim Hawkins said…
Alex, it's the precision vs. perception tradeoff. A human can look at a mass of data on a graph and "perceive" a result; they are sampling and processing all the data in parallel into an approximation of the answer, and each level of summary of the data uses different levels of approximation and different sampling mechanisms. Humans do that really well and can arrive at the "gist" of what's going on really fast, but cannot tell you much detail. A computer application can laboriously work its way through all the data and precisely calculate an answer, but it will take time. When there are silly amounts of data involved, the "perception" method is adaptable and can coarsen its sampling and approximations to match; the precise approach just takes longer and longer.
my-inner-voice said…
@serendipiddy @classmaker @mako @Alex @Tim Thanks very much for your comments. I am still in a state of awe, as this entry had an unprecedented 14,305 page reads in 11 hours, and the count is still increasing by the minute.
Damien Hocking said…
" The obstacle we shall have to overcome, if we are to successfully program manycore systems, is our cherished assumption that we write programs that always get the exactly right answers. "

Amen. Say goodbye to determinism. I've been trying to explain to people why this is important and why we need to embrace it. I'm going to point them to this.
Ruby Claire said…
Yes, approximation and different sampling mechanisms are helpful.

Adam said…
This lecture from Gerald Sussman, the author of Structure and Interpretation of Computer Programs, helped me understand what David Ungar is getting at.

There is some early math in it, but just bear with it and I think it will help in figuring out where we are likely going...
my-inner-voice said…
@adam You should tweet the link to Professor Sussman, as your comment is very relevant.
my-inner-voice said…

Ly and Sly are Languages for Non-deterministic Programming

- Ensembles: collections representing whole, multi-part entities (e.g., a flock of birds)
- Ensembles facilitate map/reduce-like parallelism
- Message adverbs and gerunds control parallelism and semantics
- Sly is an extension to Smalltalk-80
- Ly is a JavaScript-like language implemented in Smalltalk
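The flavor of ensemble broadcast-and-reduce can be sketched in Python (Sly itself is Smalltalk-based; all names below are illustrative, not Sly's actual API): a message is broadcast to every member in parallel, and the replies are then combined.

```python
from concurrent.futures import ThreadPoolExecutor

# A minimal sketch of an "ensemble": a whole made of parts that receives
# messages collectively. Names are illustrative, not Sly's actual API.
class Ensemble:
    def __init__(self, members):
        self.members = list(members)

    def broadcast(self, message, *args):
        # Send `message` to every member in parallel (the map step).
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda m: getattr(m, message)(*args),
                                 self.members))

    def reduce(self, message, combine):
        # Combine the parallel replies (the reduce step). With a
        # commutative, associative `combine`, reply order cannot
        # change the answer -- no locks needed.
        replies = self.broadcast(message)
        result = replies[0]
        for r in replies[1:]:
            result = combine(result, r)
        return result

class Bird:
    def __init__(self, x):
        self.x = x
    def position(self):
        return self.x

flock = Ensemble(Bird(x) for x in [3, 1, 4, 1, 5])
print(flock.reduce("position", max))  # 5
```

The point of the sketch is the shape of the computation: individual members are never addressed one at a time under a lock, and only order-insensitive reductions are asked of the ensemble.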
Ilias Tsagklis said…
Hi there,

Nice blog! Is there an email address I can contact you in private?
anu said…
Very informative article you have got here. I love reading this kind of stuff. This is truly a great read for me. I hope to see more articles from you.
Ron Wolf said…
frankly, i don't get this. you seem somewhat well informed and the guy in the video is seemingly an authority on multi-core and parallel computing, so how can you both be referencing Amdahl's law when it's been long superseded by Gustafson's law?

also, i don't see much new here; shared-nothing architectures were conceived 40 years ago and have, indeed, been quite successful and, in their way, create beautiful problem decompositions.

anyway, a thousand CPUs is nothing. let's talk about millions, at least. on the other hand, if you're not google or a physics lab or amazon, then what exactly do you need a huge increase in computation for? ha ha, yes, that's a joke. sort of... anyway, in large server farms, the issue now is more watts/GHz, so if multi-core helps with that (likely) then great.
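For readers weighing the Amdahl-versus-Gustafson point: the two laws answer different questions rather than one superseding the other. Amdahl's Law fixes the problem size as cores are added; Gustafson's Law lets the parallel workload grow with the machine. A minimal sketch (the function name is illustrative):

```python
# Gustafson's Law: if the parallel portion of the workload grows with
# the machine, scaled speedup on n cores = s + (1 - s) * n,
# where s is the serial fraction of the scaled workload.
def gustafson_scaled_speedup(serial_fraction, cores):
    return serial_fraction + (1.0 - serial_fraction) * cores

# With 1% serial work, 1,000 cores deliver nearly linear scaled speedup
# (~990x), versus the ~91x that Amdahl's fixed-size analysis predicts.
print(gustafson_scaled_speedup(0.01, 1000))
```

Ungar's argument concerns the Amdahl regime: a single program on one chip whose size does not conveniently grow, where even a tiny serial fraction dominates.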
my-inner-voice said…
Here are the original slides of Dr. Ungar at SPLASH:
