Selected comments on Many Core processors post

These are selected quotes from the Hacker News discussion of the post Many Core processors: Everything You Know (about Parallel Programming) Is Wrong, published on this blog.

All of the comments are outstanding; my apologies for not including everyone here. Please follow the Hacker News link to read them all.
"The obstacle we shall have to overcome, if we are to successfully program manycore systems, is our cherished assumption that we write programs that always get the exactly right answers."
The above statement falls in the realm of the mystical. The writer Rodger Kamenetz writes in Burnt Books that any word that has "seventy faces", meaning it is infinitely interpretable, could belong to the level of Scripture and canonical texts. It is esoteric. In spite of all the texts, courses, and information on Kabbalah, the real sensibility behind it remains far removed from the great majority of people. The commenters quoted below debate with elegance, to say the least.

 
 rauljara
Most of the time, this is not a trade-off worth making. I can't think of a researcher who would willingly trade replicability for speed. I can't think of a mathematician who would base a proof on the idea that a number is probably prime... I can't think of an airline passenger who would be totally fine with the flight computer usually being pretty good. It would be a fine trade-off for games, however. But given the choice between an answer that is correct and an answer that is probably correct, I will take the correct answer.

Probability theory can be an extremely powerful tool when researching things that are otherwise difficult to reason about. And the theorem statement does not have to be probabilistic for the probabilistic method to be applicable. Just see http://en.wikipedia.org/wiki/Probabilistic_method and http://en.wikipedia.org/wiki/Probabilistic_proofs_of_non-pro...
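
The "probably prime" example above is worth making concrete. The sketch below (mine, not any commenter's) shows a Miller-Rabin probable-prime test in Python: it answers "composite" with certainty, but "prime" only with overwhelming probability, controlled by the illustrative `rounds` parameter. This is the kind of answer-that-is-probably-correct trade that number theorists and cryptographers accept routinely.

```python
# Minimal Miller-Rabin sketch: "composite" is certain, "prime" is probabilistic.
import random

def probably_prime(n: int, rounds: int = 20) -> bool:
    """Return True if n is probably prime (error < 4**-rounds), False if composite."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a is a witness: n is definitely composite
    return True            # no witness found: n is probably prime

print(probably_prime(2**61 - 1))  # True (a Mersenne prime)
print(probably_prime(2**61 + 1))  # False (divisible by 3)
```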
"I can't think of an airline passenger who would be totally fine with the flight computer usually being pretty good."
Actually, I would think it's pretty much the opposite. That is, the only type of airline passenger I can think of is one who is fine with the flight computer (and the airplane in general) usually being pretty reliable. We already know that computers can malfunction and airplanes can crash. Now, of course, how reliable you want the airplane to be is up to you, but if you want it to be flawless, then you should never board an airplane.


It's not just the examples that are flawed. In most practical situations, provably correct answers do not exist. In most cases, one can only choose a level of certainty. Sometimes not even the level of certainty can be known.

Those are examples of problems that exist within the space of mathematics and number games we created for ourselves. Computers as we have them now are great for those.
However, when interfacing with the real world, which is inexactly specified and full of incomplete information, "good enough most of the time" is not only fine, it is all you can hope for. Think of robots that need to navigate rough terrain, operate inside living organisms at the nano-scale, or communicate in human language.
There is a huge class of things that simply cannot be addressed with the current "correct" model of computation, because they are not well specified down to the lowest level in the first place.
Computers, in other words, need to be more like brains. 

ajuc 
Our brains work that way, and we trust them.
If I understand correctly, no single neuron is necessary for your brain to work OK. The brain does not depend on any single link working correctly. Somehow it works reliably enough.
It's possible to make reliable systems from unreliable parts.
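
A back-of-the-envelope illustration of "reliable systems from unreliable parts" (my own sketch, with made-up failure rates, not anything the commenter posted): majority voting over independent, individually unreliable components.

```python
# Simulate how majority voting raises reliability above that of any single part.
import random

def unreliable_part(p_correct: float = 0.9) -> int:
    """One component: returns the right bit (1) with probability p_correct."""
    return 1 if random.random() < p_correct else 0

def redundant_system(n_parts: int = 11, p_correct: float = 0.9) -> int:
    """Majority vote over n_parts independent, unreliable components."""
    votes = sum(unreliable_part(p_correct) for _ in range(n_parts))
    return 1 if votes > n_parts // 2 else 0

trials = 100_000
single = sum(unreliable_part() for _ in range(trials)) / trials
voted = sum(redundant_system() for _ in range(trials)) / trials
print(f"single part correct:  {single:.3%}")   # roughly 90%
print(f"11-way majority vote: {voted:.3%}")    # roughly 99.97%
```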

It's difficult to say how reliable our brains are at doing things like driving cars or riding bikes, because we don't really have anything to compare them to. I expect that it won't take long before we can build automated cars that are far safer drivers than any human. I suspect that if we cannot get deterministic behavior out of future computers, because of the amount of parallelization required to make efficient use of their CPUs, we will end up with two streams of computing: one which will stay relatively static and be interested in using computers for the types of problems we currently do, and another which will be interested in applying them to new problems that do not require determinism.

The software for these computers will be radically different, so most likely you will have two computers on your desk (or in your pocket, or in VMs): one with fewer than 10 cores and one with more than 1,000 cores.
I don't know much about how the brain works, but I guess it is a process that uses a lot of heuristics and pseudo-randomness, which probably lends itself well to being parallelized, which is why we set up our languages this way.
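
One concrete, everyday source of the non-determinism this thread is circling around: floating-point addition is not associative, so a parallel reduction whose grouping depends on thread scheduling can return slightly different answers from run to run. The sketch below only simulates the chunking sequentially; the data and chunk sizes are made up for illustration.

```python
# Floating-point sums depend on grouping order, as a parallel reduction's would.
import random

random.seed(0)
values = [random.uniform(-1e9, 1e9) for _ in range(100_000)]

def chunked_sum(xs, chunk):
    """Sum xs the way a parallel reduction might: per-chunk partials, then combine."""
    partials = [sum(xs[i:i + chunk]) for i in range(0, len(xs), chunk)]
    return sum(partials)

print(sum(values))               # sequential, left-to-right
print(chunked_sum(values, 100))  # same data, different grouping
print(chunked_sum(values, 4096))
# The results typically differ in the last digits, even though the
# mathematically exact sum is identical in every case.
```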

wrs 
For those making off-the-cuff judgments of how crazy this idea is: In 1990 or so, Dave Ungar told me he was going to make his crazy Self language work at practical speed by using the crazy idea of running the compiler on every method call at runtime. Then he and his crazy students based the HotSpot Java compiler on that crazy idea, which is now the industry-standard way of implementing dynamic languages. So now I tend to pay close attention to Dave's crazy ideas...
