Saturday, January 26, 2013

Higgs Boson: Think HTC instead of HPC.

What is the Higgs Boson?

The LHC (Large Hadron Collider) in Europe announced the appearance of a new particle among the debris of smashed protons. It is called the Higgs boson, and it is believed to be the secret force that confers mass to matter.

Physicists have searched for it for years, but what is the Higgs boson supposed to do, exactly? A LiveScience infographic explains.
Source: LiveScience

Most people, including well-educated ones, have never heard of the Higgs boson, let alone understood why it was so hard to find.
Why has the Higgs been so hard to find? It is only produced at very high energies, such as those of the Big Bang or those generated in a particle collider like the LHC, and it breaks down almost immediately into a shower of other particles. "The probability of making a Higgs is so small that you are looking for one collision out of 10 trillion."

Most people think HPC 

The calculations have been going on for twenty years, lately consuming between 1,000,000 and 2,000,000 computing hours per day. If you think in terms of HPC (High Performance Computing), flops and giant supercomputing machines, it would take something like the entire TOP500 list, at a cost that could add even more to the US national debt :-)
And this computation does not end; it will continue.
"After an enormous effort by LHC experimenters, the CERN laboratory and worldwide Grid computing community we are very excited to observe an excess in our data from a new particle consistent with the production of a Higgs boson," says UW-Madison Bjorn Wiik Professor of Physics Wesley Smith, who plays a lead role in the CMS experiment. "We will need the additional data planned from the running of the LHC until next year to establish if this is indeed the Higgs boson and that we stand at the threshold of a new era of understanding the origins of mass."

Our cherished assumption is wrong 

David Ungar, a manycore processor researcher, said during an interview:
The obstacle we shall have to overcome, if we are to successfully program manycore systems, is our cherished assumption that we write programs that always get the exactly right answers. This assumption is deeply embedded in how we think about programming. The folks who build web search engines already understand, but for the rest of us, to quote Firesign Theatre: Everything You Know Is Wrong!

Grid Computing versus Cloud Computing at CERN 

When the grid computing infrastructure was created, it handled 15 to 20 petabytes of data annually. This year, CERN is on track to produce up to 30 PB of data. "There was no way CERN could provide all that on our own," says Ian Bird, CERN's computing grid project leader. Grid computing was once a buzz phrase, much as cloud computing is now. "In a certain sense, we've been here already," he says.
 The entire grid has a capacity of 200 PB of disk and 300,000 cores, with most of the 150 computing centers connected via 10Gbps links. "The grid is a way of tying it all together to make it look like a single system."
Internally, CERN runs a private cloud based on the OpenStack open source code. CERN and two other major European research organizations have taken steps to create a public cloud resource called Helix Nebula - The Science Cloud.

All is nice and groovy, but there is a small problem. As Ian Bird politely puts it, "we're just not sure of the costs and how it would impact our funding structure." "From a technical point of view, it could probably work," he says. "I just don't know how you'd fund it."

The French say: "Le bon Dieu est dans le détail" (the good God is in the detail). In English we say "the devil is in the details."

Thinking HTC (High Throughput Computing)

The unprecedented volume of computation behind the Higgs boson discovery was (and still is) being carried out using the concept of HTC.
Open Science Grid services knit together researchers, many repositories of LHC data (UW–Madison is home to two research teams, one each for the two biggest experiments at LHC) and more than 100,000 computers at about 80 sites around the country.
 “It’s also a huge triumph for mankind,” says Miron Livny, CTO at the Wisconsin Institute for Discovery. “There were more than 40 nations that came together for a long time to do this one thing that — even if it all worked out — wasn’t going to make anyone rich. It’s a powerful demonstration of the spirit of collaboration.” 
This colossal computing power came almost for free.

HTC is about sustained, long-term computation. You might think the difference between sustained long-term computation and a short-term sprint is merely quantitative, but the difference is really a qualitative one. HTC is, in essence, sustained throughput over long periods of time.

You want to measure computational hours per day, per week, per year. These numbers are so large that what we really care about is sustained hours. For example, OSG (Open Science Grid) delivers about 2,000,000 hours a day, give or take, which adds up to roughly 730 million hours per year.
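To make the scale concrete, here is a back-of-the-envelope check of those numbers in Python (a sketch only; the real figures fluctuate from day to day):

```python
# Rough check of the sustained OSG throughput quoted above.
# The inputs are approximate; actual delivery varies from day to day.
hours_per_day = 2_000_000                # ~2 million CPU hours per day
hours_per_year = hours_per_day * 365     # sustained over a full year
print(f"{hours_per_year:,} hours/year")  # -> 730,000,000 hours/year
```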

OSG is an opportunistic resource, so there are never guarantees about availability, but on average there is a tremendous amount of capacity. Each OSG site is autonomous, locally owned and operated.

Getting people to think in a high-throughput way helps a lot. There are still many idle machines that anyone can access for free, but they are not HPC (High Performance Computing) resources; they may only be idle for an hour or two. A single 10,000-hour job will never complete on the OSG. But if you can deploy the same task as a workflow of 10,000 one-hour jobs, you could finish in one day. Statistical and Monte Carlo techniques often lend themselves to HTC, and these are exactly the kind of time-consuming stochastic modeling behind the Higgs boson search.
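As an illustration, here is a minimal Python sketch of that decomposition, using a toy Monte Carlo estimate of pi in place of the real physics. The job count, seeds and function names are illustrative only, not part of any OSG or HTCondor API:

```python
import random

SAMPLES_PER_JOB = 1_000_000  # work sized so one job fits in an idle slot

def run_job(seed):
    """One independent job: count random points falling inside the unit circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(SAMPLES_PER_JOB):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

# On the grid, each seed would be submitted as a separate job and could
# run on any idle machine; here we run a few locally and aggregate.
seeds = range(4)
total_hits = sum(run_job(s) for s in seeds)
total_samples = len(seeds) * SAMPLES_PER_JOB
print("pi is approximately", 4.0 * total_hits / total_samples)
```

Because each job is independent and its result is tiny, losing a machine mid-run costs only that one chunk of work, and this is what makes opportunistic, high-throughput resources usable.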

Greg Thain, HTCondor guru, teaching "Think HTC" at the OSG Summer School 2012

By Summer 2013 we will know 

On January 26, 2013, the Washington Post writes:
The world should know with certainty by the middle of this year whether a subatomic particle discovered by scientists is a long-sought Higgs boson, the head of the world’s largest atom smasher said Saturday.
Rolf Heuer, director of the European Organization for Nuclear Research, or CERN, said he is confident that “towards the middle of the year, we will be there.” By then, he said reams of data from the $10 billion Large Hadron Collider on the Swiss-French border near Geneva should have been assessed.
The timing could also help British physicist Peter Higgs win a Nobel Prize.

Professor Peter Higgs explaining what others call the "God Particle"

Unleashing "guerrilla" science

This is what Greg Thain, the "Think HTC" lead evangelist, says:
You, Mr. Researcher, are under constant pressure to deliver results from limited project funding. What would happen to your scientific project if computation were really cheap? Because it is. So try not to think of yourself as constrained by the amount of computation you have locally. What would happen if you could run 100,000 hours? One million hours? This is research. This is cheap. You can take risks. If you used 100,000 hours and still didn't get the expected results, you still have the ability to analyze what happened and try again. No one will cut your funding. Quite the contrary.

Acknowledgements

Greg Thain, from the HTCondor project, and Derek Weitzel, Bosco architect and free thinker.

Disclosure

The opinions expressed in this blog are personal. Yet I am a member of the Bosco team; Bosco is the quintessential "Think HTC" open source product that anyone can try and use for free. You can download it from here.

Saturday, January 05, 2013

The Experiment

This blog entry is a follow-up to the many articles in this blog about High-Functioning Autistic (HFA) employment and the project Dandelion. See "Silicon Valley and Autism. A creative approach." and "Outsourcing to the Autistic Rather Than to India".

"Ahronovitz eyes robotic future - Rocklin", from August 2012, mentioned that David, my HFA son and a Whitney High grad, would attend Sierra College's Mechatronics Program. He attended the mainstream classes of professor Steven Gillette.

In December 2012, he did not pass the final exam. We treat this first semester as an experiment. The definition of an experiment, according to Wikipedia:
An experiment is a methodical procedure carried out with the goal of verifying, falsifying, or establishing the validity of a hypothesis. Experiments provide insight into cause-and-effect by demonstrating what outcome occurs when a particular factor is manipulated. 
The experiment proved that without any special assistance, most HFAs cannot graduate from college. The famous myth that "Einstein was a genius with delayed development" is not supported by the facts; just read the Einstein biography by Walter Isaacson. The great majority of HFAs are not geniuses, just as the great majority of mainstream people are not geniuses either.

Professor Steven Gillette's keen and sensitive spirit of observation:
“David clearly has intellectual and social abilities that lend themselves to academic and to employment opportunities. He displayed a strong desire to work independently, particularly with the programming. He was able to work with other students, and seemed to enjoy his interactions with them. I believe that David can succeed in academic and technology endeavors, with sufficient levels of support. Academically, that support must provide him with focus on timely submittal of labs and quizzes. I found I had to directly request his lab reports and quizzes. Without direct oversight he did not submit the assignments required by the class.”
There is only one problem: shadow (1:1) support for HFAs is not available (yet) in colleges. In a national first, Alta California approved funding for a student to work with David as a shadow when he repeats the same class in the first half of 2013.

David working with his gifted tutor, Roman Sitruk
The Mechatronics course is not easy. Just browse through the textbook Industrial Automated Systems: Instrumentation and Motion Control at Amazon. The course is taught in a lab, with specialized equipment costing perhaps as much as $100,000. The graduates of this class will have a special set of skills that will make them employable for life. Automated and computerized systems are everywhere.

Most great discoveries start with failed experiments, so we now start phase 2 of the experiment. A success here would be huge. We will learn what additional services are needed to educate HFAs in mainstream colleges, together with mainstream students, so they can acquire lifelong employable skills that are in great demand. The next step will be to create a pilot program and get funding for five to ten HFA students whom we identify as suitable for robotics classes.

This is the main difference between project Dandelion and other similar projects, like Specialisterne from Denmark: the Danish organization wants to train the HFA candidates for employment directly. This does not scale, and the payoff is very slow.

We want to create an easy-to-scale solution, whereby we enable the colleges and universities of the United States and Canada - among the finest in the world - to prepare HFA people to get skilled jobs, just like anybody else.

How? I hope our experiment will give us more answers, as we pivot towards a solution.

Update March 8, 2013

David passed his first test this semester with a mark of 79. Last semester, without a tutor, he never passed even one test.

Acknowledgements: Professor Steven Gillette and Mark Kwoka, from Sierra College; David Rydquist and Denise St. Clair, from Alta California Regional Center; Dr. Marjorie Solomon and Susan Bacalman, from the UC Davis MIND Institute, who seeded the idea of Dandelion. Special thanks to Wolfgang Gentzsch, who has known David since he was a child and has been a constant source of positive energy over the years.
