Monday, May 27, 2013

More about Waxahachie. Why we need it badly.

I dug further into Leon Lederman's 1993 book The God Particle: If the Universe Is the Answer, What Is the Question?, as a continuation of my article Waxahachie, Fermilab and President Obama.

This is how the road from Miletus to Waxahachie looked in the mind of the Nobel Prize-winning physicist:

Figure 1: The road leading to the Higgs discovery should have gone through Waxahachie
Referring to the '80s and '90s, Lederman wrote:
...something pervades this empty space and provides atoms with an illusory weight. That "something" will one day manifest in our instruments as a particle.... In the meantime (1993) nothing exists except atoms and empty space; everything else is an illusion.
With the benefit of hindsight, 20 years later, in 2013, here is how the last segment, from the red circle, appears today.

Fig. 2: The final segment of Leon Lederman's roadmap from Miletus to Waxahachie
For over half a century we sought to discover the Higgs boson. We did: not in Waxahachie (outside Dallas, Texas), but in Switzerland.

We could divide time into two parts:

  1. The world hoping that the Higgs boson exists
  2. The world knowing that the Higgs boson exists
Here is the irony. In this post-Higgs-discovery age, the LHC is shut down for two years, Fermilab's Tevatron is closed, and on the entire planet we do not have a single collider decent enough to do high energy physics. The only dot left on the map after the Higgs discovery is the Burger King. They are still in business.

The SSC Waxahachie site was for sale on an "as-is basis." In February 2012, Magnablend, a local company with 250 employees, bought the property. Here is the FAQ on why they did it, and a quote:

Q: Will Magnablend use the tunnels?
A: No. The tunnels have already been closed, filled, and are inaccessible. We will continue to keep them closed and unused.

It will be interesting to see what happens now. I imagine Magnablend would happily sell the property at a handsome profit. The employment an SSC would generate is vastly higher than Magnablend's. But this is a headache, a battle no one seems prepared to fight.

Except a few bloggers like me.



Waxahachie has an exotic name, but it is just a few miles south of Dallas, Texas. It has some of the best hotels and steak restaurants in the nation.

Thursday, May 23, 2013

SAP to hire hundreds of autistic employees. Thank you SAP!

The headlines are everywhere:




This is a great victory, and the self-promotion skills of Specialisterne Chief Executive Steen Thygesen paid off for a good cause.

However, the idea of having a large company open up and methodically hire employees on the autism spectrum was started by Ahrono Associates. It served as a good inspiration to Specialisterne, whose previous focus was collecting $1 million in charitable donations to train 50 autistic employees per year, plus heavy government subsidies in Europe.

A snapshot of the Project Dandelion web site is shown below:

This blog has many references to what we think: employment starts with education in universities and community colleges.

As a professional, I am part of a team creating system software with a superior UX (user experience). See the Bosco web site.

Ideally this tool will enable any user, including autistic people, to access supercomputers and large-scale data centers situated all over the world. This is the computer technology that led to the discovery of the Higgs particle in physics, and to more down-to-earth studies, for example predicting droughts and agricultural harvests all over the world as our planet warms.

I intend to have my son David Ahronovitz test Bosco's quick startup.

The idea of customer-centric design is suddenly in vogue. Intel and other high tech colossi thrive on it.

Nice. The question is: who is the user we must center on? We should include autistic employees in defining that user.

We see new realities taking shape. Thank you, SAP, for being the first!

Sunday, May 19, 2013

A conversation with Dr. Hans Meuer and Dr. Horst Gietl. Originally published in HPCwire

The House of Lords and Super Computing

About one year ago, in April 2012, the House of Lords invited Prof. Dr. Hans Meuer of Prometeus and the University of Mannheim, General Chair of the International Supercomputing Conference (ISC'13), for a presentation with a rather provocative title: Supercomputers - Prestige Objects or Crucial Tools in Science and Industry?

Dr. Meuer co-authored the paper with Dr. Horst Gietl, Executive Consultant, Prometeus.


Fig. 1: Dr. Hans Meuer (left) with Lord Laird
Why would the venerable House of Lords be interested in supercomputing? The 2nd Lorraine King Memorial Lecture was hosted by Professor the Lord Laird of Artigarvan, a former computer programmer.

Finally, a professional computer expert is part of the House of Lords, perhaps a great recognition of the roles Computer Science and IT play in our society and in the production of wealth. This is the House of Lords of the 21st century.



Second, they invited a worldwide expert, Dr. Meuer, who is not British but German. In the absence of a Nobel Prize for Computer Science, the Lorraine King Memorial Lecture may become (why not?) one of the more prestigious events to honor great men and women advancing the computer industry.

Dr Meuer told his audience about the TOP500 supercomputer project, which was launched at the University of Mannheim, Germany in 1993. It is the only project in the world that has been successfully tracking and evaluating the supercomputer market for 20 years. Two TOP500 lists are published per year, one at the International Supercomputing Conference in Germany in June and one at the SC Conference in the U.S. in November.

The distinguished audience learned that the UK ranked 4th in the TOP500 list of supercomputer-using countries, and that France was the only European country with any capability to manufacture supercomputers. With a true British sense of humor, the Lords' reaction is fittingly described by one blogger reporting the event:
Clearly more needs to be done by the likes of the UK or Germany to remain competitive in the Supercomputing stakes, which begged the question, (as posed later by an attendee), of whether these machines were nothing more than objects of geopolitical prestige, superiority and / or bragging rights, (e.g. My Supercomputer is faster than yours, so Nyah-nyah, nyah-nyah nyah-nyah!)
Lord Laird summarized this by saying that the computer industry has "a certain lack of visibility": "If we don't know who you are, or what it is you want, then that is entirely your own fault!"

Fig. 3: Professor the Lord Laird of Artigarvan

Soccer and Supercomputing

In hindsight, Lord Laird's words honor the entrepreneurial spirit and the great effort it took to bring supercomputing to the world's attention, against skepticism and ironic smiles. Hans Meuer is a chess player and, like me, a soccer aficionado. He asked me the first question:

Hans: Do you know what my favorite soccer team is?

(I watch soccer on the GolHD and Fox Soccer TV channels in California.)

Miha: Bayern? Dortmund?

Hans: TSG Hoffenheim

Miha: Hoffen... what? Are they in Bundesliga 1?

Fig. 4: TSG Hoffenheim, "in the middle of nowhere" :-)

Hans: Hoffenheim is a small village of 3,000 people about 15 miles south of Heidelberg. As you say in America, it is in the middle of nowhere. When I came here in 1974, the team played in the lowest possible league in Germany, the Kreisliga, about seven levels below Bundesliga 1. Dietmar Hopp, one of the co-founders of SAP, spent his childhood in Hoffenheim and played soccer there. He invested in the team, and in 2008 it entered Bundesliga 1.

(Note that Dietmar Hopp is on the Forbes list as the #185 richest individual in the world, with a net worth of $6.5 billion.)

Top 500


Miha: After 20 years, the TOP500 list you helped create has become a prestigious ranking coveted by every supercomputer team, manufacturer and country in the world. How did it all start?

Hans: Erich Strohmaier and I came up with the TOP500 idea at Mannheim University at the beginning of 1993. We knew we needed the US to buy into the concept, so I asked Jack Dongarra (the father of Linpack) to become one of the authors from the very beginning; Horst Simon became an official author in 2000. We are four TOP500 authors: Meuer, Strohmaier, Dongarra, Simon.



Erich Strohmaier describes the Top 500 experience elsewhere:
“When we started this, it was to gather statistics for a small conference. We never expected the scope and popularity to grow as it did,” 
It took two or three years for the list to find its footing. Initially, a number of manufacturers were reluctant to provide the necessary data....only those who were sure they would have a good showing submitted their data to us...Some companies don’t want to be listed because they see their systems as giving them a competitive advantage and don’t want their competitors to know either the size or type of their machines... some centers are conducting classified research and say, ‘Thou shall not publish our system.’
Some institutions are reluctant to devote their entire supercomputer to running the Linpack benchmark. Linpack, they said, does not represent a real workload and therefore skews the performance levels.
“That’s all in the spirit of the game—we have a number of big players, but also many of the smaller players are very proud, and that shows how important HPC has become to the research community” 
To get to where it is today, TOP500 ran the same roller coaster as the TSG Hoffenheim soccer team. Hans Meuer and his partners created the TOP500 ex nihilo many years before Lord Laird’s witticism: “If we don’t know who you are, or what it is you want, then that is entirely your own fault!”

Regarding Linpack: sure, the benchmark has limitations. It scales very well, but it is not, per se, a guide to selecting a supercomputer. The ideal supercomputer for you is the one that best runs the applications you are going to use, within your budget. In those terms, a machine ranked 450 can be much better for you than one ranked 400.
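The point can be sketched with a toy comparison; every number below is invented for illustration, not taken from any real TOP500 entry:

```python
# A hypothetical pair of machines: "Machine A" ranks higher on Linpack
# (a lower TOP500 rank number is better), but "Machine B" runs your own
# application faster. Rank does not pick the machine; your workload does.
machines = [
    # (name, TOP500 rank, your-application throughput in jobs/hour)
    ("Machine A", 400, 120.0),
    ("Machine B", 450, 180.0),
]

by_linpack = min(machines, key=lambda m: m[1])   # best (lowest) rank
by_your_app = max(machines, key=lambda m: m[2])  # fastest for your jobs

print(by_linpack[0])   # → Machine A
print(by_your_app[0])  # → Machine B
```

The two selections disagree, which is exactly the "450 can be better than 400" argument.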

The main virtue of Linpack is its proven ability to forecast the future of HPC.

Figure 5: The well-known graph of Moore's Law for supercomputers

Miha: How would you explain this success in 2013 to a group of young people?

Hans: The success of the TOP500 is based on the fact that it has been the only tool available for evaluating the HPC market for 20 years, and that from the very beginning we introduced competition on different levels: between manufacturers, between countries, and between sites. People like competitions because they like sports.

ISC'13 will host the second HPCAC-ISC Student Cluster Challenge, one of the most popular events for young people aspiring to become HPC gurus. In April 2013, the Asia Student Cluster Challenge (ASCC) held a competition to decide the two teams that will travel to Leipzig, joining teams from the USA, Scotland, South Africa, Germany and Costa Rica.

Ant Algorithms, Non-Centric HPC, Big Data, Bosco

Miha: IDC predicted in 2010 that in 2013 "most of the biggest, baddest supercomputers are architectural clusters or x86 MPPs with bulked-up interconnects and support for MPI or PGAS languages." IDC calls this "evolutionary change." What about some revolutionary change?

Hans: If revolutionary change means the availability of GPGPUs, then we already have the revolution. I doubt there will be any manufacturer producing chips only for use in HPC systems; the market is not big enough. But there will be developments like Intel's MIC multiprocessors and further developments of GPGPUs - not to forget IBM and Fujitsu - that will drive the HPC performance increase, but in an evolutionary way.

Miha:  You often said multi-core processors will be significant in HPC. In what way?

Hans: Multi-core processors are the basis of all HPC-systems worldwide. This will not change in the near future because currently it’s the only way to speed up system performance. Therefore, we will see HPC systems with millions of cores. The real problems with this extremely large number of cores are:

  • Memory bandwidth can't cope with processor speed.
  • Programming millions of cores is becoming a nightmare.

Miha: David Ungar from IBM, a leading researcher in "many-core" processor programming, proposed doing away with node synchronization and determinism. He abolishes "our cherished assumption that we write programs that always get the exactly right answers." Will this be applicable in HPC?

Horst: The title of your reference - Many-Core Processors: Everything You Know (about Parallel Programming) Is Wrong! - is revealing. A few comments:
  • If you have an application that runs on 100 cores with acceptable performance, and running it on more than 100 cores brings no improvement, then I would say: the app is limited to 100 cores and there is nothing wrong with that.
  • Programming without any synchronization is counter-intuitive, and not only from a mathematical point of view. If two cores are solving one problem, in 99% of all cases there will be some synchronization between them. Otherwise, the two are solving different problems that have nothing in common.
  • Example: if you and I each run a search on the Web, our requests have nothing to do with each other; no synchronization is required.
But if an app has to search a tree and the search is split across two cores (each core responsible for different branches of the tree), then at the end both cores have to synchronize to show the result.
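Horst's tree example can be sketched in a few lines of Python; the tree, the target value and the two-worker split are all hypothetical:

```python
# Two workers search disjoint branches of a tree independently; they must
# synchronize exactly once at the end, to join the partial counts.
from concurrent.futures import ThreadPoolExecutor

def search(branch, target):
    """Depth-first count of target in a nested-list 'tree'."""
    hits = 0
    for node in branch:
        if isinstance(node, list):
            hits += search(node, target)
        elif node == target:
            hits += 1
    return hits

def parallel_search(tree, target):
    left, right = tree[: len(tree) // 2], tree[len(tree) // 2 :]
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_left = pool.submit(search, left, target)
        f_right = pool.submit(search, right, target)
        # The synchronization point Horst describes: both partial results
        # must be joined before the final answer exists.
        return f_left.result() + f_right.result()

tree = [1, [2, 7, [7]], [3, [7, 4]], 5, 7]
print(parallel_search(tree, 7))  # → 4
```

Each branch is searched without any communication; only the final join is synchronized.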

Miha: What about the ant colony optimization (ACO) algorithm and other algorithms that will thrive on many-core processors? The Renaissance project deals with this topic; IBM Research, Portland State University and the Vrije Universiteit Brussel jointly sponsor it.

Horst: Many-core systems are not only suitable for ant algorithms; that is just an obvious fit. Many-core systems are the basis for most of the technical and scientific applications that exist, including Big Data algorithms.

Ant algorithms are suitable for combinatorial optimization problems, e.g. the Travelling Salesman Problem. The theory behind them is heuristic optimization, meaning it cannot be guaranteed that an optimal solution exists, or the optimal solution cannot be found in an acceptable time.

If you look at Wikipedia, ants use the environment as a medium of communication. They exchange information indirectly by depositing pheromones detailing the status of their "work". The information exchanged has local scope: only an ant located where the pheromones were left has any notion of them.
Even here, the phrase "medium of communication" is mandatory.
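To make the pheromone "medium of communication" concrete, here is a minimal ant colony sketch for a tiny four-city Travelling Salesman instance. It is an illustration only, not the Renaissance project's code; the colony sizes, rates and distances are all made up:

```python
# Ants never talk to each other directly: they only read and update a
# shared pheromone matrix (evaporation + deposits), the indirect, local
# information exchange described above.
import random

def aco_tsp(dist, n_ants=20, n_iters=50, rho=0.5, seed=1):
    random.seed(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]        # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                i = tour[-1]
                # next city chosen with probability ~ pheromone / distance
                cities = list(unvisited)
                weights = [tau[i][j] / dist[i][j] for j in cities]
                j = random.choices(cities, weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit: shorter tours leave stronger trails
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len

# Four cities on a unit square; the optimal tour walks the perimeter (length 4).
square = [[0, 1, 1.414, 1],
          [1, 0, 1, 1.414],
          [1.414, 1, 0, 1],
          [1, 1.414, 1, 0]]
tour, length = aco_tsp(square)
print(length)  # → 4.0
```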

For my part, the only synchronization-free algorithm I know is "video on demand": two viewers, even if they watch the same movie at the same time, are totally independent of each other. And if the bandwidth is sufficient to access the same copy of a movie twice, then you don't need any communication between the two viewers (at the system level).

Miha: What about Big Data and many-core processors?

Horst: Multi-core processors and GPUs have turned almost any computer into a (heterogeneous) parallel machine, pushing compute clusters and clouds. It is no secret that general multi-core systems are often overloaded by Big Data analytics. One alternative would be data centrism: the memory is in the center and the CPUs at the periphery work directly on the memory, thus avoiding data transfer. Realizing this alternative is not easy, but 2020 seems a reasonable deadline.

Miha: IDC predicts in 2013 "HPC Architectures Will Begin a Long-Term Shift Away from Compute Centrism." Do you agree?

Hans: A long-term shift of HPC architectures away from compute centrism seems to be a must. Today one has the CPUs/cores in the center and the memory at the periphery, which means data must always be transferred to the center for calculation. But the data transfer rate is limited (the memory bottleneck): existing HPC systems can transfer less than one byte per floating point operation.
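The bytes-per-flop arithmetic behind Hans's remark is easy to sketch; the bandwidth and peak figures below are invented for illustration, not measurements of any real system:

```python
# Balance ratio: sustainable memory traffic divided by peak compute.
# Below 1 byte/flop, the cores starve waiting on memory.
def bytes_per_flop(mem_bandwidth_gbs, peak_gflops):
    """Memory bandwidth (GB/s) per unit of peak compute (GFLOP/s)."""
    return mem_bandwidth_gbs / peak_gflops

# e.g. a hypothetical node with 100 GB/s of bandwidth and 500 GFLOP/s peak:
ratio = bytes_per_flop(100, 500)
print(ratio)  # → 0.2 bytes per flop: the memory bottleneck
```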

Miha: Have you heard of Bosco? We made this tool to make scientists more comfortable using clusters; everyone prefers a Mac to wrestling with a cluster. Do you see a need for it in HPC?

Hans: What we at ISC have heard about Bosco is really great, and we will see how it spreads through the HPC community. It really seems to make life easier for researchers submitting jobs to remote clusters. We will think about having a session on this topic at ISC'14. We are absolutely sure there is a need for such a tool in the HPC environment.

Miha: High Throughput Computing (HTC) recently made headlines as it contributed to the Higgs particle big-data research at CERN. Many think HTC and HPC are converging. How do you see it happening?

Hans: The problem is the word "converging". In the future there will be a lot of HPC applications (as there are already today) where numerically intensive calculations are executed on a vast amount of data, e.g. combustion calculation in an engine.

HTC calculations operate on extremely large datasets but execute (in general) only a few numerical calculations on them; think of search engines, or the Big Data research at CERN on the Higgs particle.

Now comes the coupling - not the convergence - between HTC and HPC. In the future, HTC and HPC will be strongly coupled for Big Science. You should attend ISC'13, where we have established a session on exactly these topics.

Miha: Have you seen this University of California San Diego (UCSD) press release? They used Bosco to link the HPC Gordon supercomputer to the Open Science Grid (OSG), an HTC resource. The results improved in a spectacular manner.

Hans: I would love to cover this topic at the ISC Big Data'13 conference in Heidelberg, September 25 and 26, 2013. Sverre Jarp from CERN is the conference chair. We have just started preparing this event.

The Fascinating Leipzig

Miha: Regarding ISC'13 in Leipzig: why Leipzig? It seems a town that inspires, and supercomputing people are incurable dreamers.


Fig. 6  Steven Black 2004.02, oil on canvas, 2004, 39'37" x 59'06" courtesy Galerie Saheb
New York Academy of Art 
http://nyaa.edu/nyaa/exhibitions/past/leipzig.html



Hans: Spiegel magazine says Leipzig is the new Berlin:
"Berlin used to be Germany's hippest city, but the once scruffy capital has long since succumbed to gentrification. The latest city to attract the creative class is the former East German industrial seat of Leipzig. Moving in by the thousands, they are lured by the euphoric buzz of cheap rent and youthful ingenuity.
Before the sun sets, it pierces the clouds once again as a glowing red orb. People stream from turn-of-the-century villas and communist-era concrete apartment complexes and rush to the park. Adventurers and hedonists, painters, students, punks and Internet entrepreneurs come alone and in groups, on bicycles and skateboards, with guitars and cases of beer tucked under their arms."
In November 2012, Green Globe designated the Congress Center Leipzig the Best Congress and Convention Center in Europe. The ceremony took place at the Business Destinations Travel Awards 2012 in London. Watch the amazing slideshow to see why.

The quintessential Leipzig 2013
The ISC'13 web site has more information on the City of Leipzig.




Saturday, May 11, 2013

Waxahachie, Fermilab and President Obama

This is a quote from my article HTC, Big Data and the God Particle:
Fermilab in Batavia, about 35 miles west of Chicago, was the leading high energy physics facility. The Large Hadron Collider (LHC) at CERN was built from 1998 to 2008 and started operations in 2009. Before the LHC, Fermilab's Tevatron circular particle collider was the biggest in the world. For about thirty years it was the number one particle collider serving high energy physics, where the need for big data was prevalent long before the term became a media buzzword. On September 30, 2011, the Tevatron was closed during an emotional ceremony.
This was not the initial scenario. In his 1993 book The God Particle: If the Universe Is the Answer, What Is the Question?, Leon Lederman, Fermilab's director emeritus at the time, wrote the following prophecy in his imaginary Bible:
And it came to pass, as they journeyed from the east, that they found a plain in the land of Waxahachie, and they dwelt there. And they said to one another, Go to, let us build there a Giant Collider, whose collisions may reach back to the beginning of time. And they had superconducting magnets for bending, and protons had they for smashing.
And the Lord came down to see the accelerator, which the children of men builded. And the Lord said, Behold the people are confounding my confounding. And the Lord sighed and said, Go to, let us go down and there give them the God Particle, so that they may see how beautiful is the universe I have made.     The Very New Testament, 11:1
According to Wikipedia,
The Superconducting Super Collider (SSC) (also nicknamed the Desertron[1]) was a particle accelerator complex under construction in the vicinity of Waxahachie, Texas, that was set to be the world's largest and most energetic, surpassing the current record held by the Large Hadron Collider. Its planned ring circumference was 87.1 kilometres (54.1 mi) with an energy of 20 TeV per proton... The project was cancelled in 1993 due to budget problems.
The SSC would have been three times larger than the current CERN Large Hadron Collider (LHC).

Leon Lederman was a prominent early supporter and architect of the SSC. President Clinton tried to stop the cancellation of the project by asking Congress to continue "to support this important and challenging effort" through completion, because "abandoning the SSC at this point would signal that the United States is compromising its position of leadership in basic science".

When the project was canceled, 22.5 km of tunnel and 17 shafts to the surface had already been dug.


In 2006, the site was bought by billionaire Johnnie Bryan Hunt for 6.5 million dollars. A former truck driver, Hunt was known throughout his life for his generosity, and was famous for carrying around a money clip of $100 bills, which he would hand out to people he felt could use the money. Six months later, he slipped on a patch of ice, fractured his skull and died.

J.B. Hunt

Since his death, the SSC site in Waxahachie has been deteriorating day by day.

The total cost of the SSC was initially estimated at $4 billion. Cost overruns increased this amount to $14 billion.

The property is now for sale on an “as-is basis.” “Owner is motivated to sell the property to a qualified investor who offers the most competitive terms including price and closing contingencies,” the advertisement for the property says.



The annual cost of the war in Iraq was $150 billion in 2011. The SSC could have been built for a cost equivalent to five weeks of the war in Iraq.

Public opinion associated high energy physics with nuclear weapons. Here is a photo of an inept idea, an atomic big gun, which one can still buy as a toy kit from the National Museum of Nuclear Science and History.



The discovery of the Higgs particle would have been delayed had the LHC not existed. This Big Science is not about nuclear weapons. It is about huge progress for humanity, including computer science, big data, materials engineering, High Throughput Computing and High Performance Computing. It is about the huge know-how Fermilab and the US HEP community accumulated over the years. The SSC cancellation left our top US scientists 100% dependent on facilities available outside the country.

The Large Hadron Collider is closed for two years for repairs. The LHC and CERN are great contributions to science from the European Union. But the United States shares an obligation, along with Europe, to make more substantial contributions and revive the leadership that both Europe and America share.

The discovery of the Higgs, the God particle, is a wake-up call. Come on, America, wake up!

If we made the Olympic Games profitable for the first time in history, and if we boast the best business schools in the world (Harvard, Stanford, Wharton and the like), why can't we build a revised Superconducting Super Collider 2 (SSC2) project in a profitable way? One possibility is to create an international corporation open to by-invitation investments from all over the world.

If I were President Obama, I would ask a consortium of research, academia and business (what a pity J.B. Hunt passed away too early) to set up a feasibility committee for a state-of-the-art SSC2.

I would place the leadership of this effort at Fermilab: a super collider designed from the huge experience accumulated with the Higgs boson discovery.

But I am not President Obama. I am just a blogger with an outsider's insight.

Most presidents build libraries. America has many great libraries. But as we live in the times of e-readers and the Kindle, it is a good time to look at alternatives. After the closure of the Tevatron, America does not have a single high energy collider capable of handling the Big Science of the 21st century.

There is a Japanese word, "sokojikara." Applied to the US, it means a reserve power that allows the country to overcome the inadequacies of its leaders and the foibles of its citizens.

Somebody has to create the plan for SSC2: the special people whom America bred or attracted from all over the world. Some are well known. Some are still underdogs.

Presidents sign documents when the plan is ready. All they have to do is say Yes! and sign. It takes some politics, diplomacy, technical genius, public relations and grassroots support to make it happen.

President Obama reiterated (April 29, 2013) his strong support for science and technology to members of the National Academy of Sciences at its 150th annual meeting. Science, technology, engineering, and medicine are critical to the nation’s prosperity, Obama said, noting that investments made today are bound to pay off for many years to come.

Just bookmark this blog.

Monday, May 06, 2013

Paradyn/HTCondor Week 2013. The fulfillment of dreams.

Which dreams, you may ask? 

Leon Lederman's dream

One is the dream of Leon Lederman, Ph.D., winner of the 1988 Nobel Prize in Physics and director emeritus of Fermilab in Batavia, Illinois. He describes it in his 1993 book The God Particle: If the Universe Is the Answer, What Is the Question?. While still in his pajamas, Lederman meets the Greek encyclopedic philosopher Democritus.


The dream takes place some time before 1993.
Lederman: We suspect that mass comes from a field.
Democritus: A field?
Lederman: Our theoretical physicists call it the Higgs field.
Democritus: Higgs? What is a field?
Lederman: The field is represented by a particle we call the Higgs boson.
Democritus: And have you found this Higgs particle in your accelerators?
Lederman: Well, no.
Democritus: So where did you find it?
Lederman: We haven't found it yet. It exists only in the collective mind of physicists.
Democritus: So why do you believe in it?
Lederman: Because it has to exist... I suggest you come back here to Batavia in 1995.

Miron Livny's dream

The concept of HTC (High Throughput Computing) germinated in the work of Professor Livny at the University of Wisconsin-Madison in 1983. At HTCondor Week 2013 they celebrated a great victory. Here is what HTCondor did:


Is this Big Data? If you could collect $1 for each collision, the one million billion dollars would be about 14 times the Gross World Product of 72,000 billion dollars in 2012.
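The back-of-the-envelope arithmetic checks out (using the figures in the text):

```python
# One million billion collisions at $1 each, versus a 2012 Gross World
# Product of $72,000 billion.
collisions = 1e6 * 1e9           # one million billion = 1e15
dollars = collisions * 1.0       # $1 per collision
gwp_2012 = 72_000e9              # $72,000 billion = $7.2e13
print(round(dollars / gwp_2012, 1))  # → 13.9, i.e. roughly 14x
```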

This is one important segment of the team of computer scientists that helped make the discovery of the Higgs boson, aka the God Particle.

HTCondor Team 2013. Seated are Todd Tannenbaum and Miron Livny
Leon Lederman invited Democritus to Batavia too early. He said it would be 1995. It was not 1995 but 2012, seventeen years later. It took much, much more than the custom-made DAQ (Data Acquisition Computer) of 1993 Fermilab, which seems a midget compared to the computer resources that OSG, using HTC, can locate, mobilize and utilize today from all over the world.

How does it feel to attend a Condor Week?

This is an event that has soul. We cut a huge cake for Miron Livny's birthday. 

The features of the next HTCondor 8 were presented as a cooking competition, complete with chef's aprons, like a TV show.

The amenities at the Morgridge Institute were state of the art and its architecture inspiring.

The idea of Bosco, which is part of the download starting with HTCondor 7.9.4+, is to make Condor power accessible to anyone with a laptop. One reason big media like the New York Times have not yet come to interview people from the Center for High Throughput Computing is that HTCondor is not as easy to use as a Mac. This will change; fasten your seat belts.
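For readers who have never seen HTCondor, a minimal submit description file, the kind of job Bosco can forward from a laptop to a remote cluster, looks roughly like this (the file names are hypothetical):

```
# hello.sub - a minimal HTCondor submit description file
universe   = vanilla
executable = hello.sh
arguments  = $(Process)
output     = hello.$(Process).out
error      = hello.$(Process).err
log        = hello.log
queue 5
```

It is submitted with `condor_submit hello.sub`, and `queue 5` enqueues five instances of the job, each with its own `$(Process)` number.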

Jason Stowe from Cycle Computing said that in Big Science, applications with high parallel throughput grow much faster than MPI applications.

This decade we have a plethora of new names: exa-scale, Big Data, Software Defined Networks (SDN), dark data, spot pricing, Advanced Computing Infrastructure (ACI), and so on.

HTC and HTCondor will evolve. Miron Livny said:
“...  we have identified six key challenge areas that we believe will drive HTC technologies innovation in the next five years.”
  • Evolving resource acquisition models
  • Hardware complexity
  • Widely disparate use cases
  • Data intensive computing
  • Black-box applications
  • Scalability
The success in the discovery of the Higgs particle shows how valuable Fermilab's know-how was for advances in computer science and in physics. This collaboration must continue, and will continue, in a newly created symbiosis.

This is where the project dV/dT: Accelerating the Rate of Progress towards Extreme Scale Collaborative Science starts.
And here is the man with the dream behind the God Particle, who indirectly motivated the whole thing.

Leon Lederman, Nobel Prize 1988
Immediately after HTCondor Week, I visited OSG at Fermilab. More about it in a future entry.

Additional references:

1. Marco Mambelli's blog, with notes from Paradyn/HTCondor Week 2013
