Monday, April 29, 2013

Product Creation: Bosco as a single interface to access both HPC Supercomputers and HTC clusters

What is the role of a product manager in open source teams? The lean startup principles apply, but instead of paying users we have scientists competing to get results first.
Risk increases if one does not discover what future users really want
We found out that a team at the University of California San Diego (UCSD) used Bosco to connect SDSC's Gordon supercomputer to the OSG grid in order to rapidly process raw data from almost one billion particle collisions, as part of a project to help define the future research agenda for the Large Hadron Collider (LHC). The press release was titled "SDSC's Gordon Supercomputer Assists in Crunching Large Hadron Collider Data."
According to Lothar Bauerdick, OSG Executive Director:
"Giving us access to the Gordon supercomputer effectively doubled the data processing compute power available to us."
Dan Fraser and I called Frank Wuerthwein, who is the soul of this project and a top researcher in the field of dark matter.

We discovered that Bosco, off the shelf, needed some customization. Why? Because Bosco was never designed for the double purpose of accessing both HPC (High Performance Computing) and HTC (High Throughput Computing) resources. Derek Weitzel from the Bosco team helped the team in San Diego. How Frank W. and the team made Gordon work could be the subject of an amazing article. They did it, and without that engineering creativity no product manager can pull off a solution. What the product manager can do is find out what the scientists really want. We go back and forth between developers and users until we know what to deliver with minimum risk.

Risk is lower if we frequently ask Joe users what they really want
Gordon is part of XSEDE (Extreme Science and Engineering Discovery Environment), which consists mainly of supercomputers. The name is impressive, but intimidating for a talented user with a laptop, usually a Mac, trying to reach Gordon and the like.

Self-explanatory.

I wrote to the team:
Can we work closely with the UCSD team to make a Bosco-based two-way interface to submit jobs from OSG to designated XSEDE resource(s)? This is a special use case that perhaps can be applied in other situations.
Dan Fraser has a special talent for finding, from a pool of talented engineers, the ones with the synergy to make a functional team ready to deliver. He recruited Mats Rynge, who wrote the final abstract for the poster we will present at XSEDE13.

Excerpts
"Moving job submission and management close to the user and to systems they are already familiar with, will make using XSEDE resources accessible to users who might not have much UNIX and HPC experience. The model also makes it easier for users to access local campus clusters one day and then an XSEDE resource the next one. Bosco can also provide an interface to XSEDE for gateways and other portal systems.
In this Poster we demonstrate how users can easily download, install, and use the Bosco capability for managing distributed computing jobs from their desktops."
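For readers who want to picture what "download, install, and use" looks like from a desktop, here is a minimal sketch in Python. It assumes Bosco's documented bosco_cluster --add step and a standard HTCondor condor_submit; the account, login node, batch system, and job script below are hypothetical placeholders, not a recipe for any specific site.

import subprocess
from pathlib import Path

# Hypothetical remote login node and batch system; substitute your own.
REMOTE = "jdoe@gordon-ln1.sdsc.edu"
BATCH = "pbs"  # Bosco also supports other batch systems

# 1. Register the remote cluster with Bosco (done once per resource).
subprocess.run(["bosco_cluster", "--add", REMOTE, BATCH], check=True)

# 2. Write a minimal HTCondor submit file that routes the job to the
#    remote batch system through the grid universe.
Path("myjob.sub").write_text(
    "universe      = grid\n"
    f"grid_resource = batch {BATCH} {REMOTE}\n"
    "executable    = analyze.sh\n"
    "output        = job.out\n"
    "error         = job.err\n"
    "log           = job.log\n"
    "queue\n"
)

# 3. Submit from the desktop; Bosco transfers and manages the job, so
#    the user never logs in to the cluster by hand.
subprocess.run(["condor_submit", "myjob.sub"], check=True)

Swapping the grid_resource line for a campus cluster's address is what would let the same desktop workflow target a local cluster one day and an XSEDE resource the next, as the abstract describes.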

 The poster was accepted. Here are the reviewer comments.

----------------------- REVIEW ---------------------
PAPER: 243
TITLE: Bosco - A Simple Interface for managing jobs on both XSEDE and Campus computing resources
AUTHORS: Derek Weitzel, Daniel Fraser, Miha Ahronovitz and Mats Rynge
----------- REVIEW 1 -----------
Poster will present Bosco, a tool for bridging the gap between desktop users and HPC resources.  Such tools are key to making cyberinfrastructure more accessible to a broader audience, and the abstract indicates that this application is fully featured and ready for XSEDE users to install and use.
----------- REVIEW  2 -----------
This abstract describes an interface to submit tasks to cluster computing, regardless if it is a HPC in your institution or XSEDE resources. Apparently, Bosco provides the easy to use interface to submit tasks to heterogeneous resources and even facilitates to switch a task from one HPC to another. This topic is indeed relevant to the XSEDE 13 because creates a transparency to use HPC resource for new scientist that needs to interact with XSEDE resources.
----------- REVIEW 3  -----------
A tool that allows users easy access to multiple types of resources  - including different OS versions - is always of interest to this community.

A quote from the interview with Hans Meuer, the International Supercomputing Conference (ISC'13) General Chair, also shows the hope of making access to the TOP500 supercomputers easier for ordinary human beings:
Miha: Have you seen this University of California San Diego (UCSD) press release? They used Bosco to link the HPC Gordon supercomputer to OSG (Open Science Grid), an HTC resource. The results improved in a spectacular manner.
Hans: And I would love to cover this topic at the ISC Big Data'13 conference in Heidelberg, September 25 and 26, 2013. Sverre Jarp from CERN is the conference chair. We have just started the preparation of this event.



Monday, April 22, 2013

A conversation with Dr. Hans Meuer and Dr. Horst Gietl. See HPCwire article



Here are excerpts from an article I wrote for HPCwire. 


About one year ago, in April 2012, the House of Lords invited the International Supercomputing Conference (ISC'13) General Chair, Prof. Dr. Hans Meuer (Prometeus and University of Mannheim), for a presentation with a rather provocative title: Supercomputers - Prestige Objects or Crucial Tools in Science and Industry.

Dr. Meuer co-authored the paper with Dr. Horst Gietl, Executive Consultant, Prometeus.

Why would the venerable House of Lords be interested in supercomputing? The 2nd Lorraine King Memorial Lecture was hosted by Professor the Lord Laird of Artigarvan, a former computer programmer.

Dr. Hans Meuer, center, at the House of Lords. Lord Laird is on the right
Lord Laird invited a worldwide expert, Dr. Meuer, who is not British but German. In the absence of a Nobel Prize for computer science, the Lorraine King Memorial Lecture may become, why not, one of the more prestigious events honoring the great men and women advancing the computer industry.

The distinguished audience learned that the UK ranked 4th in the TOP500 list of supercomputer-using countries and that France was the only European country with any capability to manufacture supercomputers. With true British sense of humor, the Lords' reaction is fittingly described by one blogger reporting on the event:
Clearly more needs to be done by the likes of the UK or Germany to remain competitive in the Supercomputing stakes, which begged the question, (as posed later by an attendee), of whether these machines were nothing more than objects of geopolitical prestige, superiority and / or bragging rights, (e.g. My Supercomputer is faster than yours, so Nyah-nyah, nyah-nyah nyah-nyah!)
Lord Laird summarized this by saying that the computer industry has "a certain lack of visibility":

 "If we don’t know who you are, or what it is you want, then that is entirely your own fault!

The Conversation. Some Excerpts

Miha: IDC predicts in 2013 "HPC Architectures Will Begin a Long-Term Shift Away from Compute Centrism." Do you agree?

Hans: The Long-Term Shift of HPC architectures away from Compute Centrism seems to be a must. Today, one has the CPUs/cores in the center and the memory at the periphery. This means one always has to transfer data to the center to do the calculation. But the data transfer is limited (the memory bottleneck). Existing HPC systems can transfer less than one byte per floating point operation.
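To put "less than one byte per floating point operation" in perspective, here is a back-of-the-envelope calculation in Python. The bandwidth and peak numbers below are hypothetical, chosen only to illustrate the ratio, not measurements of any particular system.

# Illustrative bytes-per-flop balance for a hypothetical compute node.
peak_flops = 500e9       # 500 GFLOP/s peak, hypothetical
mem_bandwidth = 100e9    # 100 GB/s memory bandwidth, hypothetical

bytes_per_flop = mem_bandwidth / peak_flops
print(f"{bytes_per_flop:.2f} bytes per flop")   # prints 0.20

# A double-precision operand is 8 bytes, so each value fetched from
# memory must be reused many times in cache or registers; otherwise
# the cores sit idle waiting for data.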

Miha: Have you heard of Bosco?  We made this tool to make scientists more comfortable using clusters. Everyone prefers a Mac to working with a cluster. Do you see a need for it in HPC?

Hans: What we at ISC have heard from Bosco is really great and we will see how it will spread over the HPC-community. It really seems to make life easier for researchers to submit their jobs to remote clusters. We will think of having a session about this topic at the ISC’14. We are absolutely sure that there is a need for such a tool in the HPC-environment.

Miha: High Throughput Computing (HTC) recently made headlines as it contributed to the Higgs particle big-data research at CERN. Many think HTC and HPC are converging. How do you see it happening?

Hans: The problem is the word ’Converging’. In the future there will be a lot of HPC-applications (as it is already today) where numerically intensive calculations are executed on a vast amount of data; i.e. combustion calculation in an engine.

HTC calculations will operate on extremely large datasets but are executing (in general) only few numerical calculations on them, i.e. take the search engines and the Big Data research at CERN for the Higgs particle.

Now the coupling - not the converging - between HTC and HPC is coming. In the future HTC and HPC will have a strong coupling for Big Science. You should attend ISC’13, where we have established a session exactly for these topics.

Miha: Have you seen this University of California San Diego (UCSD) press release? They used Bosco to link the HPC Gordon supercomputer to OSG (Open Science Grid), an HTC resource. The results improved in a spectacular manner.

Hans: And I would love to cover this topic at the ISC Big Data'13 conference in Heidelberg, September 25 and 26, 2013. Sverre Jarp from CERN is the conference chair. We have just started the preparation of this event.

The quintessential Leipzig
The International Supercomputing Conference 2013 will take place in Leipzig.

The Congress Center Leipzig is designated as the Best Congress and Convention Center in Europe. Watch the amazing slideshow to see why.

HPCwire link:

A Conversation with Dr. Hans Meuer and Dr. Horst Gietl 

Apr 22, 2013 | Gearing up for ISC'13, contributor Miha Ahronovitz examines where HPC has been and where it's headed. In this in-depth interview, Drs. Hans Meuer and Horst Gietl recount the origins of the TOP500 list and wax prophetic on the future of supercomputing, including the peril and promise of parallel programming as well as the long-term shift away from compute centrism.
Read more...

Saturday, April 20, 2013

Open Science Grid All Hands Meeting, 2013, Indianapolis.

Open Science Grid All Hands meeting, 2013. I am seated, in a black shirt, third from the right.
Next to me, in a white shirt, is Daniel Fraser. Sixth from the right is Prof. Miron Livny.


A picture is worth a thousand words. Click on the photo to enlarge.

This photo brings together some of the brightest minds in Computer Science and High Throughput Computing in the nation. It is a great honor to be in this photograph as a fan and admirer.

Tuesday, April 09, 2013

A conversation with Dr. Mine Altunay, Security Officer at Open Science Grid (OSG)

Dr. Mine (pronounced Meené) Altunay is the OSG's (Open Science Grid) Security Officer.

In most people's minds, a "security officer" is a tough man or woman in a uniform. This is one image Google shows when searching for these words:


I retried the search with the words "OSG security officer" and Google had an "Aha" moment:



Mine and I met at the Open Science Grid All Hands meeting in Indianapolis in March this year. She holds a PhD in Computer Engineering from North Carolina State University. She has a youthful energy, and she talks with that rare competence of people who know what they are doing. She is both modest and professional. Yet she makes everyone feel that the knowledge comes from them. This is a key ingredient in getting the men and women on her team to be on her side and collaborate willingly.

Mine Altunay, Ph.D.
Q: Mine, how would you describe your job at Fermilab?
A: I am an employee of Fermilab, where I head the CSR (Collaborative Security Research) group, reporting directly to the CIO, Irwin Gaines.


Q: And at OSG?
A: I am also "on loan," so to speak, to be the Security Officer of the Open Science Grid. Fermilab is both a Site and a VO (Virtual Organization) within the Open Science Grid.

Fermilab is one of the premier nuclear research and high energy physics institutions in the world. This is where Leon Lederman won the Nobel Prize and coined the name "God Particle" for the Higgs boson. From 2000 until 2009, Fermilab's Tevatron collider, the largest in the world at that time, led the search for the Higgs boson. Fermilab and OSG management have both entrusted Dr. Mine Altunay's team with defending their strategic digital assets for the last seven years.

In an article earlier this year, Computing Bits describes the CMS (one of the detectors of the Large Hadron Collider at CERN) security drill, pointing out the key function Mine has:
In a bank, for example, high financial stakes form an attraction for professional hackers; conversely, the highly valuable scientific data here is too complex to be subject to a simple heist for a quick buck. So, why all this effort? First, it is important that scientific data remains secret before collaborations decide to announce their results in order to protect the integrity and validity of the analysis and the reputation of the collaboration. Secondly, amateur hackers, third-party software and leaks pose security risks. Moreover, says Altunay, “Our biggest concern is losing production time, losing effective use of our resources,” as well as a potential detrimental impact on the collaboration’s reputation.
Q: You mentioned Sites and VOs. What is the difference?
A: A Site is a collection of resources. A VO is a super-set of a site; it can include users, science projects, experiments, collections, and other sites. OSG provides security services for all the VOs and Sites, yet each VO has its own local security with its own policies. We take care of the security of Open Science Grid software and policies. But there are common infrastructure needs, like identity management for all VOs, as each has its own policies for who can access its data. We can help by providing operational security: compliance, oversight, and assessment. We recommend, not impose. We make sure all risks are lower than the threshold tolerable under OSG's policies.



Fig. 1: The essential scientific paper "A Science Driven Production Infrastructure at OSG," published in 2011. Mine is among the authors.

Q: You said risks lower than... lower than what?
A: It is not an exact scientific formula. We have a security team deciding how much is good enough.

“A feeling of false safety is much more dangerous than always being on our toes,” Altunay says.
“So far we’ve never had an incident that has prevented us from running, but this doesn’t mean we don’t have vulnerabilities. We are constantly thinking of our response to potential incidents: Will our communications channels be open? Is our technical knowledge up to date? Will everybody know what to do?”
Digital Preservation Management Workshops and Tutorial is an online IT security 101 course. Quote:
Devise a “what if” scenario based on a worst-case situation and evaluate your institution's capabilities to protect digital assets

Q: What is the most important "to-do" for the safest grid possible?
A: I start by asking myself: "What is the usage of your most precious assets? What do you want to protect? What are the things that you are NOT going to tolerate?" I start from here because it is impossible to completely eliminate the risk of every possible attack; you will waste your time doing that. Which are the assets we select for maximum security, and around which we place various security layers? This is part of our core activity. It stretches us to discover novel ideas all the time.

Q: What about users' reactions?
A: Certificates are not popular with mainstream OSG users, because they are difficult to learn and manage. No one uses certificates for Google, Facebook, and so on, so why should they use them in OSG? We want to make the certificate process completely transparent to the user. One way to achieve this, and we are working on it now, is for the OSG portal to authenticate users based on usernames and passwords, and then automatically produce the certificates for the user. Most VOs have their own front end, and they can run this sort of software once we have it ready.
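As a rough illustration of the flow Mine describes, here is a minimal Python sketch of a portal that logs a user in with a username and password and then obtains a certificate on the user's behalf. Every function here is a hypothetical stand-in; it does not name any real OSG service or API, it only shows where the certificate handling moves once it becomes transparent to the user.

# Hypothetical sketch of a certificate-transparent portal login.
# The helpers below are placeholders, not real OSG components.

def verify_password(username: str, password: str) -> bool:
    """Check the credentials against the portal's identity store (stub)."""
    return username == "alice" and password == "correct-horse"

def request_user_certificate(username: str) -> str:
    """Ask a hypothetical online CA for a short-lived user certificate."""
    return f"-----BEGIN CERTIFICATE----- short-lived cert for {username}"

def portal_login(username: str, password: str) -> str:
    """Username/password up front; the certificate is obtained and kept
    server-side, so the user never installs or manages it."""
    if not verify_password(username, password):
        raise PermissionError("invalid username or password")
    certificate = request_user_certificate(username)
    # The portal attaches this certificate to the user's grid jobs,
    # which preserves the traceability Mine mentions later in the interview.
    return certificate

print(portal_login("alice", "correct-horse")[:40])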


Q: Does this mean the new sign-on will unburden users from certificate headaches?
A: Yes. We developed a single sign-on solution using certificates for access. In addition, each VO still has its local access restrictions in place. This will be a win-win solution.


Q: If you had a magic wand, what would you like it to do?
A: I would use it to make the certificates go away! :-) We are getting there, but each front end has its own restrictions and is evaluating the certificate-less solution. We are also evaluating it in OSG itself. It is a big change that affects our entire set of security principles. Take user traceability, for example: if you don't issue any certificate, how am I going to trace that user? If we don't have the certificate, then what can we put in place as a token to achieve this important traceability?


Q: Is this a new research topic?
A: Our research focuses on what happens when we take away the certificates, as an experiment. I think the technology is there, and this is what I am working on right now. The secret is to get rid of certificates in a secure manner for the end users. We will still be using certificates for other things, like service certificates between machines, but our main focus is to make user certificates transparent.


Q: This is leading edge...
A: It is research, as opposed to operational day-to-day security assurance. We get a lot of encouragement and support from OSG's top management.

Post Scriptum

Last year, Mine was interviewed at Cloud Connect Chicago where she delivered a keynote address.
A two-minute video is better than a thousand words, and I have embedded it here for our readers.


Friday, April 05, 2013

Please Help Tim Brown!

Following Project Dandelion from my web site and many posts on how to employ and maximize the unique gifts of highly functional autism spectrum persons (see Silicon Valley and Autism. A creative approach.), I receive letters from many people.

Spectrum candidates have qualities not common in the mainstream population: for example, photographic memory, a high tolerance for repetitive work, an obsession with perfection, and spotless quality in what they deliver.

Here is an example of such a letter. I appeal to the entire community on LinkedIn, my contacts and the contacts of my contacts, to help Tim Brown, an actuarial analyst, get employment. He is based in Atlanta.

I will provide the email and the phone number, on request.
You just received a message via the contact form of your website.
The message was sent from http://www.ahrono.com/contact-us/.

-------------------------------------
Message: Hello, my name is Tim Brown.  I am a 30 year old who is
officially diagnosed with Aspergers Syndrome.  I recently was laid off
for my former job as a Actuarial Analyst and was wondering if you could
help me find jobs for people like me.

I have an MS in Mathematics and a BS in Statistics, so I have the
knowledge to be a data analyst, yet I only have about one year of
professional experience, so it is hard to enter the job market.

If you need a resume to evaluate me further, please let me know.

Name: Tim Brown
-----------------------------------------------------------------------------------------------
Note Added April 11, 2013

According to LinkedIn, I reach about 200,000 people via direct connections and their connections. Including the expanded connections, I reach nearly 5,000,000 people, 2.5% of the entire LinkedIn membership worldwide.

I did not get even one inquiry about Tim Brown. Just a fellow parent from the Mind Institute tried to help, giving me advice on how to advertise Tim Brown.

She had a series of bullets like "Look for professional organizations that are in your line of work; are there actuarial groups that have meetups?" or "Be sure you interview well; be prepared to answer the challenging questions," and so on.

I shook my head. People on the autistic spectrum can NOT do all this. Even mainstream job candidates find it hard, and it is not their fault. See this infographic:


If, one second after a job is advertised, a position receives 200 applications, and if each day a number of new resumes equal to the population of Luxembourg is uploaded to Monster.com, how can Tim Brown compete?

We definitely need one or more organizations dedicated to employing qualified candidates who happen to be HFA (Highly Functional Autistic). If Tim was able to graduate with a master's degree in Mathematics and a B.Sc. in Statistics, how come we cannot incorporate him into our productive society?

Why Moore's Law is thriving at Intel

Stephen S. Pawlowski, Intel's Senior Fellow and Chief Technology Officer for the Datacenter & Connected Systems Group (DCSG) and General Manager for the Architecture Group & DCSG Pathfinding, was interviewed by Nages Sieslack.

Stephen will be at ISC'13 as a keynote speaker on Moore's Law and Intel's future designs. Here are eight reasons why Moore's Law* is alive and well at Intel:


  1. The bill of materials, not the price, for just the electronics components of my first PC design, a 25 MHz 80386 based machine was almost $2000, and its performance pales in significance to a cell phone today.
  2. The Human Genome has been mapped and now we can use that information to find cures for diseases that in 1982 were considered unimaginable.
  3. Social networking has become pervasive. One day my daughter 'texted' my wife and I not more than five minutes after we left the restaurant we were at to tell us that she had read, on Facebook, what we had just eaten. This would not be unusual, but my wife and I actually don't do Facebook. :-)
  4. Tremendous amounts of information are at our fingertips. Encyclopedia book sets, which were essential when I was a kid, are a thing of the past.
  5. Cameras are everywhere, Pictures are ready to share instantaneously and film is almost a thing of the past!
  6. In 1982, CDs came out in August…I still bought my music on vinyl records and cassette tapes. High definition televisions are now ubiquitous and cheap.
  7. Young people rarely wear wrist watches any more.
  8. Cell phones in 1982? What cell phones?
I would add the annual 25 petabytes of data from the Large Hadron Collider's search for the Higgs particle, which is larger than all of the above.

* Moore's Law: the overall processing power of computers will double every two years.
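As a rough illustration of what that doubling compounds to over the 1982-to-2013 span the list above refers to, here is the arithmetic in a few lines of Python:

# Growth implied by a doubling of processing power every two years.
years = 2013 - 1982          # the span referenced in the list above
doublings = years / 2        # 15.5 doublings
growth = 2 ** doublings
print(f"about {growth:,.0f}x")   # roughly 46,000x more processing power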


A comment to the HPCwire "HTC, Big Data and God" article

How to take the wind out of the sails

Post #1 Submitted by wmartinusa on Apr 4, 2013 @ 11:35 PM EDT
Wonderful article. I'd like to make every Big Data salesperson read it and then tell me, what's OSG and HTC? And can you match them? But the attitudes out there don't leave room for the thoughtful reflection this article wants. Ah well, maybe the God particle will leave me an epiphany.
The name "God Particle" for Higgs boson - got another confirmation three weeks ago,  at the Moriond Conference in La Thuile, Italy, from scientists at the Large Hadron Collider's ATLAS and Compact Muon Solenoid experiments. CMS spokesman said:
"The preliminary results with the full 2012 data set are magnificent and to me it is clear that we are dealing with a Higgs boson though we still have a long way to go to know what kind of Higgs boson it is," 
The term "God particle" comes from Nobel Prize-winning physicist Leon Lederman who  wrote the  1993 book   The God Particle: If the Universe Is the Answer, What Is the Question?   You can buy it on Amazon. Dr. Lederman was born in New York. His father operated a hand laundry. He is a one of the top physicists in the world and a former Director at Fermilab, where the search for Higgs' started.

Dr. Leon Lederman, Nobel Prize
Ex Director of Fermilab
The great majority of scientists did not like a physics name with God in it. See "For Chrissakes, Stop Calling It 'God Particle'". Not everyone else loved it either: the atheists and the very religious rejected it. But everyone else liked it, and the name stayed.

You call it an epiphany. That is what it is, and I hope the Big Data community will pay attention. One cannot discover the God Particle simply by using commercially backed tools like Hadoop. :) But one day these HTC Big Data methodologies will pay off and reach the mainstream.

Read more



