Sunday, July 27, 2014

Apple Siri, or how to make HPC as easy to use as Google or Dropbox

Chris Dagdigian, a founding partner at BioTeam, specializes in research computing and infrastructure issues in the life sciences. He is versatile and has produced excellent training courses on AWS and on cluster administration with Sun Grid Engine and LSF. His work is very creative; one example is the Vertex Pharmaceuticals VAMPIRE cluster, which replaced an existing Top500 supercomputer. He inspires.

In the article The "Knows" and the "Know-Nots" I quote:
Unfortunately, there is no one type of tech consumer. Someone winds up unhappy. If the design and interface are too technical, novices feel incompetent, shut out and stupid; if the experience is too simple, tech geeks feel insulted and talked down to.
Probably no field is more affected by the disarmingly simple discovery that "there is no one type of tech consumer" than high performance computing. I found BioTeam messages aimed at each of the two audiences.

For the Know-Nots: Using Siri Voice for High Performance Computing, BioWorld'12

I happened to learn recently that the Apple Siri team is actively looking for engineers experienced in Grid Engine and other high performance computing applications. The screencast below is from the BioTeam exhibit at BioWorld'12. As stated on the web page,
BioTeam has had this cooking in the lab for a while now but now we can discuss it publicly! We’ve been working with our partners Accelrys (now Biovia) ... to demonstrate what we think is a really cool concept – using Apple Siri on iOS devices to interact directly with a cloud platform (BT Compute) and scientific analysis software 

If you are too technical, you may find this insulting. But surveys show that at least 95% of all scientists hate the idea of the command line and have never worked with clusters and supercomputers.
What is astonishing, and relevant, is Apple's interest in performance computing as its Siri team expands with supercomputing developers.

Biovia's Pipeline Pilot lets you "rapidly create, test and publish scientific-based services that automate the ability to access, analyze, report and share scientific data". Everything is "rapid" except the access, for any trained scientist who is not also a programmer or system administrator.

For the Knows: 2014 BioIT World, Trends from the Trenches

I am providing only the link to this slide presentation, as it has 105 slides and lasts one hour. These are the most recent ideas that Chris presented to an admiring audience of top geeks.

Going through the slides, some disturbing conclusions are impossible to ignore. The presentation is intended for a tightly knit core of experts whose know-how exceeds that of 99% of IT-literate scientists today.

Everyone needs to learn orchestration

So if you are a scientist or a sysadmin, what you know is no longer sufficient. To survive you must learn orchestration; if not, your job is in danger. But who wants to learn such a complex and undelightful skill?

Sysadmin and IT staff need to retrain (if not...)

These skills are NOT widely available. Forcing people to retrain goes against human nature and creates grief and resistance. Currently these skills are provided by outside companies like BioTeam or Cycle Computing (which uses Chef with CycleServer, where all the orchestration is taken care of). Engineers with these skills cannot be hired through HR; they probably require an acqui-hire. Food for thought.
Or see this slide

As if we had not heard it in all the other slides: "sysadmins (Windows and Linux) who can't code will have career issues". Ouch!

My take

Using portable devices to access high performance clusters is one way to make scientific applications accessible to the estimated 12 million scientists worldwide.

In theory we want to reach them all, the same way laptops and desktops reached every single scientist and engineer.

We don't want to create another ivory tower inhabited by a new breed of super-humans, the so-called "Scientist/Sysadmin/Programmer" species, and make performance computing IaaS and platforms even more difficult to reach. We do not want to send all IT people off to take a Ph.D. in orchestration. There is a limited, very limited supply of people comparable to Chris Dagdigian or Jason Stowe (from Cycle Computing).

What I prefer

What we want is to create an access platform for high performance applications that will be as easy to use as Google or Dropbox. Is this possible? Yes, it is. Keep reading this blog.

I know Chris presented the options evenhandedly, and I thank him for inspiring these thoughts.

Post Scriptum

Nobel Prize winner Peter Higgs calls numerical mathematics one of the ugliest parts of mathematics. See his interview from the Nobel Prize ceremony below. We do not want to torture IT people by telling them to learn something even a Nobel laureate (and what a laureate!) detests. Many people love numerical math; let them do it and make money from it, because it is not part of core IT skills.

Friday, July 25, 2014

David's magical projects

How do you know what you are meant to fix?
In general, if you have a real need for this thing, then it awaits you to lift it up.
Rebbe Menachem Schneerson

This is a continuation of the blog article My son David built a gaming computer. David and his extraordinary tutor, Scott, who works for Carole Watilo at Progressive Employment, are presenting David's creations. David built everything you see himself. His know-how is far deeper than the false impression we may get when David speaks.

As a tutor, Scott is strengthened by David's devotion, while David as a student feels recharged, renewed and filled with new energy. Scott enables David to live his life more meaningfully and to confront the hardships of his disability, up to the point where they both forget about it.

Everything has its limits, even darkness. As the Zohar says, "When the world was made, a limit was set for how long it would function in confusion."

Note: All projects are part of the Mechatronics course at Sierra College, Rocklin, CA, USA. David will complete the course in 2015.

Monday, July 21, 2014

The "Knows" and the "Know-Nots" plus Learning from Potatoes

The "Knows" and the "Know-Nots"

Here is a quote from a memorable article in Scientific American by David Pogue:
We recognize and name many of our demographic tribes: liberal and conservative, wealthy and poor, gay and straight. We classify ourselves that way, we watch out for oppression against one group or another, we pass laws to ensure equal treatment. But when will we recognize the existence of the two different types of technical consumers—the Knows and the Know-Nots?
Over and over again, I run into products that have been tacitly designed for either group. The creators have a mental picture of a product's audience and the users' technical experience. You can tell from terminology, the amount of detail in the instructions, the number of steps required to accomplish anything.
Unfortunately, there is no one type of tech consumer. Someone winds up unhappy. If the design and interface are too technical, novices feel incompetent, shut out and stupid; if the experience is too simple, tech geeks feel insulted and talked down to.
Probably no field is more affected by the disarmingly simple discovery that "there is no one type of tech consumer" than high performance computing (including high throughput computing, Big Data and more).

The "Knows"

Most documentation, papers and user groups in performance computing are for the Knows. They set the tone. I have experience with the HTCondor-users groups, Grid Engine groups, and Open Science Grid All Hands meetings. There are two types of Knows: (1) those who really know and (2) the False Knows, i.e. Know-Nots mimicking Knows.

The Know-Nots

In my experience, the Know-Nots make up at least 95% of the HPC user population eager to use powerful resources. They feel humiliated, fed up, neglected, frustrated, and discriminated against, left as second-class citizens among research and engineering professionals.

This is one of the reasons we started the San Francisco High Performance Computing Meetup.

Democratizing access to High Performance computing ?

Access? What access? The Top500 supercomputers are not only very hard to use but almost impossible to reach. They are owned by governments, defense, security and intelligence agencies, and academia, and a few are privately owned by large companies. They have no public access and are managed by policies issued from a hidden Castle out of a Franz Kafka book, the one you can see but never reach.

The same applies to Open Science Grid. I talked about this subject before; see Make money with Entrepreneurial Performance Computing.

So it is time to switch focus away from very expensive computers no one can access and from large throughput-computing grids administered by policies. They are all modern Kafkaesque Castles.

Public Cloud and Hybrid Solutions

So before we start simplifying access to HPC applications per se, let's focus on enabling public IaaS clouds like AWS, Azure, and Rackspace to run ever more sophisticated HPC/HTC solutions.

This is happening already

Dealing with Eastman Chemical

In 1998, as part of Genias Software (the company founded by Wolfgang Gentzsch that later became Gridware), we managed to get Eastman Chemical as a client for CODINE.

See a copy of the newsletter from December 1998

At the time, we made an effort to understand what Eastman does. We wanted to learn, we wanted to please, we wanted to go the extra mile. We were the underdogs; LSF was the market leader, and their people wore a semi-permanent grin of superiority at every trade show where we met. Dr. Bruce Wilson from Eastman, a very able researcher, liked not what we knew but our desire to know.

What the customers want: Learning from potatoes

This is the secret of Eastman's success. They set no goals like discovering some sort of Higgs particle in twenty years. They want to deliver what the customer wants, now. There is art and magic here.

So the problem is, if you are a geeky Know, sitting at your desk, feeding in numbers, how do you satisfy the desires of a very sophisticated British CEO who needs to deliver special packaging for potatoes?

Friday, July 18, 2014

New! The San Francisco High Performance meetup

Timur Bashirov and I are the co-organizers of the new San Francisco High Performance meetup:
High Performance People getting together to discuss topics in High Performance Computing (HPC). We welcome a variety of topics ranging from "latest web- and cloud- technologies making High Performance Computing more accessible" to "efficient scaling simulation workflows beyond 50,000 cores". Whether you are a web developer, cloud architect, scientist/engineer, a seasoned professional, or just curious you'll find cool people and learn new things at our meetings.
My focus is on making high performance supercomputing solutions accessible and easy to use for any of the estimated 12 million scientists worldwide. Right now supercomputers are isolated and accessible to merely 50,000 scientists on the planet, each of whom needs special training tailored to a particular machine. No two supercomputers are alike, with few exceptions.

It shouldn't be necessary to undergo Ph.D.-like training just to access high-end performance computing. In its ultimate incarnation, high performance computing would be as easy to use as your laptop today.

Please join us and bring your ideas.

Friday, July 11, 2014

The breakthrough success can only be mystical

From the book "My Rebbe: The Life and Teachings of Menachem M. Schneerson, the Most Influential Rabbi in Modern History", the author Adin Steinsaltz, the most famous Talmudist of our generation, wrote:
The Rebbe once explained to me that many individuals cannot commit to a cause based only on systematic thought and reasoned conclusions. The trend has been, instead, to relate to symbols that are less abstract, and more tangible and emotional. The Rebbe understood that the rampant celebrity worship and the power of “brands” characterize much of society, cutting across social classes.
I posit that the Rebbe believed he could lead his followers to his core principles... History has proved that the Rebbe’s analysis and approach were correct and effective. In other words, the strategy worked – and it continues to work, fueling his mission, even after his passing. Both the Chasidim and others who were initially outsiders began by establishing an emotional bond with the Rebbe. From that bond, they continued further and further ...
I took this text into a word processor and replaced "The Rebbe" with "Steve Jobs", "Chasidim" with "Apple followers", and "individuals" with "users":
Steve Jobs once explained to me that many users cannot commit to a cause based only on systematic thought and reasoned conclusions. The trend has been, instead, to relate to symbols that are less abstract, and more tangible and emotional. Steve  understood that the rampant celebrity worship and the power of “brands” characterize much of society, cutting across social classes. 
I posit that Steve Jobs believed he could lead his followers to his core principles... History has proved that Steve Jobs’s analysis and approach were correct and effective. In other words, the strategy worked – and it continues to work, fueling his mission, even after his passing. Both the Apple followers and others who were initially outsiders began by establishing an emotional bond with Steve’s vision. From that bond, they continued further and further 
Isn't this truly amazing?

An apple can be bought easily in every supermarket or, better, a farmer's market.

But see what happens when another Rebbe,  The Sanz-Klausenberg Rebbe throws apples to Chasidim after the end of Simchat Torah in his annual "Apple Tisch."

Saturday, July 05, 2014

My son David built a gaming computer

This is a guest blog from my son, David Ahronovitz. As part of his summer program, David built a gaming computer with the help of Scott Boughton, an extraordinary young man and fellow student at Sierra College, who is his tutor. Scott postponed going on to Brigham Young University to help David graduate in Mechatronics next year. Our family was left speechless when we learned of his decision.

Scott works for Carole Watilo from Progressive Employment Solutions. Alta California provided the funding.

To fully understand this article, please read The Experiment.

My name is David Ahronovitz. During my summer 2014, I built my first computer. I named my computer Enermax. I gave it this name because I found it on the computer case I used. The first motherboard that I tried to use was an Asus A55BM-E motherboard with a MicroATX form factor. This motherboard was chosen because I originally planned on upgrading my 6-year-old Lenovo K210. This old computer used the MicroATX form factor. However, I decided to build a completely new computer using the new motherboard and an Enermax case.

To begin building, I installed the processor. I used an AMD A10-7700k APU with Radeon R7 graphics. The clock speed for this APU is 3.4 GHz. I was careful not to touch the gold on the bottom as I placed it on the motherboard. I added heat sink compound to the top of the processor. I tried to attach the APU fan and it didn’t fit on the motherboard. I tried multiple aftermarket APU Coolers but none of them fit the MicroATX form factor of the motherboard. I just gave up and used the APU cooler that came with my processor and this finally worked.

The next part I installed was the DRAM. The type of DRAM that I used was DDR3. I installed two 8GB modules for a total of 16GB. I installed the power supply into the computer case. I used a 550 Watt power supply. The power supply fit perfectly into the case and I secured it with a few screws. After the power supply was installed, I placed the motherboard into the case. Once the motherboard was secure, I plugged the power supply cables to the APU and the main motherboard power connections.

The computer had the basic components to run POST at this time. I connected the front panel wires so I could operate the computer and I turned the computer on. The computer passed the POST sequence. Since the basic configuration worked, I added the Blu-ray drive, graphics card, Wi-Fi adaptor, and the hard drive. I rebooted the computer and I started to install Windows 7. Everything worked correctly except that my motherboard did not support USB 3.0. I wanted USB 3.0, so I decided to get a different motherboard.

The new motherboard I bought was an MSI A88X-G45 GAMING motherboard. With this new motherboard, I also purchased a new graphics card the MSI GFX 750 TI. This new motherboard supports up to 8 USB 3.0 ports and can use up to three graphics cards. It is also designed to be overclocked to make it faster. I have not used these features yet but this will allow me to upgrade in the future. I removed the MicroATX motherboard that was in the case and replaced it with the bigger and better MSI motherboard. I reconnected all the cables and hardware, and then I turned the computer on. The computer ran for a few minutes but during the updating processes it shut down and would not POST.

I called MSI support, and after talking to them they told me to return the motherboard. I removed the motherboard and exchanged it for a new one. I reinstalled the new motherboard and it passed POST but would not load the operating system. I discovered that there was a bad cable between the motherboard and the hard drive. I replaced the SATA cable with a new one and the computer worked. After about 20 hours, a few trips to Fry's, and great support from my parents and mentor, I can finally say, "I built my own computer!"

After I put the computer in my room I bought some new speakers for it. I got the Logitech speaker system Z623. These speakers provide a nice sound and I feel like I’m at the movie theater. Since my computer was a gaming computer I also bought a gaming mouse and keyboard. These were also from Logitech. The keyboard is black and the lights are red. The keyboard is a G510s and the mouse is a G400s. Both of these came with software to help them be game specific. I played a star wars game to test them and they work great.  

Thursday, July 03, 2014

Make money with Entrepreneurial Performance Computing

A shocking discovery

Supercomputers and very large clusters have such limited access that at least 99.5% of the engineers and scientists in the world cannot use their applications. Powerful simulation is an essential tool for creating wealth in our society. Right now it is a grossly underutilized tool.

In this blog I propose creating a portal for performance computing applications using recent developments in web services for science applications. This can only be successful as an entrepreneurial effort financed for profit, Silicon Valley and San Francisco style.

I also propose, again, creating incubators focused on performance computing, operating in all universities and research organizations currently funded by NSF, DOE, the Health Department, Defense and National Security.

I hope this is the right way to reach more than 90% of researchers and engineers, up from a meager 0.5%.

I use this blog to create awareness, and I hope investors and interested institutions will contact us to help get started.

Product and Services know-how in High Performance / High Throughput Computing

It is not here. It's lacking.

A search for the keyword "HPC" on this blog lists thirty-eight entries. One article relevant to this topic is Why HPC TOP500 never made any money and never will in its present shape. Another relevant entry is New Ideas for starting any business seeded while attending supercomputing events.

In those posts I argued that any product or service in a startup must have a clear description of the people who want it. We have no idea who the people wanting each performance computing application are.

Figure 1: How to make products in 21st century

As the illustration above suggests, we want to make people want our "things", yet we don't care who they are. We assume they will flock in admiration to some giga-dinosaur from the June 2014 TOP500 list:
Tianhe-2, a supercomputer developed by China’s National University of Defense Technology, has retained its position as the world’s No. 1 system with a performance of 33.86 Pflop/s (quadrillions of calculations per second) on the Linpack benchmark, according to the 43rd edition of the twice-yearly TOP500 list of the world’s most powerful supercomputers.
This is an exotic animal people see in zoos; no one wants it at home. I don't even know what it does, exactly. We don't know how much it costs, but it is not for sale anyway. We don't know who its users are, what they are doing, or what their goals are.

Tianhe-2 has retained its TOP500 leadership for three lists in a row. As far as I know, this has never happened before. It seems research and industry have lost interest in creating supercomputers with ever more quadrillions of calculations per second, consuming power that costs hundreds of millions of dollars over a projected lifetime.

How many researchers (engineers and scientists) are there worldwide?

A recent discussion (November 2013) on ResearchGATE estimates about 10 to 12 million scientists worldwide 
In Polish universities and other scientific institutions ResearchGATE is not as popular as in other countries. So if we assume that globally 35-40% of people in science have a profile on ResearchGATE, it can be assumed that in the "global market" there are about 10-12 million people in the "scientific sector".
According to a February 2014 US Congressional Research Service report:
In 2012, there were 6.2 million scientists and engineers (as defined in this report) employed in the United States, accounting for 4.8% of total U.S. employment. 
56% are in computer-related occupations and 25% are engineers. As the US has between 40% and 50% of the world's researchers, the two data sources confirm each other.
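As a rough cross-check, a minimal sketch of the implied worldwide total, using only the figures quoted above:

```python
us_scientists = 6_200_000  # 2012 Congressional Research Service figure

# If the US holds 40% to 50% of the world's researchers,
# the implied worldwide totals are:
world_low = us_scientists / 0.50   # 12.4 million
world_high = us_scientists / 0.40  # 15.5 million

print(f"{world_low / 1e6:.1f} to {world_high / 1e6:.1f} million")
```

This lands in the same ballpark as the 10-12 million ResearchGATE-based estimate.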

What type of people use a supercomputer?

The common answer is:  Super People 
Actually, physicists, meteorologists, global warming people, etc. Anyone modeling complex behaviors that they can describe mathematically can use supercomputers to simulate the interaction of those behaviors. 
Also, engineers will be using them to model parts and whole cars without having to actually make them first. They can model a car's performance, strength, etc. on computers without having to actually make the car first. Design it on computers, model its performance, then build it. It used to be a whole lot of trial and error to get things to work right. Now they figure it all out on the computer beforehand, and build it straight from the computer specs...

How many users ("Super People") for TOP500 ?

Such lists are not published for the TOP500. I learned from my very knowledgeable friends:
HPC systems in industry normally run application packages from ISVs, and these applications are far from reaching the peak performance of the HPC systems they run on. The reason is that solving a specific problem with an ISV package does not in general produce long full matrices, as required for LINPACK.
The focus is on the FLOPS rates that can be reached for a given HPC system under optimal conditions. The practical utility is irrelevant at these levels.

Let me assume that each supercomputer listed on the TOP500 has 100 users. In this case the total could be 50,000 researchers and scientists who directly work with a supercomputer or enormous cluster.

This number is probably a maximum, because access to supercomputers or to large grids like Open Science Grid is restricted via policies and other arbitrary means. For example:
Every six months, Lawrence Livermore National Laboratory gets around 20 to 25 proposals from different national laboratories and accepts around 10 of them. At any given time there are usually one to four projects using the supercomputer. Priority is given to whatever project is deemed most important.

How many people use simulations?

Wolfgang Gentzsch's research, based on sales of workstations and PCs dedicated to simulation, estimates about 20 million simulation users worldwide. They are limited to their workstations or very small grids.

Access to supercomputer and large-cluster-based applications is not pleasant. See TACC XSEDE Manage Permissions with Access Control Lists. There is no way to pay for access, as these are government machines. Getting access as a private company is probably as complicated as paying to go on a NASA space mission.

99.5% of the researchers have no access to supercomputers

We have 11 million scientists per the ResearchGATE estimate and only 50,000 supercomputer users. This means roughly 0.5% can use these wonderful applications.

99.8% of  simulation users don't have access to a supercomputer

This is a simple calculation using the estimated 20 million workstation simulation users versus the estimated 50,000 supercomputer users.

Even if there were 500,000 TOP500 users, the conclusions would be the same.
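The arithmetic behind these percentages is a short sketch, using the estimates quoted in this post:

```python
# Back-of-the-envelope numbers from the text above.
top500_systems = 500
users_per_system = 100                  # assumption stated earlier in the post
supercomputer_users = top500_systems * users_per_system  # 50,000

scientists_worldwide = 11_000_000       # ResearchGATE-based estimate
simulation_users = 20_000_000           # Wolfgang Gentzsch's estimate

no_access_scientists = 1 - supercomputer_users / scientists_worldwide
no_access_simulation = 1 - supercomputer_users / simulation_users

print(f"{no_access_scientists:.1%}")    # 99.5% of scientists
print(f"{no_access_simulation:.2%}")    # 99.75%, i.e. roughly 99.8% of simulation users
```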

Do we need simulations in industry?

Yes, we badly need them, but the simple calculations above show we cannot deliver the benefits of compute-intensive applications and simulations where they are most needed: not in academia, but in industry.

According  to Chicago Tribune:
SWD is one of 10 small and midsize manufacturing firms offered the opportunity to work with the previously announced Illinois Manufacturing Lab during its initial launch.
The laboratory is an initiative of Gov. Pat Quinn propped up by $5 million in seed funding...
"This is a resource that typically small companies, even if they could pay for computing time, wouldn't know exactly how to use it," said Caralynn Nowinski, UI Labs executive director and chief operating officer. "Many companies don't even know how to use modeling and simulation in their design."
This is an exceptional initiative, but one swallow does not make a summer. We need a continuous flow of entrepreneurial initiatives.

Entrepreneurship in supercomputer and big cluster applications

I explained in a previous post how incubators work. They may offer a student, a group of students, and/or researchers $15,000 to $25,000 for 6% equity; see How to Evaluate an Offer from a Startup Incubator. This money is used to create a solid proposal for a venture capital firm or other investors.

While at Open Science Grid I brought up this idea; the same at the University of Wisconsin CHTC. There are many projects funded by NSF or DOE at, say, $3 million. Some other projects have as much as $22 million in funding.

Assume there is an incubator designed to create businesses from performance computing applications. With a $3 million investment, one can fund between 150 and 200 startups. Assume 90% fail; we still have a minimum of 15 successful startups with a combined value many times the initial $3M investment.
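The math is simple. A sketch, assuming a $20,000 seed per startup (roughly the midpoint of the $15,000-$25,000 offers mentioned above):

```python
fund = 3_000_000                   # total incubator investment
seed = 20_000                      # assumed midpoint of the $15,000-$25,000 range

startups = fund // seed            # 150 startups funded
failure_rate = 0.90                # the pessimistic assumption from the text
survivors = round(startups * (1 - failure_rate))  # 15 successful startups
```

At the cheaper $15,000 seed the same fund stretches to 200 startups, which is where the 150-200 range comes from.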

No one has listened yet. I keep my hope alive.

I want to stress this point. In HPC, the predominant belief among researchers is that no venture capitalist will ever select an HPC project when there is so much money to be made in any trivial social application that catches attention.

This has to change. When we buy a GPS device, we don't pay a monthly fee or a tax for GPS support. We only pay the price of the device. The US government made GPS a free service worldwide.

Paraphrasing Paul Krugman in the New York Times, it seems our educational system for high performance computing does everything possible to bring back "patrimonial capitalism", where the status quo matters more than effort and talent. This encourages students to stay employees for the rest of their lives. It thwarts any entrepreneurial thought. Students may believe there are no other places to do their work, and that wealth is inherited, not earned, at least in HTC/HPC.

Probably, if they want more freedom and money, the only way is to join a social network software company and forget about performance computing. One acquaintance said he wanted to erase the HPC experience from his resume, because it makes him unemployable.


  1. Virtually no engineers and scientists have significant access to high performance computing applications.
  2. Supercomputer engineers know only vaguely who the users of the applications they create are, or might be.
  3. Scientists and engineers want to access supercomputer applications as easily as their desktops.
  4. Research has shown they do not care what is behind the GUI (infrastructure, command line, scripts, and all the messy stuff).
  5. Mainstream industry and supercomputers almost never intersect.
  6. Adding scientists to a project is far more vital than adding cores and other inanimate resources.

Possible Solutions: Science on Demand

Recent developments have made it possible to use web services to expose complex applications through web interfaces that everyone with a PC or Mac is familiar with.

Since 2010, the NERSC Web Toolkit (NEWT) has brought High Performance Computing (HPC) to the web through easy-to-write web applications.

Using a TACC-developed API, we have the Agave API: a "service you have access to over 600 of today's top plant biology applications on the latest HPC and Cloud systems".
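To make the idea concrete, here is a minimal sketch of what a web-service job submission to such a portal might look like. The endpoint URL, field names, and application name are hypothetical illustrations, not taken from the NEWT or Agave documentation:

```python
import json

PORTAL_URL = "https://portal.example.com/jobs"  # hypothetical endpoint

def build_job_request(app, inputs, cores=16, walltime="01:00:00"):
    """Assemble the JSON body a portal could accept for an HPC job.

    All field names here are illustrative; a real service such as
    NEWT or Agave defines its own schema.
    """
    return json.dumps({
        "application": app,
        "inputs": inputs,
        "resources": {"cores": cores, "walltime": walltime},
    })

# A scientist submits a run without ever seeing a command line:
payload = build_job_request("protein-fold", {"sequence": "input.fasta"})
```

The point is that the scientist fills in an application name and input files; the cluster, scheduler, and scripts stay hidden behind the POST request.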

These developments have been presented in academia and have given rise to a new term, Science as a Service, included in a presentation by Ian Foster, How on-demand computing can accelerate discovery.

All this work is ready to be commercialized for a much wider audience than just science. Academia has a different mission. NERSC's NEWT rarely mentions Agave, and Agave rarely mentions NEWT. The two systems are not yet ready for the mainstream.

Startups can make HPC and HTC applications mainstream. One startup in San Francisco decided to offer complex materials simulations "as a service", "on demand", for everyone. The company hopes to recruit at least 1,000 more scientists who have never used a supercomputer before.

The Performance Computing Portal

What I propose is this

Figure 2: How to increase access to performance apps from 0.5% to perhaps 90%  for engineers, researchers and scientists

Each application on the portal will have:
  • A description of what it does
  • The expertise level required
  • Who may benefit from using the app
  • What a user may achieve
  • A discussion group
  • A rating system (one to five stars)
  • Prices for on-demand usage
  • ... more, according to feedback after launch

Similar portals as inspiration

Have a look at Product Hunt "the best new products, every day"
Figure 3: Startups expose their applications on the Product Hunt portal. Investors watch; peers comment and vote.
Figure 4: The comments screen for the jobbox application
Note that there are some trivial applications (which does not mean they cannot be successful), like Headspace, which teaches you how to meditate. The portal is like a farmers market for small startups or would-be startups.

Ubercloud Marketplace

It offers computing as a service for professional engineering and scientific simulation projects. It is an exciting development that creates HPC capabilities in organizations lacking in-house resources. But it does not offer the instant gratification that the Product Hunt portal does. I know, HPC is a bit more than a short guide on how to meditate.

Interest from IaaS public providers

It is no coincidence that the winners in the 2014 Gartner Cloud IaaS Magic Quadrant also have the infrastructures best prepared for HPC applications. Amazon Web Services is one.
Microsoft's acquisition of GreenButton shows how serious the Microsoft Azure cloud IaaS is about performance computing applications. This creates new, unlimited opportunities for entrepreneurs in HPC/HTC.
Amazon, Azure and others will consolidate their leadership in IaaS cloud services for HPC/HTC applications. These applications are the most resource intensive and have significant revenue potential.

Bottom Line

We are ready to boogie. I want to thank all the underpaid and not-yet-recognized extraordinary HPC/HTC engineers, researchers and scientists who planted these ideas in my head. My entire career has been spent acting as a talent agent for exceptional underdogs. This is my message to them: Don't leave. Your time has come.
