The definition of a cloud is very simple:
1. On-demand computing, i.e. the ability to get a resource when I need it, in a matter of minutes.
2. Pay-per-use, i.e. the ability to pay only for what I use.
The rest is implementation detail. From an end user's perspective it doesn't matter much whether I get the above value from an internal or an external cloud.
If the cloud is a box and I don't know what's inside, then as long as the final result delivers (1) and (2), we call it a cloud. Everybody assumes everything has to be automated by code. Ideally, yes. But in HPC and EDA (Electronic Design Automation) we still need human intervention to detect bottlenecks and fix them; when we do, performance literally explodes. There is not yet an automated way to achieve the same result in high-end computing.

Some people think that if an EDA application runs on AWS, it is a cloud and it MUST be good. It may be a cloud, but it is NOT the only way of delivering (1) and (2) from the definition above, and the tools Amazon Web Services offers cannot match the performance of specially built, compute-intensive, purpose-built computational farms.

The nature of compute-intensive loads does not lend itself to the automatic elastic resource feature that defines a cloud. We do not know whether a performance bottleneck is due to an insufficient number of licenses or to a lack of additional servers, nor which servers in the cluster will best resolve the bottleneck. There are tools, but we need human intervention. Commercial distributed resource management software can easily provide historical resource usage for billing.
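The triage described above can at least be narrowed down from scheduler data before a human steps in. A minimal sketch, using hypothetical queue statistics (counts of jobs pending on licenses versus on free server slots, names and thresholds are illustrative, not from any specific resource manager):

```python
# Hypothetical scheduler/license-server snapshot; the field names are
# illustrative and do not come from any particular resource manager.
def classify_bottleneck(pending_on_license: int, pending_on_slots: int) -> str:
    """Rough first-pass triage: is the farm starved for licenses or servers?

    A human still has to confirm the diagnosis (and decide which hosts or
    license features to add); this only narrows the search.
    """
    if pending_on_license == 0 and pending_on_slots == 0:
        return "no bottleneck"
    if pending_on_license >= pending_on_slots:
        return "add licenses"
    return "add servers"

# Example: 40 jobs blocked on licenses, only 5 blocked on free slots,
# so more licenses would unblock more work than more servers.
print(classify_bottleneck(pending_on_license=40, pending_on_slots=5))
```

Even this crude rule shows why the decision resists full automation: the two pending counts interact (freeing licenses can immediately create a slot shortage), which is exactly where human judgment still pays off.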

A proto-cloud (my terminology) delivers now, when compared to AWS's limited EDA cloud implementations.

Perhaps the purists are disappointed. But customers want the fastest time to market. They cannot afford to wait for that pure cloud.
