Friday, November 19, 2010

Amazon and the extinct dinosaurs

Here is my tweet on this subject:
  Rank 230 breaks a new era in IT: it has billing on demand. All entries must disclose $/yr/Rmax, or be extinct dinosaurs
The point I wanted to make is that in HPC, a price of $1.60 per node per hour is deceiving. To build an HPC configuration able to reach #230 on the TOP500, we need more nodes than ever before, and the pricing was not devised for these kinds of volumes. The $25M figure assumes 100% usage, which is unrealistic, and it assumes on-demand prices, when one can use reserved-instance pricing. The combined effect of lower utilization and reserved instances may reduce the cost to $6M to $8M, IMHO.
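Here is a back-of-the-envelope sketch of that estimate. The node count, utilization, and reserved-instance discount below are my illustrative assumptions (the node count is simply reverse-engineered from the ~$25M headline figure), not Amazon's published numbers:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_cost(nodes, rate_per_node_hour, utilization=1.0, reserved_discount=0.0):
    """Rough yearly bill for a cluster of EC2 nodes.

    utilization:       fraction of the year the cluster actually runs
    reserved_discount: fractional price cut from reserved-instance pricing
    """
    return (nodes * rate_per_node_hour * HOURS_PER_YEAR
            * utilization * (1 - reserved_discount))

# ~1,784 nodes at $1.60/hr reproduces the ~$25M/yr headline
# at 100% on-demand usage
on_demand = annual_cost(1784, 1.60)

# Assumed: 50% real utilization and a 40% reserved-instance discount
realistic = annual_cost(1784, 1.60, utilization=0.5, reserved_discount=0.4)
```

Varying the utilization between 0.4 and 0.6 and the discount between 0.3 and 0.5 keeps the result roughly in the $6M-$8M band mentioned above.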

But in the TOP500, the annual operating costs are mind-boggling. Here is an example of the cost of supercomputers (back in 2005):

As you can see, the combined cost of the Blue Gene/L (rank 1) supercomputer and the ASC Purple (rank 3) was $290 million. If we take a five-year depreciation, that is $58M per year. Add support, sysadmins, power, and rent, at least 40% more, and it costs about $81M per year to operate. Sure, the Rmax of these machines is higher than Amazon's, but Amazon at, say, $8M per year is in this context roughly 90% cheaper.
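The same comparison as a quick worked calculation (the 40% overhead factor and the $8M AWS figure are the rough assumptions stated above, not audited numbers):

```python
capital = 290e6            # Blue Gene/L + ASC Purple combined cost, USD
depreciation_years = 5
annual_depreciation = capital / depreciation_years   # $58M/yr

overhead_factor = 1.40     # +40% for support, sysadmins, power, rent
annual_operating = annual_depreciation * overhead_factor  # ~ $81.2M/yr

aws_annual = 8e6           # upper end of the AWS estimate
savings = 1 - aws_annual / annual_operating  # fraction cheaper, ~0.90
```

Even taking the high end of the AWS estimate, the savings come out at roughly 90%, which is the point of the comparison.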

If there is one thing Amazon does well, the best, it is billing. As soon as anyone uses AWS as a supercomputer, we will figure out very easily the exact total cost of an AWS supercomputer.

So AWS will do for supercomputing what it does now for the data center, which is gradually transforming into clouds. It forces private clouds to do metering and billing to justify themselves; otherwise any data center risks being dis-intermediated in favor of cloud hosting companies. A supercomputer is nothing but a huge private cloud, and now that AWS has entered their territory, all these gigantic, dinosaur-sized TOP500 supercomputers will need to justify themselves economically. This is a real nightmare for the hard core of this industry, but it also means the democratic access to supercomputing applications we always wanted to achieve.

Amazon will soon improve the pricing to make CCI and Cluster GPU instances economically feasible at large scale. Knowing how amazingly they respond, I predict that special pricing will soon be in place to make supercomputing even more affordable day by day.

Friday, November 05, 2010

Cisco clouds: A conversation with a friend

We met at the Avalon Club party, part of CloudExpo 2010. We chatted under British heavy rock louder than ever, eating seared tuna and salmon with wine.

Your large organization wants a private cloud with all data local and secure. You have a very large and growing database.
I promised to share with you what I discovered myself, and maybe all these people will attend the next Cloud Expo event.
This is what I dug out in the last six weeks:

Cloudera has a seminar on how to integrate Hadoop with DB and BI applications.

Anil Madan (eBay, Director of Engineering, Analytics Product Development) leads the team at eBay that is leveraging its data assets for advanced insights and analytics.
Jeff Hammerbacher (Cloudera, Founder and Chief Scientist) conceived, built, and led the Data team at Facebook. The Data team was responsible for inventing and building powerful data analysis applications on Hadoop.

Here are some notes from A NoSQL Evening in Palo Alto:

See Wikipedia for a good definition of NoSQL.
See here the Dynamo paper from Amazon AWS.
See Werner Vogels' blog post about this paper and the link to the PDF text.

There are a couple of startups developing products based on Dynamo, like Basho's Riak.
A few words about membase:

Membase is a "NoSQL" database optimized for storing web applications.
Membase was developed by Zynga, NorthScale, and NHN, and its source code
was released as open source in June of this year.

I don't want to overwhelm you with new, fledgling non-ACID scalable cloud companies and geeks. Yet it is nice to know they exist.

You asked a very good question: where do I go to have the complete cloud solution delivered and supported? At Cloud Expo we had quite a number of product companies, but no one with a total solution, including professional services.

Cisco Systems has an interesting approach.
You see an architecture where cloud computing is simply an overlay on Unified Service Delivery. Cisco identified their market for cloud as the service-provider niche, estimated at $49B per year by 2013. Cisco believes:
 First, we see our partner ecosystem as central to Data Center Business Advantage.  We have a world-class stable of development partners we work with for solution development (everyone from Intel to VMware to SAP, to name a few).  The fact that we have this open solutions stack is one of the advantages of Data Center Business Advantage: it gives customers flexibility and choice and allows them to work with their favorite vendors.
The second thing worth pointing out is the critical role of professional services.  Whether it is services from Cisco Services, from one of our channel partners, or from a blend of the two, services are going to be fundamental to any successful data center strategy.
So this is all about services. One cannot sell a cloud via one product only.

See also this flyer

Please note that Cisco has an ecosystem of 40,000 partners! I saw a WebEx talk at Cloud Expo. The speaker said that before they were acquired by Cisco, they sold 95% directly and 5% via partners. In Cisco it is the other way around: they sell 95% via partners. Even if only 10% of the partners resell WebEx (Gartner is such a customer), this means they have to test the implementations for 4,000 different partners!

Dell also acquired Boomi, as they offer integration services, but this seems more opportunistic rather than a clear vision for cloud offerings.

A large private cloud implementation offers a few options:

1.      Buy one or two "cloud" products, and then Systems Management is left to deal with the complexity and responsibility. This may mean many sleepless nights and risks that sysadmins are not paid to take.

2.      Get informed about what is going on in startup technology and specify a solution within the realm of reality, in addition to what the big boys offer for sale. Get some consultants to help navigate the seas.

3.      Specify a contract and interview some players who provide all services. These players should have the ability to partner with some leading-edge startups.

#2 and #3 may seem more complicated, but they are by far more convenient. Also, in designing the private cloud, only the absolute minimum features (MMFs), the ones required to make your user base happy, should be considered. This will both keep costs down and deliver the cloud in record time.

My recommendation of Cisco is not based on my experience with them, but they have a realistic goal. They want to help all service providers move to the cloud, and this is a real $49B market by 2013, making room for far bigger revenues than any exhibitor at Cloud Expo will ever have. I also like their common platform for Unified Services Delivery, and their understanding of the Internet's future traffic is a great asset. According to Cisco, video will exceed 91% of consumer IP traffic, mobile data will grow nearly 4,000%, and cloud traffic will grow nearly 300%.

This will require new levels of integration between the network and the core cloud data center, plus GPS-precision resource location across multiple sites.

We want our customers to reach us easily from everywhere, and we want the data to be available fast and without degradation. Then we want to reuse as many resources as possible.
In my humble opinion, this is not available off the shelf. But no matter what the budget is, one can start by implementing a proof of concept and then deploy to production.

Oracle suggests we buy an Exalogic server appliance, drop it in the data center, and it can handle all the traffic from Facebook. I love the idea, but I wonder whether it is too beautiful to be true :-)
