Tuesday, February 24, 2015

Revolutionary Cloud Computing e-books from Tim Chou

If you don't understand cloud computing, you are not alone

Let's assume you are not a developer. You don't know what the word DevOps means. You have heard about containers and Docker. You have heard about cloud computing. You have heard of AWS, Azure, and Rackspace. You hear that every single large company, from IBM to Ericsson, is planning to offer cloud products and services.

Although you may be ignored, or told to keep quiet, at any Docker meetup, you are a very important person. If you can understand cloud computing, you may become an advocate. You may become a customer for the cloud. You may be an influencer in future IT strategic decisions, as a C-level executive or a venture investor.

In the end, you may be the one with the budget to fund, hire, and pay best the very engineers and coders who are talking at a meetup where you feel you don't belong.

This is an expanded explanation of what I wrote in my previous blog post, Enterprise Cloud Adoption: Why Is It So Slow?
The great majority of organizations have not adopted the cloud yet, because understanding what the cloud is remains a challenge. All we have is a cacophony of messages; there is absolutely no one who can implement a cloud in an enterprise as easily as 1, 2, 3.
You are not alone

Who is Timothy Chou?

While writing the previous post, I discovered Timothy Chou. Vince Vasquez, the CEO of Cloudbook, introduced me to Timothy's new e-books about the cloud.

Timothy began working for Larry Ellison as President of Oracle On Demand in 1999, long before the word cloud was in vogue. Oracle may be a controversial company, but what it does well is maximize revenue through solid business models. After leaving Oracle, he returned to Stanford University, where he taught introductory computer architecture for 15 years and started the first class on cloud computing. Michael Dell follows him on Twitter.

Timothy is the man we were waiting for. He explains "the fundamentals of cloud computing, without using fancy technical buzzwords and by incorporating plenty of real-world examples. It begins by describing the seven fundamental software business models at play and then describes each layer of cloud computing."


Fig. 1: Model Seven from Cloud Fundamentals, by Timothy Chou

Model Seven is not like AWS or Rackspace:
Model Seven is the model in which all consumer application cloud services live. ...
In other words, I don’t charge you directly for the usage of Amazon.com or eBay or Twitter or Facebook or Google. Instead I monetize with ads, which is obviously the case with Google and many others. I can also monetize with transactions like eBay and PayPal. Every time you buy a book at Amazon you can think of it as inverted meaning because you’re actually paying some part of that to use the Amazon.com software.
All of these have been developed purely as cloud services
This way of looking at the cloud business model is brilliant. It can create habits, and a desire in customers to pay significant amounts of money. It is easy to understand.

What is wrong with AWS?

Nothing, for Amazon itself. But new customers who arrive unwilling to learn the labyrinthine secrets of AWS can become easy prey.

Don't even try to explore the AWS pricing system on your own. Gigaom Research has a report recommending Gravitant, a company offering products and services just to get a grip on AWS pricing changes.

The blogger Nir Eyal explains how you can overcharge for services via UIs using subtle psychology:
First, users tend to take the easiest route; they do whatever requires the least amount of physical and cognitive effort. ... Picking a preloaded amount is simply easier 
 These systems also make it easier for customers to let go of their money. In another sense, they eliminate what Duke Professor Dan Ariely calls the pain of paying. Ariely states, “The agony of parting with our money has to do with the saliency of [seeing] this money going away.” In other words, the less real money feels, the less painful it is to spend and subsequently, we spend more of it.

The rule of thumb

According to Professor Chou:
... the cost to manage software is about four times the purchase price of that software per year. If you pay $4,000 for the software, you’re going to spend $16,000 per user per year to manage the security, availability, performance, and change management required for all the infrastructure running under the application.
.... For those of you who are having a hard time believing this, just go talk to any CIO of any major corporation and they’ll tell you they spend 80-85 percent of their budget managing the existing portfolio of applications and software
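As a back-of-the-envelope check, here is a tiny sketch of that rule of thumb (my code, not Professor Chou's; the five-year horizon is my assumption):

```python
def annual_management_cost(purchase_price, multiplier=4):
    """Chou's rule of thumb: managing software costs about 4x its purchase price, per year."""
    return multiplier * purchase_price

license_price = 4000                                # one-time cost per user
yearly_ops = annual_management_cost(license_price)  # 16000 per user, per year

# Over an assumed five-year life: 4000 + 5 * 16000 = 84000 per user
print(yearly_ops, license_price + 5 * yearly_ops)
```

At a 4x ratio, the 80-85 percent budget figure stops being surprising.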

Where to buy the new e-books


  1. CLOUD COMPUTING: FUNDAMENTALS
  2. CLOUD COMPUTING: OPERATION EFFICIENCY
  3. CLOUD COMPUTING: TRANSFORMATION
I like the format of the e-books.
Fig. 2: Watch, Read, Tell
For each topic, you can watch a video averaging 3 minutes (5 minutes maximum), read the text, or use the slides to tell the story to others.

Perhaps these e-books are not perfect, but they are the best I have seen so far. They give you the basis to go to a cloud developers meetup, stand up, and ask:

"Excuse me Sir. This is nice, but what is your business model?"
Fig. 3: What goes in to create revenues? This is the question.

Thursday, February 19, 2015

Enterprise Cloud Adoption: Why Is It So Slow?

I just could not resist adding this gem Simon Wardley produced on Twitter.
There is a moral to this story. The great majority of organizations have not adopted the cloud yet, because understanding what the cloud is remains a challenge. All we have is a cacophony of messages; there is absolutely no one who can implement a cloud in an enterprise as easily as 1, 2, 3.

At the beginning, we envisaged the cloud as a plug in the wall, something that would deliver all computing resources on demand, as easily as electricity.
In reality, we need developers (the fancy name is DevOps) to create a cloud. In other words, we need the geeks, and we depend on them to operate our cloud. We dread what happens when they leave.

I always hoped the telecoms were best positioned to deliver the enterprise cloud, or any cloud. The problem is not technical, but political: there is a cultural chasm between telecom culture and cloud culture.

A story

Here is a quote from a recent (February 17, 2015) Cisco blog by Timothy Chou:
While cloud computing is based on a number of technology innovations, I’m going to write for the non-technical person who I think needs to understand this major shift.  In the end, cloud computing will affect every business, every industry.  I’ll start this blog by sharing a story.
A few years ago, I was in a meeting with six CIOs of one of the largest healthcare providers. I asked each a question as they introduced themselves: “What are you working on?”
The first CIO, Bill, replied, "I'm working on a strategy to move to the cloud."
Next, I asked Mary, "What do you do?" Mary also said she was working on a strategy to move to the cloud.
We got through every one of them, and every one of them had the same answer.
I asked, "So what does that mean, working on a strategy to move to the cloud?"
They collectively said, "We're really not sure, but we're working on it."




Monday, February 16, 2015

The memorable talk of Dr. Horst Simon at HPC San Francisco Meetup.

On February 10, 2015, Dr. Horst Simon spoke at the San Francisco High Performance Computing Meetup.  His presentation video, Usable Exascale and Beyond Moore's Law is embedded below.

The video has gone viral. I already see it on LinkedIn (including Horst's page) and on HPCwire. How did the "penniless" HPC Meetup in San Francisco manage to produce such a professional video?

What is so special about it? Dr. Simon is one of the biggest names in supercomputing, a star at events like SC15 and ISC High Performance 2015. He is one of the founders of the TOP500 list, which ranks the most powerful computers in the world by floating-point operations per second.
Screenshot from the top500.org home page
When I introduced the speaker to the meetup audience, I asked two questions:
  1. How many of you heard of Dr. Simon before?
  2. How many of you heard of TOP500 before?
From an audience of 42 people, only three or four hands went up for each question. It means 90% of the attendees belong to a different world than the academic and supercomputing events circuit. Can you imagine someone going to Austin, TX, in November 2015 for SC15 and innocently asking, "What is TOP500 and why do we need it?"

The meetup audience in San Francisco is made up not of government researchers, but of HPC adventurers looking for inspiration to create new businesses based on HPC, or simply for decent-paying jobs with organizations that make money and can afford to pay for knowledge. His presentation made the audience feel "clever": he had none of the aloofness typical of some haughty professors, and a charming, engaging, honest style of sharing equally what he knows and what he doesn't.

Watch the presentation and form your own opinion.

Horst is not talking about exabytes of storage; the entire world's storage capacity was estimated at 1,200 exabytes in 2014. He is talking about a supercomputer capable of a peak performance of one exaflops (flops stands for floating-point operations per second). Here is the mind-boggling definition:

one exaflops = 10^18 floating-point operations per second = 1,000 petaflops = 1 million teraflops = 1 billion gigaflops
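For readers who like to count the zeros, a quick sanity check of these conversions (my sketch, not from the talk):

```python
# Unit prefixes as powers of ten
GIGA, TERA, PETA, EXA = 10**9, 10**12, 10**15, 10**18

one_exaflops = EXA                # floating-point operations per second
print(one_exaflops // PETA)       # 1000       -> 1,000 petaflops
print(one_exaflops // TERA)       # 1000000    -> one million teraflops
print(one_exaflops // GIGA)       # 1000000000 -> one billion gigaflops
```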

He made a two-thousand-dollar bet with Thomas Lippert, the director of the Jülich Supercomputing Centre in Germany, that an exaflops computer using CMOS technology will NOT exist before 2020.

The TOP500 data shows clearly that we are approaching the end of an era: Moore's Law will no longer apply. Two new technologies, 3D integrated circuits and silicon photonics, will deliver more flops per unit of power consumed and will delay the end of Moore's Law. In Dr. Simon's opinion, however, this will not happen sooner than 2020.

I like the exoticism of the term "non-von Neumann" architecture. What it implies is that the original architecture devised in 1952 by John von Neumann (his real name was Neumann János Lajos) and Julius Robert Oppenheimer in order to solve physics problems is still valid today, and is used by every enterprise, cloud computing, and social network data center. This is where Moore's Law applies.

What if Peter Thiel and Elon Musk had attended this meetup?

They did not. But I like to think that sitting in this meetup were some future entrepreneurs who will be as famous as Peter and Elon. When they reach fame, they can credit Horst with mentoring them.

Here are two ideas that fit the "zero to one" companies described by Peter Thiel in his book of the same title.

A brain computer

It is possible to build a supercomputer with the same processing power as the human brain. Such a computer would have between 1 and 10 exaflops and 4 petabytes of memory. It would consume 20 to 30 megawatts of power, roughly one million times more than the brain itself. Such a computer would not reach consciousness, but it would process the brain's electrical signals at the same speed.
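A quick check of the "one million times" claim (my arithmetic; the roughly 20-watt figure for the brain is a common estimate, not a number from the talk):

```python
# Assumption: the human brain runs on roughly 20 watts.
BRAIN_WATTS = 20

for machine_watts in (20e6, 30e6):          # 20 to 30 megawatts
    print(f"{machine_watts / BRAIN_WATTS:,.0f}x the brain's power draw")
# -> 1,000,000x and 1,500,000x: on the order of one million
```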

Peter Thiel co-founded Palantir, one of the most successful big data companies in the world. Following Oppenheimer and von Neumann's advice that the architecture must follow the applications being run, such hardware could complement what Palantir does. The idea of the brain computer is not new; about six projects around the world are trying to implement it. What is new (blame me for that) is the association with Palantir: hardware made specially for their super-secret applications, which one day will benefit the rest of us.

Time magazine today has a cover story with a cute baby that says: "This baby could live to be 142 years old." If this prophecy comes true, it will be because the brain computer was built.

A formidable face recognition system

The IBM SyNAPSE project, led by Dharmendra Modha, is part of cognitive computing:
IBM built a new chip with a brain-inspired computer architecture powered by an unprecedented 1 million neurons and 256 million synapses. It is the largest chip IBM has ever built at 5.4 billion transistors, and has an on-chip network of 4,096 neurosynaptic cores. Yet, it only consumes 70mW during real-time operation — orders of magnitude less energy than traditional chips.
IBM Brain Infographic
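The quoted totals hang together if you assume IBM's published layout of 256 neurons per core on a 256x256 synaptic crossbar; a small check of the arithmetic (mine):

```python
cores = 4096                      # neurosynaptic cores, from the quote
neurons_per_core = 256            # IBM's published per-core figure
synapses_per_core = 256 * 256     # each core is a 256x256 crossbar

print(cores * neurons_per_core)   # 1048576 = 2**20, rounded to "1 million neurons"
print(cores * synapses_per_core)  # 268435456 = 256 * 2**20, IBM's "256 million synapses"
```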

Horst gave us an idea: imagine a cell phone based on this technology, able to google every face and tell you everything about that person. His example was going to a conference: someone greets you, you greet back, but you don't have a clue who this person might be.

I go a bit further and imagine someone going to a party where the phone identifies his wife's third husband. But all the defense organizations and intelligence agencies in the world will have a much richer imagination.

Saturday, February 07, 2015

Deconstructing Joyent's latest container technology

The word "deconstruction" reminds me of the 1997 Woody Allen movie Deconstructing Harry. Merriam Webster defines "deconstruction" as
a theory used in the study of literature or philosophy which says that a piece of writing does not have just one meaning and that the meaning depends on the reader 
Here, I am the reader of Bryan Cantrill, Joyent's star CTO. There is more than one meaning. What does it mean for me, the blogger? This post is a personal perception from an outsider.

For easy reference, I embedded Bryan Cantrill's presentation from the Docker Meetup in Seattle, WA, on January 29, 2015. I watched the video three times. I am not a developer. I am not a sysadmin. But throughout my entire career, I have worked with top engineers, thinkers, and entrepreneurs.



I compared Bryan's style to Nikola Tesla's. Bryan's idea of containers is a continuous mental prototype that is always in a virtual state.

Bryan made me "instinctively step beyond the limits of the small world of which I had knowledge, and I saw new things."

Virtualization stemmed from the idea of clouds, or more precisely, from the idea of utility computing. This started with grids in high-performance computing. See here a memo dated February 22, 2003, proposing the "Sun Unit of Computer Power (SUNCOP)" to Sun's CEO.

I learned from Bryan that hardware virtualization limits tenancy, elasticity, and performance, because each virtual machine mimics hardware: there is one OS at the bottom (also called the hypervisor) and one OS inside. The OS at the bottom knows nothing about the apps running, and the OS inside cannot allocate DRAM, CPUs, and other physical resources.

Similarly, application-level virtualization makes some "toy" applications easy for developers, but places "serious" developers of sophisticated apps in a "straitjacket." Bryan calls this the "Google App Engine" problem. No good, he says.

Bryan is a master of figures of speech, irony, and metaphors, which he uses to horrify or delight his mesmerized audience.

Finally, the best solution was to virtualize the OS itself. Joyent created SmartOS, which has Zones, ZFS, DTrace, Crossbow, and more.

Then the Docker revolution arrived. Docker runs on Linux, and SmartOS is Unix. This created the classic Shakespearean dilemma:
To be, or not to be: Could we somehow natively emulate Linux and run Linux binaries directly on the SmartOS kernel?
This was the question. Many people said it was impossible. But it became possible through a spectacular breakthrough by two engineers: David MacKay from the illumos community and Jerry Jelinek from Joyent.
I dedicate this fortune cookie to the Joyent engineers and to David MacKay.
Bryan's ending remarks are:
We believe that the developer ease of Docker combined with the proven production substrate of SmartOS and SmartDataCenter yields the best of all worlds • The future of containers is one without compromise: developer efficiency, operational elasticity, multi-tenant security and on-the-metal performance!

What impressed me most at the end

  1. The ability to run Docker on SmartOS
  2. The ability to run Linux applications directly on Joyent without the need of porting
#1 makes #2 possible, which is huge. It seems to eliminate the biggest hurdle Joyent faced in the past in winning new customers, because the world today is a Linux world.

Bryan said in the video that this is perhaps the most important demo of his career. I think it is.

The devil is in the details

I see Joyent's ads all over:
Real Time Cloud, ZFS, KVM, DTrace. Deploy now!
Deploy now, what?

I know what ZFS is, because I worked at Sun. Probably most developers know what ZFS is. Or do they? Do people on the street know what ZFS is? I suppose we don't care about the people on the street when talking containers. That is not wise: big potential investors and large customers are not sysadmins or developers, and some venture firms trust their instincts more, because VCs are not developers or DevOps themselves.

ZFS means: "You don't care about physical disk drives. The data will always be there for you."

But the best explanation of ZFS comes from a show Sun staged in the video Ultimate ZFS Overview, which takes one minute to make the point: they destroy a hard disk and no data is lost.



A simple show like that made everyone see ZFS as indestructible storage.

Assume that the April 2015 release of the new Joyent Container Services is delivered successfully: how will Joyent attract Docker developers and sysadmins?

Here is the tweet exchange between Randy Bias and Bryan Cantrill.
Making Joyent a winner will require mobilizing all the marketing strategies, a User Experience (UX) focus, alliances, and shrewd strategy from a CEO whose magic wand makes it all happen in sync.

So what would I do? I would test Joyent Container Services by recruiting beta testers from the bottom (not the top, not the middle, the bottom) 25% of sysadmins and aspiring developers. They are the ones who have just started their first job, or who are looking for one.

If they like it, then everyone will like it.

Competition

Joyent is not the only player. Many offer Docker orchestration.

We have (1) Fig, the container orchestration tool that Docker bought, (2) Google's Kubernetes, and (3) the new Amazon EC2 Container Service. VMware is in it, teaming with Docker, Google, and Pivotal "to simplify the adoption of containers." Giant Swarm is in it; IBM is in it.

Forrester adds HP, Red Hat, CenturyLink, and of course Microsoft to the list above.

Tutum is an elegant player, and in release 0.11.5 they just introduced persistent storage. Then there is CoreOS, which wants to disrupt and claims it can run Docker too. But that is another conversation.

My summary

I separate the players in the Docker container market into two groups:
  1. Big Boys:  They have lots and lots of customers and are afraid to lose them (AWS, IBM, HP, Microsoft, Google, VMware).
  2. Young Boys: They do not yet have a critical mass of Docker customers, but they are lean and hungry, and best equipped to offer personal, easy-to-use services for running containers (Docker, Joyent, Tutum, Giant Swarm, and so on). Their goal is to partner with the Big Boys or simply steal their disaffected customers.
 The Big Boys will pay tons of money for the successful Young Boys.
My theory is simple: the winner will be the one who is easiest to use and gives the fastest gratification to the user/developer.

Joyent has a tradition of making the impossible possible. So there is a second fortune cookie that fits their momentum:

Sunday, February 01, 2015

Tutum is set to dockerize the Enterprise.


Read Marc Andreessen's answer to "How big a factor is luck in startup success?" on Quora.

Bryan Lee, in charge of growth at tutum.co, introduced me to Borja Burgos, the co-founder and CEO. Borja and I met this week at La Boulange, a cafe on University Avenue in Palo Alto. He travels to the Bay Area from his head office in New York City every six weeks or so.
Photo 1: Borja (right) and Miha in La Boulange cafe, Palo Alto.

Where does the name "tutum" come from?

No, it is not a "tamtam" derivative.

It has no connection to actor Channing Tatum or actress Tatum O'Neal.

If you google with patience, you will discover the motto of the Scottish Crawford clan:

Tutum te robore reddam (I will give you safety by strength)

Voila! Tutum means safety in Latin, a very fitting name for a company dealing not with just one container but with lots of Docker containers, and focusing on the enterprise and huge data centers.

The International Spaniard

Borja Burgos-Galindo is a citizen of the world. His parents lived for many years in Germany, Belgium, and the US (where Borja was born). His brother lives in Kuala Lumpur, Malaysia. Borja worked at Capgemini and Wells Fargo, and studied at Carnegie Mellon, Georgia Tech, and the University of Hyogo in Japan. He lived in Chile and Argentina, and before that in Kobe (Japan), Pittsburgh, San Francisco, Warsaw, Atlanta, Charlotte, Mons (Belgium), and Madrid. Now he has settled in Williamsburg, Brooklyn, "one metro station away from Manhattan," and has an apartment a 5-minute walk from the Tutum US office. He is 28 years old!

As a graduate student at the University of Hyogo in Japan, he noticed a security flaw in the university's card access system. Although it was not his area of expertise, he researched NFC technology and quickly familiarized himself with it. Borja developed a patch to improve the security of the system and provided it to the university at no cost. "I'm proud of how things turned out," he says.

How did Tutum start?

Buenos Aires

"After finishing my master degree at Carnegie Mellon University (CMU), I was interested in cloud containers and security. I wanted to explore this potential in Enterprise and I moved to Chile. Then I contacted Fernando Mayo Fernández, the current CTO and co-founder. We know each other from elementary school. We got two small seed funds from NXTP Labs in Argentina Start-Up Chile in Santiago . So we moved to Buenos Aires starting to develop from October 2013 a cloud container as a service service, something that will sit between the IaaS and PaaS. We were three engineers on this project: Bernardo Pericacho, Fernando and I. We wanted to create something that manages the IaaS and simple enough to use like Heroku"

Borja knew containers would play a key role in the cloud. Google, IBM, and all the big players had started to develop container services.

New York

"After three month, the program ended and all three of us moved back to Madrid. From Madrid we approached Techstars startup accelerator in New York City and we got funded. Fernando, Bernardo and I moved to US. Bryan Lee, a friend of mine from Georgia Tech joined us to deal with growth. Two more engineers joined , one is Hongling Feng, a close friend a from CMU
Photo 2: From left: Bryan Lee, Fernando Mayo, Bernardo Pericacho, Borja, Graham Caroll and Hongling Feng
This was May 2014, not even a year ago. I imagine the Tutum team shares a similar sense of humor, ambitions, passion, and dreams. This adds to their technical creativity. We all know incubators and VCs look first at the people and how they work together. Tutum is the kind of dream candidate team that any experienced VC firm will pay attention to.

Photo 2 is artsy; they all look like the cast of a hit Broadway show. Here is how they look in everyday life:
Photo 3: Drinking beer from left: Daniel Zavala, Alberto, Fernando, Pablo, and Borja.

What does Tutum offer?

"By June 2014, we were offering our hosting and services of containers. By that time Docker grew immensely since I fist heard of them in March 2013, when they were a small operation. So by Septembers 2014, we stop including the hosting as part of our service......  We still run in beta and we expect to go out of beta by June 2015."

Docker and Solomon Hykes' name were everywhere. This clarified Tutum's new direction.

This is what Bryan Lee wrote in a September 2013 blog entry, a few weeks after Tutum reached $2.7 million in funding:
The objective of Tutum is to become a complete container platform on which to build, deploy and manage your containerized applications. The end goal is to provide a service that is simple, but does not compromise on control and flexibility.
Photo 4: The historic meeting of Tutum with Docker
From left: Ben Golub, CEO of Docker; Borja, CEO of Tutum; Fernando, CTO of Tutum; and Solomon Hykes, CTO of Docker.

Docker and Tutum

Lucas Carlson of CenturyLink Labs asked Borja in an interview:
Why not just run your own Docker? The whole premise of Docker is that you can set it up very easily. Any Linux distribution can run a Docker daemon, and you should just be able to set up Docker and run containers anywhere. Why not just set up a Digital Ocean or a CenturyLink Cloud virtual machine, put Docker on it, and deploy your apps that way? Why use a hosted Docker?
This  "Why?"  is from Simon Sinek   : "People don't buy what you do, they buy why you do it."

Borja's reply is crystal clear:
 Docker is great, they’ve been able to put a great interface on some primitives and build a great open source project. But at the end of the day the Docker container is nothing but a building block. An awesome building block, but a building block. Meaning the moment you start trying to do containers at scale, you start running into problems. How do I run containers in two different hosts for redundancy purposes? How do I get visibility into what containers deployed into which hosts? How am I supposed to load balance the traffic that’s coming to the different containers running across multiple hosts, across multiple clouds, right? And these problems are not solved by the basic building block, which is the Docker container. Hence, the reason for having something on top. That layer of orchestration, management, deployment, and that is what Tutum is.

When not everything can be dockerized

This is another reason to use Tutum in specific data centers, says Borja:
Today anything that satisfies the 12-factor stateless application, it’s a great use-case and we work seamlessly. But if I want to run MySQL today with Tutum, the data that has been stored in MySQL would die when that container dies. Now that isn’t great for anything close to production level systems. So that is the one thing we need to keep developing and working. We hope to have a solid solution for that in the future,  a persistent storage solution.
See Note (1).


The Stack Overflow thread "Does it make sense to dockerize (containerize) databases?" has a clear answer on why the ideal Docker solution needs to have data inside the container, a problem not solved yet:
One of the main reasons for containerizing your databases is so that you can have the same consistent environment for your entire app, not just the stateless parts, across dev, staging and production. A consistent environment is one of the promises of docker, but when your database lives outside this model, there is a big difference that can't be accounted for in your testing. Also, by containerizing your database as well as the rest of your app, you are more likely to be able to move your entire app between hosting providers (say from AWS to Google Compute). If you use Amazon RDS, for example, even if you can move your web nodes to Google, you won't be able to move your database, meaning that you are heavily dependent on your cloud provider.
Another reason for containerizing data services is performance. This is particularly true for service providers (all the database as a service offerings- e.g. rackspace cloud databases- run in containers), because containers allow you to provide service guarantees that aren't possible using virtualization, and running one database per physical machine is not financially viable. 
In an enterprise environment, data is a grey area when deploying containers, at least for now.
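To make the stateful-container problem concrete, here is a minimal sketch of the classic workaround: mounting a host directory as a volume so the database files outlive any one container. It uses today's Docker SDK for Python rather than Tutum's API, and the host path and password are hypothetical:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# /srv/mysql-data lives on the host, so the database files survive when the
# container is stopped, removed, and recreated.
container = client.containers.run(
    "mysql:5.6",
    detach=True,
    environment={"MYSQL_ROOT_PASSWORD": "example"},
    volumes={"/srv/mysql-data": {"bind": "/var/lib/mysql", "mode": "rw"}},
)
print(container.short_id)
```

The catch, as the Stack Overflow answer hints, is that the data is now tied to one host, which is exactly the gap a managed volume feature like Tutum's (see Note (1)) aims to close.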

It is nice to know that Tutum, whose CEO as a student noticed a security flaw in the Hyogo university's card access system and fixed it, will take care of container deployment for your enterprise data in the best possible way.

Is Docker worth using for managing apps on a personal computer?

This is actually the essence of the Docker revolution. Here is a reply to this question on Quora from Zachary Smith, CEO of Packet.net:
I think that Docker is a fantastic tool for developers looking to install and run packaged apps/services on their local workstations and run in sync with a complex staging + production environment. We've been running a full dev lifecycle this way at Packet (packet.net, Premium Bare Metal and Cloud Hosting) and, after getting through some initial hiccups, it's been a game changer for our engineers.

Who needs Tutum?

All Fortune 500 companies need Tutum, says Borja. But needing is one thing; buying is another. We live in an era of well-funded, developer-focused startups, and Tutum's container offering must be easy to use and must create passion in its users.

In my humble opinion, Docker, the open source project, creates the passion. In December, Docker launched Docker Hub Enterprise, a place where both developers and system administrators can go to find Linux containers that have already been formatted for specific business purposes. Cool, but using those containers requires easy-to-use orchestration, which can be offered by external Docker-focused companies.

Developers and DevOps are people too, and having technical skills does not mean they will use just anything. They have preferences, likes, sentiments, spouses. Tutum must enter their lives and make their work a pleasure.

Tutum's size and capitalization are tiny compared to the requirements of the big guys as enterprise customers. First, they need alliances.

Tutum is actively hiring young UX graduates in this hot new field. See the job openings on their website.

Native Support

Tutum claims it can work on any IaaS platform using an agent, but it has native support on certain platforms, which may become part of the midnight Docker dream. For now, Tutum has native support for AWS, Digital Ocean, and Azure.

Conversations with Joyent

Borja met with Bryan Cantrill, Joyent's CTO, to create a Tutum landing place on Joyent's infrastructure and data center software.

Joyent SmartOS containers will run Linux applications. SmartOS containers are the successors of Solaris containers, called Zones. Zones are similar in concept to jails (an enhanced chroot), but they also take advantage of features included with ZFS, namely snapshots and cloning. This makes it possible to quickly clone or duplicate a current zone into a new zone.
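The snapshot-and-clone mechanics are plain ZFS. Here is a minimal sketch of the two underlying commands driven from Python; the dataset names are hypothetical, and this shows the generic ZFS mechanism rather than Joyent's own tooling:

```python
import subprocess

def zfs(*args):
    """Run a zfs(8) subcommand, raising on failure."""
    subprocess.run(["zfs", *args], check=True)

# Take a point-in-time, read-only snapshot of a zone's dataset...
zfs("snapshot", "zones/webzone@golden")
# ...then turn it into a writable clone, the basis of a duplicate zone.
zfs("clone", "zones/webzone@golden", "zones/webzone-copy")
```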

See Note (2).

Also, for Tutum on Joyent there is no requirement to start a separate Linux server in order to use Docker; one can deploy directly on the Joyent IaaS.

A Joyent partnership with Tutum creates credibility in the Fortune 500 space. The resulting reliability will be music to the ears of the security-obsessed CIOs who want Docker but are afraid of its newness. The Tutum newsletter includes Docker news for readers who have a lot of questions but are afraid to ask. Tutum is always gentle and non-elitist, and gives users confidence.

At least, this is how I felt.

My take.

Tutum is an extraordinary team of motivated dreamers who are living the best time of their lives. There is no need to penetrate all Fortune 500 accounts: with only five happy Fortune 500 customers, Tutum's equity will go ballistic.

Experience has shown that competent technical entrepreneurs without previous business experience can learn business fast. They need to execute an original enterprise sales plan, have clarity, and then ask for additional funding.

Looking into my crystal ball, even Docker itself might be interested in Tutum, which will have a lot of suitors.

I don't make predictions. As Bryan Cantrill of Joyent says, predictions in essence just log our present opinions: what we think the future will be at a given moment. He calls 2015 the year of Docker.

It would make sense for Docker to merge one day with Tutum and create a billion-plus equity. They share the same European culture, planted in the rich soil of entrepreneurship here in the US. I can see Tutum becoming a top acquisition target in 2015, whether or not an acquisition actually materializes.

An independent blogger like me is like "le bouffon du roi" (the king's jester): we have a license to say what we think, as we have no worried investors fretting on our backs.

Tutum is set to dockerize the Enterprise. You had better take them seriously.

Note (1)  

On February 4, 2015, Tutum shipped Release 0.11.5:

"Volumes: directories to hold reusable and shareable data (by the same service when it's redeployed, or by other services) stored outside of the container's filesystem that can persist when containers are terminated."

This essentially solves the previous lack of persistent storage in Tutum.

Note (2)

Bryan Cantrill's revolutionary Joyent presentation and demo on Docker and the future of containers in production explains it all. The video, worth watching, is here:

We believe that the developer ease of Docker combined with the proven production substrate of SmartOS and SmartDataCenter yields the best of all worlds • The future of containers is one without compromise: developer efficiency, operational elasticity, multi-tenant security and on-the-metal performance!
