My 5 Enterprise Cloud Predictions for 2013

I believe that this is the year when the enterprise will find its way to the cloud.

The mega Internet sites and applications are the new-era enterprises, and they will become the role models for the traditional enterprise. IT needs remain the same with regard to scale, security, SLAs, and so on. However, the traditional enterprise CIO has already set the goal for next year: 100% efficiency.

The traditional CIO understands that in order to achieve that goal, IT will need to start doing cloud, make sure that IT resources are utilized properly, and ensure that his teams move fast.


Amazon AWS is the Cloud (for now anyway)

Every day I talk, write and comment about the “Cloud”. Every time I mention the cloud, I try to add the name of the relevant cloud operator: “Rackspace Cloud”, “MS Cloud” (Azure) or “HP Cloud”. Somehow none of these cloud titles sound right to me; the only title that really works for me is the “Amazon Cloud”. In this post, I will elaborate on the competition in the IaaS market and explain why I think this is so.

HP vs. Amazon AWS

Earlier this month, HP announced the release of a public cloud offering based on OpenStack, in public beta. Zorawar Biri Singh, SVP and GM for HP Cloud Services, admitted that HP is really late to market, and he also added that:

HP believes that startups – particularly those that sell services to enterprises – will want to move off Amazon as they grow but won’t want to build their own data centers. Read more

Last year I attended the HP cloud tech day. It was amazing to see this giant fighting for its life on the IT battlefield. Being able to promote a public cloud is one thing, but you also need to select your words carefully. Singh's statements aren't in line with a public cloud strategy; on the contrary, they show that HP's state of mind is not ready for delivering a true public cloud. Establishing a public cloud is one thing, but leading with the right strategy is what counts. Trivial, isn't it?

We’re not necessarily the first place a startup is going to look for in getting going. But I can assure you we’ve also got the type of global footprint and an SLA and a business-grade point of view that understands the enterprise. That’s what we’re betting on.

I strongly suggest Mr. Singh be more careful. These types of statements remind me of Kodak: they claimed to have a strong hold on the market, maintaining that as people shoot more digital photos, eventually they will print more. In January of this year, the 131-year-old company filed for bankruptcy.

SAP on Amazon AWS

AWS and SAP Announce Certification of AWS Infrastructure for SAP Business All-in-One Solutions. Research Study Shows Infrastructure Cost Savings of up to 69% when Running SAP Solutions on AWS. Read More

Due to market demand forces, SAP was forced to find its way to the cloud. In 2007, SAP announced the launch of BusinessByDesign, its on-demand (SaaS) initiative, without much success, while its customer base drifted to companies like Salesforce and NetSuite. This month SAP finally announced that it believes in the public cloud by making an interesting supportive move and partnering with the Cloud itself: Amazon AWS.

Customers now have the flexibility to deploy their SAP solutions and landscapes on the scalable, on-demand AWS platform without long-term commitments or costly capital expenditures for their underlying infrastructure. Learn more about the offering. Read More

This SAP certification strengthens the AWS position in the enterprise (for your attention, Mr. Singh). IMHO, SAP made a great decision to “go with the flow” rather than resist it.

OpenStack vs. Eucalyptus for Amazon AWS

OpenStack was initiated by Rackspace and NASA in 2010. Today this open source cloud project is supported by about 150 IT and hardware companies, such as Dell and HP, which trust the platform and are investing in building their public clouds with it.

It’s maybe two or three years before OpenStack will have matured to the point where it has enough features to be useful. The challenge that everyone else has is Amazon is not only bigger than them, it’s accelerating away from them.   –Netflix cloud architect Adrian Cockcroft

In March of this year, the Amazon guys published their belief in the private and hybrid cloud by announcing a signed alliance with Eucalyptus, which delivers open source software for building an AWS-compatible private cloud. In April, Eucalyptus announced its $30M series C funding. Together with SAP's joining of forces with Amazon, this accentuates the fact that Amazon AWS is very serious about conquering a share of the enterprise IT market (again, for your attention, Mr. Singh). This week I attended the IGTCloud OpenStack 2012 summit in Tel Aviv. I was hoping to hear some news about the progress and improvement of this platform, and I found nothing that can harm the AWS princess for the next few years. OpenStack is mainly ready for vendors who want to run into the market with a really immature and naive cloud offering. I do believe that the giant vendors' “OpenStack Consortium” will be able to present an IaaS platform, but how much time will it take? Does the open cloud platform perception accelerate its development, or the other way around? Still, for now, Amazon is the only Cloud.

Microsoft and Google vs. Amazon AWS

This month Derrick Harris published his scoop on GigaOm: “Google, Microsoft both targeting Amazon with new clouds”. I am not sure whether it is a real scoop; it is kind of obvious that both giants strive to find their place in Gartner's Magic Quadrant report:

IaaS by Gartner

With regard to Microsoft, the concept of locking in the customer is in the company's blood and has led the MSDN owner to present Azure with its “PaaS first” strategy. I had several discussions with the MS Azure guys last year, asking them to check the “trivial” IaaS option of self-provisioning a cloud Windows instance. Already back then they said that it was on their roadmap and would soon be available.

This month AWS CTO Werner Vogels promoted the enablement of RDS services for MSSQL on his blog, noting:

You can run Amazon RDS for SQL Server under two different licensing models – “License Included” and Microsoft License Mobility. Under the License Included service model, you do not need to purchase SQL Server software licenses. “License Included” pricing starts at $0.035/hour and is inclusive of SQL Server software, hardware, and Amazon RDS management capabilities.

Is that good for Microsoft? It seems that Amazon AWS is the one to finally enable Microsoft platforms as a pay-per-use service that is also compatible with on-premise MS application deployments. One can say that by supporting this new AWS feature, Microsoft actually supports the natural evolution of AWS into a PaaS vendor, putting its own PaaS offering at risk.
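To make the pay-per-use model tangible, here is a hypothetical sketch of provisioning such an instance with boto, the common Python AWS library of the era. The identifiers, sizes, engine string and the license_model value are my own assumptions to verify against the AWS documentation, not a tested recipe:

```python
# Hypothetical sketch: provisioning RDS for SQL Server under the
# "License Included" model with boto. All identifiers and parameter
# values below are illustrative assumptions.
import boto.rds

conn = boto.rds.connect_to_region('us-east-1')

# 'license-included' bundles the SQL Server license into the hourly
# price, so no separate Microsoft license is purchased.
db = conn.create_dbinstance(
    id='demo-mssql',               # hypothetical instance name
    allocated_storage=20,          # GB
    instance_class='db.m1.small',
    master_username='admin',
    master_password='change-me',
    engine='sqlserver-ex',         # SQL Server Express edition
    license_model='license-included')

print(db.status)
```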

IMHO, Google is the hope. The giant web vendor has the XaaS concept running in its blood, so I believe that once Google presents its IaaS offering, it will be a great competitor to both AWS and the OpenStack camp. Another great advantage of AWS over these guys, and others, is its proven economies of scale and pricing agility. Microsoft and Google will need to take a deep breath and invest vast amounts of money to compete with AWS, not only to build IaaS vendor experience but also to improve on pricing.

Final Words

I could go on and discuss the Rackspace cloud (managed services…) or the IBM smart (enterprise…) cloud. Each of these great clouds has its own degree of immaturity in comparison to the Cloud.

Last week I had a quick chat with Zohar Alon, CEO at Dome9, a cloud security start-up that has implemented its service across a respectable number of cloud operators.

I asked Mr. Alon to tell me, based on his experience, whether he agrees with me about the state of the IaaS market and the immaturity of the other cloud vendors in comparison to the AWS cloud. He responded:

The foresight to include Security Groups, the inbound little firewalls that protect your instances from most network threats, was a key product decision made early on by Amazon AWS. Even today, years after Security Groups launched, other cloud providers don’t offer a viable comparable.

The cloud changed the way we consume computation and networking, so we can’t (and shouldn’t be able to) call our cloud provider and ask them to “install an old-school firewall in front of my cloud”. Amazon AWS was the first to realize that, and turned what looked like a limitation of the cloud into an advantage and a competitive differentiator! At Dome9 we work with DevOps running hundreds of instances in a multitude of regions and offer them next generation control, management and automation for their AWS security, leveraging the AWS Security Groups API.
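To show how lightweight these “inbound little firewalls” are to drive programmatically, here is a minimal sketch using boto; the group name, port and CIDR range are my own illustrative choices:

```python
# Minimal sketch: creating an AWS Security Group and opening a single
# inbound port with boto. Names and CIDR ranges are illustrative only.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Every instance launched into this group inherits its rules.
web = conn.create_security_group('web-tier', 'Front-end web servers')

# Allow inbound HTTP from anywhere; everything else stays closed by
# default -- the "deny by default" posture Alon refers to.
web.authorize(ip_protocol='tcp', from_port=80, to_port=80,
              cidr_ip='0.0.0.0/0')
```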

I am sure that this basic security capability must be delivered by the cloud operator itself. A cloud company is a new perception; it is not technical, it is strategic. Amazon follows its strategy with some basic cloud guidelines: continuous deployment, fast delivery, API first, a low level of lock-in, full visibility and honesty, and so on. When Amazon AWS started in 2006, people didn't understand what the company was doing, though its leaders understood the business potential. Without a doubt, for now anyway, the Cloud is Amazon.

(Cross-posted on CloudAve Cloud & Business Strategy)

My View on CloudConnect 2012

Last week I attended one of the most popular cloud technology conferences in the world: CloudConnect. The conference started about four years ago, and attending the event gave me a clear understanding of the market's maturity and rhythm of evolution. Check out the following sections for the main points on what I heard and learned:

>  >  >  >  >  Cloud Performance 

Underlying infrastructure performance, round-trip time, bandwidth, caching and rendering count as the major factors in an online service's performance. In an interesting presentation by @joeweinman (known for his famous “Cloudonomics” theory), it was claimed that latency carries the greatest weight among these factors. I encourage you to check out his new research, As Time Goes By: The Law of Cloud Response Time, which presents some good formulas, methods and considerations with regard to online services' performance and latency (including simple facts, for example, that people tend to prefer selecting from fewer options on a page, so you can put less content on a page and achieve better browsing performance).

“Multi-tenancy leads to noisy neighbor syndrome,” noted @jungledave, founder and CEO at SolidFire. It is known that the lack of SSD storage components in cloud offerings (mostly due to their high cost) results in uncertainty about cloud storage performance expectations. I invite you to listen to @neovise's recent podcast with Dave, which discusses solid state disks (SSD) and cloud computing. FYI, Amazon AWS has already caught on to the need for fast and robust storage capabilities and deployed DynamoDB on SSDs, which offer predictable performance and greatly reduce latency across the board.

The best presentations are like movies; they should be based on real cases (keep that message in mind, I will come back to it later). One such case is Netflix. Netflix cloud architect @adrianco presented methods and principles of scaling data in the cloud, including Big Data management, availability, performance, security, and more. I suggest checking out his presentation (a cool Prezi one) to get the list of vendors and AWS components Netflix uses to optimize its data delivery over the cloud.

It was funny that only the last session's presentation, by @lmacvittie, pointed out the “obvious” first step: start by understanding what causes the performance issues, and only then try to solve them. I say “obvious” because the appealing ease of provisioning cloud apps and resources leads to the “unknown cloud” symptom (due to uncontrolled sprawl), which contributes to performance uncertainty. The “unknown cloud” issue found great support in the next day's morning keynote by @gevaperry, who noted that “a lot has already been said about CIOs who don't know about their own cloud use”. Geva presented a survey that clearly shows that the cloud adoption decision in an enterprise is made by the development or business units and not by the IT team. Are you surprised? Read more.

From my deep familiarity with the market, I can confidently add that despite cloud consumers’ recognition of the need to “cut through the fog” of the cloud, proven ways to actually do so are not really available in today’s young market. 

>  >  >  >  >  DevOps doesn’t exist 

I attended the panel “In Search of Mad Cloud Skills”, led by the cloud-famous @DavidLinthicum and composed of four IT leaders. David presented some great but simple questions that the participants seemed to struggle to answer. One trivial question, “What do you need to find in a DevOps candidate?”, brought the discussion around to the obvious need for someone with development skills who also understands the business needs. The title of the session was aligned with the actual comments of the panel members, who said it is difficult (“Mad”) to find the right skills for their DevOps team.

For me, this session brought an end to the debate of NoOps vs. DevOps. The “DevOps team” is in fact a development team that plays with virtual blocks in the cloud kindergarten. Integrating the product with the cloud is actually a task for R&D under the auspices of the CTO. That leads to the understanding that the enterprise CIO is actually the new enterprise CTO; if we talk about an ISV, then the CIO holds another position as a senior R&D team leader. NoOps rules, and the CIO should look for architects and developers. Learning the building blocks of the cloud and its APIs is a task for the R&D (I remind you: “Research and Development”) team, the same as learning the overall software offering and the supported business workflows.

>  >  >  >  >  The Openness of Cloud 

Wednesday's keynote included a panel with Red Hat, Citrix and Rackspace, moderated by @acroll (a great moderator and presenter), discussing the “Open” perception in the cloud.

The great discussion about the openness of the cloud actually led to some online #ccevent tweets including the phrase “open washing”, strengthening the fact that some of the traditional mega vendors are actually “cloud washers” that present an “enterprise cloud” which is in fact a hosted environment supported by traditional professional services. (You can check out my opinion of HP's cloud offerings in a past post.)

“An enlightening panel at #ccevent was the “open cloud” conversation but not for the right reasons. ‘Open washing’ season has started.” tweeted @swardley 

These vendors struggle not only with the fact that Amazon is taking big chunks of their main market but also with the fact that it is hard for them to prove the profitability of a real cloud delivery offering based on a real pay-per-use model.

“Citrix: we hate VMware. Red Hat: we hate Microsoft. Rackspace: we hate Amazon”, tweeted @acroll once he got off the stage

The cloud put the need for “Open” on the table. It makes IT consumers (including traditional enterprise ones) look for open systems, including open source ones. The cloud forces IT vendors to prove a low level of lock-in and a robust API that enables their customers to update and customize the application at a low cost with no friction. Check MS Azure's marketing messages regarding their efforts to support open source frameworks (though I am not sure that they are really “open”).

“Open” is definitely one of the important criteria in deciding to go with a solution vendor. The “open” cloud vendor shares its code with the community in order to help others, including its own customers, come up with better solutions. The “open ISV” isn't afraid to “lose” its proprietary code to competitors, and finds that being “open” actually increases awareness and positive views of its brand, as well as the maturity of its offering.

>  >  >  >  >  “Amazon is Snow White” said @adrianco 

At first I was not sure why Amazon didn't exhibit at the famous CloudConnect conference, but after asking several important people this question, the simple conclusion is that, as the strongest market leader, Amazon can afford to leave the marketing efforts to the crowd. As the beautiful princess in town, you attend only your own parties, and you definitely don't want to position yourself among the dwarfs.

CloudConnect was really about the major IT market disruption Amazon has been leading for the past few years. In almost every session, the discussion about the cloud was actually a discussion about the Amazon AWS offering and its design partner, Netflix. Every other offering, such as OpenStack, the Rackspace cloud and the IBM cloud, was constantly compared with the AWS cloud. The thought of suggesting they change the name CloudConnect to AWSConnect never entirely left my head (although this might make some of the @Clouderati guys really uncomfortable).

Q: What did CloudConnect miss?  A: Real Case Studies

I noted above that great movies are based on real stories; the same applies here. I wasn't in all the sessions, but being a dedicated follower of #ccevent and listening carefully to some of the leading thinkers in the industry, I think that most of the sessions were still on a theoretical level rather than a practical one. You are welcome to check out the conference presentations.

It is not surprising that the best sessions were those presented by organizations that have already found their way to the cloud, whether fully public (Netflix) or mostly private (Zynga zCloud). I suggest you find the lecture by Zynga's CTO of Infrastructure in the conference's recorded videos list.

Personally, I think it would have been great to have more sessions and stories based on actual cloud architectures, shifting legacy applications to the cloud, and actual stories of ROI optimization. The market is still immature and on shaky ground. Vendors don't really know how to present their offerings, and even the simple phrase “cloud cost” has several interpretations. ISVs and enterprises are misled by the mega vendors; this is one of the major factors slowing down the pace of cloud adoption. If six months ago I would have said 2-3 years to reach market saturation, CloudConnect made me more realistic: think 3-5 years.

CloudConnect was a great opportunity for me to meet all the cloud rockstars I had been tweeting with over the last year, great cloud evangelists. Someone said he felt like he was walking through his Twitter home feed. I found the cloud in Twitter: great performance, mobile, open and available. It proves the cloud serves my actual needs for networking, communications and knowledge.

Yours,

@iamondemand

The Cloud in HP’s Cloud

Last week I was invited to the HP Tech Day at HP's campus in Houston to learn and hear more about the giant's cloud offering. I thank HP and Ivy very much for the invitation and for a great event where I was able to learn more and see these clouds for real. I had the privilege of meeting savvy, professional people. It is always great to see people who are enthusiastic about their jobs and proud of their company. Let me share HP's cloud from my point of view.

> > > The EcoPOD

HP's guys took me and my fellow bloggers on a great journey inside HP's cloud. The most fascinating adventure for me was the HP EcoPOD, an out-of-the-box, ready-made hosting/cloud infrastructure creature. The finish of the product seems to be perfect art, and without a doubt HP is still a great infrastructure market leader. The EcoPOD units serve IaaS providers, huge enterprises and mega websites. The investment in buying this ready-made bank of servers can be stretched over a 3-to-5-year commitment, so you can actually consider it a subscription-based service. The HP private cloud offering ruled the tech day, including support for bursting internally or over to a public cloud, supported by Savvis. Read more about HP's cloud bursting on TechTalk by Philip Sellers.

> > > The Cloud In HP’s Cloud

The second part of the IaaS is the software for provisioning, maintaining and controlling the cloud resources. For that matter, HP conducted several hours of demonstrations of its CloudSystem product. Once the cloud infrastructure is deployed, the enterprise can provision the virtual resources, orchestrate them, and create a catalog of app stacks using CloudSystem. One of the main features of the platform is Cloud Maps (I really love the name), which enables the enterprise's IT to plan and create new app stacks or even import ready-made ones straight from the HP web portal. The UI/UX is very compelling, though the management capabilities are very basic. I am not sure that I saw a real cloud environment rather than an upgraded virtualization control and provisioning application. Following my debates on that, I was told that there are some implementations of an elastic environment using custom adjustments. HP also revealed that they are working on an OpenStack implementation, though I wasn't convinced that there are serious plans for this. Due to the lack of out-of-the-box features such as auto-scaling and elasticity, as well as the lack of the real cloud perception that a server is just one atomic unit, I still wonder: where is the cloud in HP's cloud?


In a “cloud security” session, I raised a basic cloud security issue: the enterprise needs to be able to maintain SSO and IAM solutions across its entire application portfolio, including the SaaS applications. I asked whether HP supports these kinds of features or plans to do so in the future. HP's response was not satisfying, and it led me to think again about the extreme separation between the infrastructure and the applications that the cloud brought. The answer I anticipated hearing was really simple: as an IaaS provider, HP focuses on internal network security and access to the on-premise physical and virtual resources, while the SaaS players have the responsibility to provide extensions that integrate with the enterprise private cloud and support issues such as SSO.

It is evident that the cloud brought the need to re-position the traditional IT vendors' offerings and make sure each relates to a specific cloud layer (IaaS, PaaS or SaaS); otherwise it is a confusing play that presents a great risk to the business's future.

> > > Conclusion

It is clear that this veteran market leader, like other IT giants, finds itself re-segmented under a new definition as an IaaS vendor. The giant struggles to get into a leadership position in this emerging market, as it is surrounded by great competition from old rivals such as IBM and Oracle. Furthermore, I think that even greater competition comes from the advanced cloud vendors such as Amazon, Rackspace, Salesforce and many others that are already taking a great market share. I find it exciting to watch the market evolve, how new business threats are born, and how the industry giants push hard to find their golden path all over again.

The Cloud Lock-In (Part 1): Public IaaS is Great!

It is always good to start with Wikipedia's definition, as it helps to initiate a structured discussion. Here is Wiki's definition of lock-in:

“In economics, vendor lock-in, also known as proprietary lock-in or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.” Read more on Wikipedia

Does the cloud present a major lock-in? Does the move create substantial switching costs?

“Yes!” is the common answer I hear to those questions. In this article I will debate it, basing my findings on real cloud adoption cases.

Generally, in terms of the cloud's lock-in, we face the same issues as in the traditional world, where a move includes re-implementation of the IT service. It involves issues such as data portability, user guidance and training, integration, etc.

“I think we’ve officially lost the war on defining the core attributes of cloud computing so that businesses and IT can make proper use of it. It’s now in the hands of marketing organizations and PR firms who, I’m sure, will take the concept on a rather wild ride over the next few years.”

The above statement comes from David Linthicum's article “It's official: ‘Cloud computing’ is now meaningless”. Since I fully agree with Linthicum on that matter, I will be precise and try to make a clear assessment of the cloud lock-in issue by relating to each of the three cloud layers (IaaS, PaaS, SaaS) separately.

In this part, I will relate to the lowest layer: the IaaS lock-in.

It is a fact that IT organizations take advantage of IaaS platforms by moving part or even all of their physical resources to the public clouds. Furthermore, ISVs move at least their test and development environments, and are making serious plans to move (or have already moved) part of their production environments to the public clouds.

Read more about shifting legacy systems to the cloud by Ben Kepes

When discussing this with public IaaS consumers, it always comes to the point where I ask, “Do you feel locked in to your cloud vendor?” Most, if not all, of the companies' leaders claim that the public cloud's values (on-demand, elasticity, agility, etc.) overcome the lock-in impact, so they are willing to compromise. As a cloud enthusiast, it is great for me to see the industry leaders' positive approach towards moving their businesses to the cloud (again, too general; each of them refers to a different layer). I do not think that the lock-in is so serious.

For some time this claim sounded pretty reasonable to me, though on second thought I find that the discussion should start from a comparison with the traditional data center's “locks”. Based on this comparison, I can already state that one of the major public cloud advantages is its weak lock-in, simply because you don't buy hardware. Furthermore, companies that still use the public cloud as a hosting extension of their internal data center don't acquire new (long-term or temporary) assets that they can't get rid of without a major loss. In regard to its lock-in, the public cloud is great!

Another important point relates specifically to the Amazon AWS products that support SaaS scalability and operations. A smart SaaS architect will plan the cloud integration layer so that the application logic and workflow are strongly tied to underlying IaaS capabilities such as on-demand auto-provisioning of resources.

Read more about the relationship between web developers and the cloud

For example, the web application can use the cloud integration layer to get on-demand EC2 resources at the specific point when a complex calculation occurs. At a superficial glance, the fact that the cloud API is used as part of the application's runtime flow holds enormous lock-in risks. I disagree, and let me explain why.
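To make this concrete, here is a minimal sketch of such a burst-out call path using boto; the AMI ID and instance type are placeholders, and the point is how thin the provisioning step is, not production-grade code:

```python
# Minimal sketch: bursting out an EC2 instance for a heavy,
# short-lived computation, then releasing it. The AMI ID is a
# placeholder; error handling is elided.
import time
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Provision capacity only for the duration of the calculation.
reservation = conn.run_instances('ami-12345678',   # placeholder AMI
                                 instance_type='c1.xlarge')
instance = reservation.instances[0]

while instance.state != 'running':
    time.sleep(5)
    instance.update()

# ... dispatch the complex calculation to the instance here ...

# Pay-per-use: release the resource as soon as the job is done.
conn.terminate_instances(instance_ids=[instance.id])
```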

As a market leader, Amazon AWS will be (and already is) followed by other IaaS vendors, which will solve the same scalability and operational issues following the same sense and logic as AWS. Basically, this means an evolution of IaaS platform standards. A smart cloud integration layer will enable one to “plug & play” a different IaaS platform or even orchestrate several in parallel. To strengthen my point, I bring as an example several cloud start-ups (solving IaaS issues such as governance, usage and security) that developed their products to solve issues for Amazon AWS consumers and seriously target support of other IaaS vendors' platforms, such as the Rackspace cloud and vCloud. In regard to lock-in, the public cloud is great!
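One concrete embodiment of such a “plug & play” integration layer already exists in the open source world: Apache Libcloud wraps multiple IaaS providers behind one compute API. A minimal sketch, with placeholder credentials:

```python
# Minimal sketch: the same inventory code targeting two IaaS vendors
# through Apache Libcloud's common driver interface. Credentials are
# placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def first_node_name(provider, *credentials):
    # The driver hides each vendor's native API behind one interface.
    driver = get_driver(provider)(*credentials)
    nodes = driver.list_nodes()
    return nodes[0].name if nodes else None

# Swapping the underlying cloud is a one-line change:
print(first_node_name(Provider.EC2, 'access-key', 'secret-key'))
print(first_node_name(Provider.RACKSPACE, 'username', 'api-key'))
```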

The IaaS vendors in the market recognize the common lock-in drawback of moving to the cloud. Vendors such as Rackspace bring OpenStack, a cloud software platform that cloud vendors can build IaaS solutions upon. Rackspace shows it off on their blog:

OpenStack™ is a massively scalable cloud operating system, powering the world’s leading clouds. Backed by more than 50 participating organizations, OpenStack is quickly becoming the industry standard for public and private clouds. Read More

It should be noted that switching applications and data between clouds is still complex and in some cases not feasible, though believing in the public cloud's future comes with understanding its weak lock-in, and will lead to visionary, long-term strategic plans.

What about the private IaaS?

Following my ongoing research on the best cloud option (public, private or hybrid), I found that outsourcing the IT environment to a private or hybrid cloud includes a major lock-in. Implementation of a private or hybrid cloud includes lots of customization, hence a lack of standards. Private and hybrid clouds have their benefits, though weak lock-in is not one of them. A contract with the vendor for at least 3 to 5 years (a data center's typical depreciation period) on a non-standard environment leads to an extreme, long-term lock-in in terms of the “on-demand world”.

In order to decrease lock-in, the IaaS consumer must prove the organization's need for a private cloud by planning strategically for the long term. Besides the ordinary due diligence to prove the vendor's strength, the contract must include termination points and creative ideas that can weaken the lock-in, for example, renewal of the initial contract subject to re-assessing the service standards, costs and terms against the cloud market, including the public one. The private cloud vendor must prove ongoing efficiency improvements and corresponding cost reductions.

In his article “Keep the ‘Cloud’ User in Charge”, Mark Bohannon, VP at Red Hat, warns:

“…by vendors to lock in their customers to particular cloud architecture and non-portable solutions, and heavy reliance on proprietary APIs. Lock-in drives costs higher and undermines the savings that can be achieved through technical efficiency. If not carefully managed, we risk taking steps backwards, even going toward replicating the 1980s, where users were heavily tied technologically and financially into one IT framework and were stuck there.”

Some of the private cloud offerings today have characteristics similar to the traditional data center; to me it seems that the former comes with stronger lock-in impacts. In case of an IT transition, companies who decide to go that way should expect considerable switching costs and a long-term recovery of their IT operations, hence of their business.

The second part will discuss the cloud lock-in characteristics of the SaaS and PaaS layers.

Developers are from Mars

The three layers of cloud computing (IaaS, PaaS and SaaS) occupy the headlines with significant capabilities undergoing continuous improvement to host services in the cloud. This growing market is slowly changing so that the offered services will become generic. The current evolving struggle is the deployment and management of SaaS applications in the cloud; Gartner calls this portion of the cloud market SEAP (Software Enabled Application Platforms). We dare to say that developers are from Mars and cloud providers are from Venus; let us explain in detail why.

A SaaS application developer builds the application architecture, including the database system, the business logic and the user interface. The software developer (or the SaaS vendor, for that matter) invests in building these three main infrastructure cornerstones in order to bring the business idea to life and launch a new online service.

Traditional software delivery puts the responsibility for deployment and maintenance in the hands of the customer. In contrast, the SaaS model includes building the infrastructure wrapper that allows meeting the requirements of delivering the software as a service. The change from the licensing model is that the SaaS vendor (the developer) is also the integrator, responsible for supporting standards by adopting technologies that make the software a service.

The most popular example is support for multi-tenancy. This feature enables the scalability needed to perform extensive SaaS sales and effective maintenance on the non-physical infrastructure. The virtual infrastructure brings a higher level of complexity, which requires additional maintenance means. This complexity intensifies as the number of customers grows, hence the demand for more cloud capabilities and resources.
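To ground the term, here is a deliberately simplified sketch of the common shared-schema flavor of multi-tenancy, where one database serves all customers and every query is scoped by a tenant identifier; the table layout and names are my own illustration:

```python
# Simplified sketch of shared-schema multi-tenancy: all tenants share
# one table, and the tenant_id column isolates their data. Schema and
# names are illustrative only.
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE invoices (tenant_id TEXT, amount REAL)')
db.executemany('INSERT INTO invoices VALUES (?, ?)',
               [('acme', 120.0), ('acme', 80.0), ('globex', 45.0)])

def tenant_invoices(tenant_id):
    # Every data-access path must filter on the tenant -- this
    # discipline is what lets one deployment serve many customers.
    return db.execute('SELECT amount FROM invoices WHERE tenant_id = ?',
                      (tenant_id,)).fetchall()

print(tenant_invoices('acme'))   # [(120.0,), (80.0,)]
```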

Developers use existing frameworks that enable short and efficient development, such as .NET provided by Microsoft or Ruby on Rails brought by the open source world. Software architects already understand that application multi-tenancy is part of the system infrastructure that enables scalability, but is that enough to make an application a service? The answer is no; there are more considerations the developer needs to bear in mind when planning the architecture of software as a service.

Why Multi-Tenancy?

In order to plan the development of robust and automatic scalability, the software architect must understand the cloud's dynamic nature, that is to say, the basic option of starting and shutting down resources automatically. The software vendor should pick the IaaS vendor as part of the initial development step, learn the IaaS platform's API capabilities, and make sure that the development roadmap also includes a tight integration with the cloud facility. The IaaS platforms offered are still young, and deployment automation is still limited due to infrastructure barriers. Most of the IaaS platforms don't provide convenient tools to deploy the application; therefore, the SaaS vendor is forced to invest in purchasing existing tools or even implementing them independently. Today we still see vendors that are not aware of these requirements, as they are not purely application but operations oriented.
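As an illustration of the kind of API integration meant here, the sketch below declares an auto-scaled group on AWS with boto; all names, AMI IDs and sizes are placeholder assumptions:

```python
# Minimal sketch: declaring automatic start/stop of resources through
# the IaaS API (AWS Auto Scaling via boto). Names and the AMI ID are
# placeholders, not a tested configuration.
import boto.ec2.autoscale
from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup

conn = boto.ec2.autoscale.connect_to_region('us-east-1')

# What to launch when the group needs another instance.
lc = LaunchConfiguration(name='web-lc',
                         image_id='ami-12345678',   # placeholder
                         instance_type='m1.small')
conn.create_launch_configuration(lc)

# The platform now starts and shuts down instances automatically
# between the declared bounds -- the "dynamic nature" discussed above.
group = AutoScalingGroup(group_name='web-asg',
                         launch_config=lc,
                         availability_zones=['us-east-1a'],
                         min_size=1, max_size=10)
conn.create_auto_scaling_group(group)
```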

Learn how to Scale IT – an article by CloudInsights.org

Check out I Am OnDemand terminology page and learn more about the four levels of Multi-Tenancy.

Another aspect of the SaaS development discussion is the option to build the system on a PaaS. There is a good number of PaaS vendors that offer products enabling development capabilities as a service, thereby solving the developer's need to maintain a scalable service as described above. We can divide this group of products into the following two categories (see the sketch after the list):

  1. Objects as a service – force.com is an example of such a vendor. The developer buys the option to use out-of-the-box software objects to implement a new application.
  2. Runtime and database as a service – here we can mention platforms such as Heroku, Google Apps and MS Azure.
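To illustrate the second category, a complete deployment unit for a Heroku-style Python runtime can be as small as the sketch below; the platform supplies the runtime, scaling and database, and details such as the PORT environment variable convention are assumptions to check against the specific vendor:

```python
# Minimal sketch of a category-2 PaaS deployment unit: the entire
# "application" is one WSGI callable; runtime, scaling and database
# are supplied by the platform. (Heroku, for example, conventionally
# wires it up with a one-line Procfile such as: web: python app.py)
import os
from wsgiref.simple_server import make_server

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from the platform\n']

if __name__ == '__main__':
    # PaaS platforms typically inject the listening port via the
    # environment rather than letting the app choose it.
    port = int(os.environ.get('PORT', 8000))
    make_server('', port, app).serve_forever()
```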

Gartner predicts growth in the number of platforms that provide the wrapper for web development of new and existing applications. These platforms have already taken a significant part in the cloud evolution. The number of PaaS providers grows while the existing vendors continue to extend their on-demand tools portfolios, enabling a wide range of services for the operation, management and distribution of SaaS applications.

Learn more about the PaaS market

Besides the actual system scalability issues presented here, there are many more “developing for the cloud” considerations, such as integration, developing for optimal resource utilization, and SaaS development on the fast-changing cloud platforms. Check out Cloud development: 9 gotchas to know before you jump in, an article brought to you by InfoWorld.

The relationship between actual application development and the operational side of the application becomes stronger. While the SaaS vendor's board should think about all the strategic aspects of cloud adoption, the vendor's software architect, as well as the product manager, should think “out of the application box” to be able to deliver their product as a service.


Special thanks to Amit Cohen, who raised this discussion and took part in composing this article. Cohen is an experienced SaaS and cloud computing consultant for the enterprise who has held executive positions at several international software vendors over the last 10 years.


Hybrid Cloudonomics – Part 2

The first part of Weinman's lecture discussed the basic “go to the cloud” question and demonstrated the cloud environment loads of different corporations' web applications. In this part we bring six scenarios presented by Weinman, each including a brief analysis and proof of its costs and benefits.

First, let's start with several assumptions and definitions:

> > > 5 basic assumptions of the pay-per-use capacity model:

  1. Paid on use – capacity is paid for when used and not paid for when not used.
  2. No time dependence – the cost for such capacity is fixed; it does not depend on the time of the request.
  3. Fixed unit cost – the unit cost for on-demand or dedicated capacity does not depend on the quantity of resources requested (you don't get a discount for renting 100 rooms at the same time).
  4. No other costs – there are no additional costs relevant to the analysis.
  5. No delay – all demand is served without any delay.

> > > Definitions:

D (demand): the resource demand over a time interval {D(t), 0 < t < T}, where T (time) is the duration in which the demand exists. D is characterized by a mean (average) A and a maximum P (peak). For example, the average demand A can be 5 CPU cores with a peak demand P of 20 CPU cores.

Define c (cost) to be the unit cost per unit time of fixed capacity.

Define U (the utility premium) to be the ratio between the unit cost of resources in the cloud (pay-per-use) and that of a pure dedicated IT solution.
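Before the cases, a tiny numeric illustration of these definitions; the demand trace and prices below are invented:

```python
# Worked illustration of the definitions above with an invented
# hourly demand trace (units: CPU cores).
demand = [2, 3, 5, 20, 7, 4, 2, 1]   # D(t) over T = 8 hours

T = len(demand)
A = sum(demand) / float(T)   # average demand A
P = max(demand)              # peak demand P

c = 1.0   # unit cost per core-hour of dedicated capacity
U = 1.5   # utility premium: cloud costs 1.5x per core-hour

dedicated = P * c * T        # build to peak, pay all the time
cloud     = A * U * c * T    # pay-per-use at a premium

print(A, P, dedicated, cloud)   # A=5.5, P=20 -> 160.0 vs 66.0
```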

The following six cases presented by Weinman are part of the eight cases presented in his article “Mathematical Proof of the Inevitability of Cloud Computing”:


Case 1: U < 1

The simplest case, where the utility costs less than dedicated capacity ==> a pure pay-per-use solution costs less than a pure dedicated solution.

Proof: The cost of the pay-per-use solution is A (average) * U (premium) * c (unit cost per time) * T (time of use), i.e., A*U*c*T. The cost of a dedicated solution built to peak is P (peak of D) * c * T. Since A <= P and U < 1 ==> A*U*c*T < P*c*T.

Explanation: It is intuitively understood that if the cloud is less expensive per unit per time period, then the total solution based on paying only for the demand is a less expensive one.


Case 2: U = 1 and A = P

The utility premium is the same as the dedicated unit cost, and demand is flat (no peak) ==> a pay-per-use solution costs the same as a dedicated solution built to peak.

Proof: The cost of the pay-per-use solution is A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since U=1 and A=P, ==> A*U*c*T = P*c*T

Explanation: If there is no variability in the demand and the cost is the same, both alternatives have the same cost. That being said, we should remember the assumptions we are under, the very narrow scenario and the fact that we are not considering financial risks.


Case 3: U = 1 and A < P

A pure pay-per-use solution costs less than a pure dedicated solution.

Proof: The cost of the pay-per-use solution is A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since U = 1 and A < P ==> A*U*c*T < P*c*T.

Explanation: This is very important for understanding the benefits of pay-per-use: if there is an element of variability, there is a major benefit to choosing this approach. Now let's find out what happens when the utility cost is greater than the fixed unit cost.


Case 4: 1 < U < P/A

If the utility premium is greater than 1 but less than the peak-to-average ratio P/A, that is, 1 < U < P/A, then a pure pay-per-use solution costs less than a pure dedicated solution.

Proof: The cost of the pay-per-use solution is A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since U < P/A, then: A*U*c*T < A*(P/A)*c*T = P*c*T.

Explanation: This means that the utility unit cost can be higher than that of a fixed solution up to a certain point and still be the right economic choice. That point is a function of the variation in demand: for example, if demand peaks at four times its average (P/A = 4), the cloud remains cheaper even at a threefold unit-price premium. In simple terms, we save money by not possessing unused resources when the demand is low.


Case 5: U > 1 and Tp/T < 1/U

Let's add some definitions to the ones above:

  • Tp (peak duration) – the total duration during which the demand is at peak.
  • ε – the gap between the actual peak and the pre-defined peak (that is, if the resource demand exceeds (P - ε), we'll use the cloud for our resources).

U stands for how much more expensive the cloud is versus a fixed solution; in this case it is easier to look at the inverse of U (how much more expensive the fixed solution is than the cloud). This case means that if the percentage duration of the peak is less than the inverse of the utility premium, then a hybrid solution costs less than a pure dedicated solution.

Proof: The hybrid solution consists of (P - ε) internal resources, while the rest, ε, is handled on-demand by pay-per-use for Tp of the time. The total cost is: [(P - ε) * T * c] + [ε * Tp * c * U]. Our assumption was Tp/T < 1/U ==> Tp * U < T ==> [ε * Tp * c * U] < [ε * T * c] ==> [(P - ε) * T * c] + [ε * Tp * c * U] < [(P - ε) * T * c] + [ε * T * c] = P * T * c, which is the cost of a dedicated solution built to peak.

Explanation: This means that there might be a less expensive way than an internal fixed solution if there is some variation in demand. Obviously, an optimal solution should be built according to your own demand characteristics.


Case 6: “Long Enough” Non-Zero Demand

Let's define:

  • TNZ – the total duration of non-zero demand; TNZ is the sum of all periods during which the demand is above zero.
  • ε – the dedicated resources.

If the utility premium is greater than 1 and the percentage duration of non-zero demand is greater than the inverse of the utility premium, i.e., U > 1 and TNZ/T > 1/U, then a hybrid solution costs less than a pure pay-per-use solution.

Proof (this proof is the mirror image of the prior one): The cost of serving this demand with utility resources is ε * TNZ * U * c. The cost of serving it with dedicated resources is ε * T * c. Since TNZ/T > 1/U, then T < TNZ * U, and therefore ε * T * c < ε * TNZ * U * c.

Explanation: This means that you should consider using the cloud to satisfy the variable portion of the demand even if it is more expensive per unit, while serving the baseline of your demand with dedicated resources.
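To tie cases 5 and 6 together numerically, here is a small sketch with invented numbers, serving the baseline from dedicated capacity and only the peak slice from the utility:

```python
# Invented numbers illustrating the hybrid rule of cases 5 and 6:
# serve the baseline with dedicated capacity and only the peak slice
# (epsilon) with the more expensive utility.
T, Tp = 720.0, 36.0      # hours in the month / hours at peak
P, eps = 100.0, 30.0     # peak cores / slice served from the cloud
c, U = 0.10, 3.0         # dedicated $/core-hour / utility premium

dedicated_only = P * T * c
hybrid = (P - eps) * T * c + eps * Tp * c * U

# Case 5 condition: Tp/T < 1/U  ->  the hybrid beats pure dedicated.
print(Tp / T < 1.0 / U)         # 0.05 < 0.333... -> True
print(dedicated_only, hybrid)   # 7200.0 vs 5364.0
```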


Let’s Summarize – 

The analysis Weinman does is basic and includes very strict assumptions. It ignores enhanced cloud pricing options (such as AWS spot and reserved instances). It is important to add that those options are still not provided by most IaaS vendors, hence this should be kept in mind when selecting an IaaS vendor. Nevertheless, this important research gives us an excellent opportunity to understand the overall approach and the mechanisms that affect our cloud architecture decisions.

It is the enterprise leaders' responsibility to treat their cloud establishment as part of the organization's strategy, including its architecture decisions. From our experience and study of this evolving trend, we found that sometimes the cloud decision is taken by the operational leader (i.e., the IT manager) without any involvement of the enterprise's higher management. This is totally wrong; going forward, the company will find itself suffering from huge cloud expenses and issues (such as security and availability) and will need to reorganize, hence reinvest, and hope it is not too late. In this post we presented another option for cloud deployment, where a mixture of resource allocation from within the enterprise and from the cloud might be the best economic solution. We also saw that it depends on several factors, like the variation of the demand and its predictability.


This is only a sneak peek into Weinman's complete article “Mathematical Proof of the Inevitability of Cloud Computing”. To learn more about the above scenarios and more, we strongly suggest reading it.