5 Key Essentials of Cloud Workloads Migration

The benefits of migrating workloads between different cloud providers, or between private and public clouds, can only truly be redeemed with an understanding of the cloud business model and cloud workload management. Cloud adoption seems to have reached the phase where advanced cloud users are creating their own hybrid solutions or migrating between clouds while striving to achieve interoperability within their systems. This article aims to answer some of the questions that arise when managing cloud workloads.

Continue reading

ClickSoftware – Great Case of an AWS Cloud Adoption: Part 1, Operations

Over the last year I have had endless conversations with companies that strive to adopt the cloud – specifically the Amazon cloud. Of those I met, I can say that ClickSoftware is one of the leading traditional ISVs that has managed to adopt the cloud. The Amazon cloud is without a doubt the most advanced cloud computing facility, leading the market. In my previous job I was involved in the ClickSoftware cloud initiative, from the decision making around the Amazon cloud all the way to taking the initial steps to educate and support the company’s different parties in providing an On-Demand SaaS offering.

ClickSoftware provides a comprehensive range of workforce management software solutions designed to help service organizations face head-on the challenges of inefficiency. Maximizing the utilization of your resources is the lifeblood of a service organization, and ClickSoftware has developed a suite of solutions and services that reach the heart of the problem.

Continue reading

Amazon Cloud and the Enterprise – Is it a love story? (Free Infographic Included)

As befitting any great online vendor, Amazon cloud product guys listen carefully to their target markets and ensure fast implementation and delivery to satisfy their needs. It is clear that Amazon cloud is eager to conquer the enterprise market, as I already mentioned in my past post, “Amazon AWS is the Cloud (for now anyway)”.

Cloud Reserved Capacity Card

Key buzzwords that I expect are being used in Amazon HQ halls are “adoption” and “migration”. In order for the AWS cloud to reel in the big enterprise fishes, the cloud giant must go with the flow. This week Amazon cloud announced “AWS Cost Allocation For Customer Bills” – as a matter of fact, Amazon announced that it believes in instance tagging. Why, in the cloud, where a single instance doesn’t count, do you need a tag? The answer is simple – enterprise customers requested it.

Adoption, TCO and ROI

In the past I had an interesting discussion with the cloud operations VP of a well-known traditional ISV (independent software vendor) about how, after their POC on AWS, they found that the costs were not feasible, and they wanted to go back on-premises. The winds of rejection, such as “our servers are better” and “why pay so much when I already could buy these” (someone once called these IT guys the “server huggers”), are still there. Amazon understands that and strives to fill the gap between its advanced “cloud understanding” and the traditional perception of the enterprise.

This week Amazon published an important white paper – The Total Cost of (Non) Ownership of Web Applications in the Cloud. Finding it important, AWS marketing promoted it everywhere, from the blog of Werner Vogels (AWS’s famous CTO) all the way to TechCrunch. The write-up was done by Jinesh Varia, one of the most respected Technology Evangelists at Amazon. The article presented three cases of online site utilization, starting from a “Steady State Website” to “Spiky but Predictable”, all the way to “Uncertain and Unpredictable”. The article discusses the cost differences between running on-premises and on AWS. Without a doubt, AWS is much better, if only because of its on-demand elastic capacity. Besides being a great informative educational piece, the article serves as an important support guide for enterprise CIOs who wish to prove that AWS is worth the investment and that ROI exists.

Reserved is the new Dedicated

Yesterday, Newvem cloud usage analytics published a cool infographic that reveals details behind AWS, including the types of customers and their cost improvement opportunities. Check it out below (disclosure: I am the company’s cloud evangelist and community chief). It is not a surprise that enterprise customers start small with AWS on-demand instances, while suffering from major costs. Many enterprise CIOs and DevOps teams that use AWS are confronted with the dilemma of whether or not to move their cloud off AWS to a private cloud, usually when their footprint has scaled to a high level and opportunities for cost savings from alternatives become more attractive. The only way to understand the exact balance point between on-demand and reserved capacity is by analyzing your past patterns – Newvem does exactly that and more.

It is all about your usage. For example, in order for a Costco membership purchase to make sense, you have to know how much you and your family will use it over the year (for example, how much cereal your children will eat). The same principle applies here with Reserved Instances (at least for the light and medium utilization plans). AWS cloud customers are not buying the actual instance as a dedicated server; they pay upfront to get an ongoing discounted rate. In order for Reserved Instances to make sense, a consistent amount of usage over a 1-3 year period must be identified. Though it is not dedicated hardware, the Reserved Instance feature can help the AWS sales guy offer a dedicated capacity to the potential enterprise CIO.
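As a back-of-the-envelope sketch of that break-even logic (the prices here are made-up placeholders, not actual AWS rates):

```python
def reserved_breakeven_hours(upfront, reserved_hourly, ondemand_hourly):
    """Usage hours per term at which a reserved instance becomes
    cheaper than paying the on-demand rate for the same hours."""
    hourly_saving = ondemand_hourly - reserved_hourly
    return upfront / hourly_saving

# Hypothetical light-utilization plan: $200 upfront buys a $0.05/hr
# rate instead of $0.10/hr on-demand.
print(reserved_breakeven_hours(200.0, 0.05, 0.10))  # about 4000 hours
```

If your past usage patterns show you will run the instance well beyond that break-even point, the upfront payment pays for itself; below it, on-demand remains cheaper.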

Last Words

I believe that Amazon already has a significant toehold inside the enterprise. The AWS cloud enables innovation and makes a great difference in how IT is consumed. Enterprise changes in perception take time and AWS understands that. The cloud hype is everywhere, but at the end of the road the cloud elasticity just makes sense – not only for the small niche SaaS vendor but also and maybe even more so for the traditional enterprise. Indeed a love story!


Demystifying Amazon Web Services
by Newvem. Check out our data visualization blog.
(Cross-posted on CloudAve)

My View on CloudConnect 2012

Last week I attended one of the most popular cloud technology conferences in the world – CloudConnect. The CloudConnect conference started about four years ago. Attending the event gave me a clear understanding of the market maturity and evolution rhythm. Check out the following sections for the main points on what I heard and learned:

>  >  >  >  >  Cloud Performance 

The underlying infrastructure performance, round trip time, bandwidth, caching and rendering are to be counted as the major factors of an online service’s performance. In an interesting presentation by @joeweinman (known for his famous “Cloudonomics” theory), it was claimed that latency holds the greatest weight among these factors. I encourage you to check out his new research – As Time Goes By: The Law of Cloud Response Time – which presents some good formulas, methods and considerations with regards to online services’ performance and latency (including simple facts, for example, that people tend to prefer selecting from fewer options on an online page – so you can have less content on a page and achieve a better browsing performance).

“Multi-tenancy leads to noisy neighbor syndrome” noted @jungledave, Founder and CEO at SolidFire. It is known that the lack of SSD storage components in cloud offerings (mostly due to their high cost) results in uncertainty in cloud storage performance expectations. I invite you to listen to @neovise’s recent podcast with Dave, which discusses solid state disks (SSD) and cloud computing. FYI, Amazon AWS already caught on to the need for fast and robust storage capabilities and deployed DynamoDB on SSDs, which have the benefit of offering predictable performance and greatly reducing latency across the board.

The best presentations are like movies; they should be based on real cases (keep that message in mind, I talk about it more later). One such case is Netflix. Netflix cloud architect @adrianco presented methods and principles of scaling data in the cloud, including Big Data management, availability, performance, security, and more. I suggest checking out his presentation (a cool Prezi one) to get the list of vendors and AWS components Netflix uses to optimize its data delivery over the cloud.

It was funny that only the last session’s presentation, made by @lmacvittie, pointed out the “obvious” first – start by understanding what causes the performance issues and only then try to solve them. I say “obvious” because it is a fact that the appealing ease of provisioning cloud apps and resources leads to the “unknown cloud” symptom (due to uncontrolled sprawl) that contributes to the performance uncertainty. The “unknown cloud” as an issue found great support in the next day’s morning keynote presentation by @gevaperry, who noted that “a lot has already been said about CIOs who don’t know about their own cloud use”. Geva presented a survey that clearly shows that the cloud computing adoption decision in an enterprise is made by the development or business units and not by the IT team – are you surprised? Read more.

From my deep familiarity with the market, I can confidently add that despite cloud consumers’ recognition of the need to “cut through the fog” of the cloud, proven ways to actually do so are not really available in today’s young market. 

>  >  >  >  >  DevOps doesn’t exist 

I attended the panel “In Search of Mad Cloud Skills” led by the cloud-famous @DavidLinthicum and composed of four IT leaders. David presented some great but simple questions that the participants seemed to struggle to answer. One trivial question – “What do you need to find in the candidate for a DevOp?”  brought discussion around to the obvious need to have someone with development skills who also understands the business needs. The title of the session was aligned with the actual comments of the panel members, saying it is difficult (“Mad”) to find the right skills for their DevOp team.

For me, this session brought an end to the debate of NoOps vs. DevOps. The “DevOps team” is in fact a development team that plays with virtual blocks in the cloud kindergarten. Integrating the product with the cloud is actually a task for R&D under the auspices of the CTO. That leads to the understanding that the enterprise CIO is actually the new enterprise CTO; if we talk about an ISV, then the CIO holds another position as a senior R&D team leader. NoOps rules, and the CIO should look for architects and developers. Learning the building blocks of the cloud and its APIs is a task for the R&D (I remind you: “Research and Development”) team, the same as learning the overall software offering and the supported business workflows.

>  >  >  >  >  The Openness of Cloud 

Wednesday’s keynote included a panel with Redhat, Citrix and Rackspace, which was moderated by @acroll (a great moderator and presenter) discussing the “Open” perception in the cloud.

The great discussion about the openness of the cloud actually led to some online #ccevent tweets including the phrase “open washing”, strengthening the fact that some of the traditional mega vendors are actually “cloud washers” that present an “enterprise cloud” which is in fact a hosted environment supported by a traditional professional service. (You can check out my opinion of HP cloud offerings in a past post.)

“An enlightening panel at #ccevent was the “open cloud” conversation but not for the right reasons. ‘Open washing’ season has started.” tweeted @swardley 

These vendors not only struggle with the fact that Amazon is taking big chunks of their main market but also with the fact that it is hard for them to prove the profitability of real cloud delivery offering based on a real pay-per-use model.

“Citrix: we hate VMware. Red Hat: we hate Microsoft. Rackspace: we hate Amazon”, tweeted @acroll once he got off the stage.

Cloud put the need for “open” on the table. It makes IT consumers (including traditional enterprise ones) look for open systems, including open source ones. The cloud forces IT vendors to prove a low level of lock-in and a robust API that enables their customers to update and customize the application at a low cost with no touch – check MS Azure’s marketing messages with regards to their efforts to support open source frameworks (though I am not sure they are really “open”).

“Open” is definitely one of the important criteria when deciding to go with a solution vendor. The “open” cloud vendor shares its code with the community in order to help others, including its own customers, come up with better solutions. The “open ISV” isn’t afraid to “lose” its proprietary code to competitors, and finds that being “open” actually increases awareness and positive views of its brand, as well as the maturity of its offering.

>  >  >  >  >  “Amazon is Snow White” said @adrianco 

At first I was not sure why Amazon didn’t exhibit at the famous CloudConnect conference, but after asking several important people this question, the simple conclusion is that as the strongest market leader, Amazon can afford to leave the marketing efforts to the crowd. As the beautiful princess in town, you attend only your own parties and you definitely don’t want to position yourself among the dwarfs.

CloudConnect was really about the major IT market disruption Amazon has been leading for the past few years. In almost every session, the discussion about cloud was actually a discussion about the Amazon AWS offering and its design partner – Netflix. Every other offering, such as OpenStack, the Rackspace cloud and the IBM cloud, is always being compared with the AWS cloud. The thought of suggesting they change the name CloudConnect to AWSConnect never entirely left my head (although this might make some of the @Clouderati guys really uncomfortable).

Q: What did the CloudConnect miss?  A: Real Case Studies 

I noted above that great movies are based on real stories; the same is true here. I wasn’t in all the sessions, but being a dedicated follower of #ccevent and listening carefully to some of the leading thinkers in the industry, I think that most of the sessions were still at a theoretical level rather than a practical one. You are welcome to check these conference presentations.

It is not surprising that the best sessions were those presented by organizations that have already found their way to the cloud, whether fully public (Netflix) or mostly private (Zynga zCloud). I suggest you find the lecture by Zynga’s CTO of Infrastructure in the conference’s recorded videos list.

Personally, I think it would have been great if they had a greater number of sessions and stories based on actual cloud architectures, shifting legacy applications to the cloud, and actual stories of ROI optimization. The market is still totally immature and on shaky ground. Vendors don’t really know how to present their offerings, and even the simple phrase “cloud cost” has several interpretations. ISVs and enterprises are misled by the mega vendors – this is one of the major factors slowing the pace of cloud adoption. If six months ago I would have said 2-3 years to reach market saturation, CloudConnect made me more realistic; I now think more like 3-5 years.

CloudConnect was a great opportunity for me to meet all the cloud rockstars I had been tweeting with over the last year – great cloud evangelists. Someone said that he felt like he was walking through his Twitter home feed. I found the cloud in Twitter – great performance, mobile, open and available. It proves the cloud serves my actual needs for networking, communications and knowledge.



Hybrid Cloudonomics – Part 2

The first part of Weinman’s lecture discussed the basic “go to the cloud” case and demonstrated the cloud environment loads of different corporations’ web applications. In this part we bring six scenarios presented by Weinman, each including a brief analysis and a proof of its costs and benefits.

First, let’s start with several assumptions and definitions:

> > > Five basic assumptions of the pay-per-use capacity model:

  1. Paid on use – capacity is paid for when used and not paid for when not used.
  2. No time dependence – the cost of such capacity is fixed; it does not depend on the time of the request.
  3. Fixed unit cost – the unit cost for on-demand or dedicated capacity does not depend on the quantity of resources requested (you don’t get a discount for renting 100 rooms at the same time).
  4. No other costs – there are no additional costs relevant to the analysis.
  5. No delay – all demand is served without any delay.

> > > Definitions:

D (demand): the resource demand over a time interval {D(t), 0 < t < T}, where T (time) is the duration in which the demand exists. D is characterized by a mean A (average) and a maximum P (peak). For example, the average demand A can be 5 CPU cores with a peak demand P of 20 CPU cores.

Define C (cost) to be the unit cost per unit time of fixed capacity.

Define U (the utility premium) to be the ratio between the unit cost of resources in the cloud (pay-per-use) and that of a pure dedicated IT solution.

The following six cases presented by Weinman are part of the total eight cases presented in his article “Mathematical Proof of the Inevitability of Cloud Computing”:

Case 1:   U < 1

The simplest case, where utility costs less than dedicated ==> a pure pay-per-use solution costs less than a pure dedicated solution.

Proof: The cost of the pay-per-use solution is A (average) * U (premium) * c (unit cost per time) * T (time of use), i.e. A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since A<=P and U<1 ==> A*U*c*T < P*c*T

Explanation: It is intuitively understood that if the cloud is less expensive per unit per time period, then the total solution based on paying only for the demand is a less expensive one.

Case 2 :   U = 1 and A = P

The utility premium is the same as the dedicated cost, and demand is flat (no peak) ==> a pay-per-use solution costs the same as a dedicated solution built to peak.

Proof: The cost of the pay-per-use solution is A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since U=1 and A=P, ==> A*U*c*T = P*c*T

Explanation: If there is no variability in the demand and the cost is the same, both alternatives have the same cost. That being said, we should remember the assumptions we are under, the very narrow scenario and the fact that we are not considering financial risks.

Case 3 : U = 1 and A < P

The utility premium equals 1 and demand varies ==> a pure pay-per-use solution costs less than a pure dedicated solution.

Proof: The cost of the pay-per-use solution is A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since U=1 and A<P ==> A*U*c*T = A*c*T < P*c*T

Explanation: This is very important for understanding the benefits of pay-per-use: if there is an element of variability, there is a major benefit to choosing this approach. Now let’s find out what happens in the case that the utility unit cost is greater than the dedicated unit cost.

Case 4 : 1 < U < P/A

If the utility premium is greater than 1 and it is less than the peak-to-average ratio P/A, that is, 1<U<P/A then a pure pay-per-use solution costs less than a pure dedicated solution.

Proof: The cost of the pay-per-use solution is A*U*c*T. The cost of a dedicated solution built to peak is P*c*T. Since U<P/A, then: A*U*c*T < A*(P/A)*c*T = P*c*T

Explanation: What this means is that the utility unit cost can be higher than a fixed solution up to a certain point and still be the right economical choice. That point is a variable of the variation of the demand. In simple terms, we save money by not possessing unused resources when the demand is low.
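Cases 1 through 4 all compare the same two cost expressions, A*U*c*T and P*c*T. A minimal Python sketch of the comparison, using the definitions above (the demand figures are illustrative, not taken from Weinman’s article):

```python
def pay_per_use_cost(A, U, c, T):
    """Pay-per-use cost: mean demand A, utility premium U,
    unit cost per unit time c, duration T."""
    return A * U * c * T

def dedicated_cost(P, c, T):
    """Dedicated cost: capacity provisioned for peak demand P over T."""
    return P * c * T

# Case 4 setup: peak-to-average ratio P/A = 4 and utility premium U = 2,
# so 1 < U < P/A and pay-per-use should still cost less.
A, P, c, T = 5, 20, 1.0, 720  # cores, cores, $/core-hour, hours

print(pay_per_use_cost(A, U=2, c=c, T=T))  # 7200.0
print(dedicated_cost(P, c, T))             # 14400.0
```

Plugging in your own A, P and U shows exactly where the break-even sits: pay-per-use wins whenever U < P/A.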

Case 5 : U > 1 and Tp/T < 1/U

Let’s add some definitions to the ones above:

  • Tp (peak duration) to be the duration where the demand was at peak
  • ε to be the gap between the actual peak and the pre-defined peak (that is, if the resource demand exceeds (P - ε), we’ll use the cloud for the excess).

If U stands for how much more expensive the cloud is versus a fixed solution, in this case it is easier to look at the inverse of U (how much more expensive the fixed solution is than the cloud). This case means that if the fractional duration of the peak is less than the inverse of the utility premium, then a hybrid solution costs less than a dedicated solution.

Proof: The hybrid solution consists of (P - ε) internal resources, and the rest, ε, is handled on-demand by pay-per-use for Tp of the time. The total hybrid cost is: [(P - ε) * T * c] + [ε * Tp * c * U]. Our assumption was Tp/T < 1/U ==> Tp * U < T ==> [ε * Tp * c * U] < [ε * T * c]. Combining these: [(P - ε) * T * c] + [ε * Tp * c * U] < [(P - ε) * T * c] + [ε * T * c] = P * T * c, which is the cost of a dedicated solution.

Explanation: What that means is that there might be a less expensive way than an internal fixed solution if there is some variation in demand. Obviously, an optimal solution should be chosen according to your own demand characteristics.

Case 6 : “Long Enough” Non-Zero Demand

Let’s define:

  • The total duration of non-zero demand to be TNZ. TNZ is the sum of all periods where the demand was above zero. 
  • Define ε to be the baseline capacity served by dedicated resources.

If the utility premium is greater than 1 and the fractional duration of non-zero demand is greater than the inverse of the utility premium, i.e., U > 1 and TNZ/T > 1/U, then a hybrid solution costs less than a pure pay-per-use solution.

Proof (this proof is the mirror image of the prior one): The cost of serving this baseline with utility resources is ε * TNZ * U * c. The cost of serving it with dedicated resources is ε * T * c. Since TNZ/T > 1/U, then T < TNZ * U, and therefore ε * T * c < ε * TNZ * U * c.

Explanation: This means you should consider using the cloud, even if it is more expensive, to satisfy the variable portion of the demand, while serving the baseline of your demand with dedicated resources.
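The two hybrid cases can be checked numerically in the same spirit. A small sketch under the article’s assumptions (the demand figures are made up for illustration):

```python
def hybrid_vs_dedicated(P, eps, T, Tp, U, c):
    """Case 5: own (P - eps) capacity and burst the top eps to the
    cloud for Tp of the total time T, versus owning the full peak P."""
    hybrid = (P - eps) * T * c + eps * Tp * c * U
    dedicated = P * T * c
    return hybrid, dedicated

def baseline_dedicated_vs_cloud(eps, T, TNZ, U, c):
    """Case 6: serve a baseline eps with dedicated resources for the
    whole duration T, versus renting it for the non-zero time TNZ."""
    dedicated = eps * T * c
    cloud = eps * TNZ * U * c
    return dedicated, cloud

# Case 5: the peak lasts 72 of 720 hours and U = 3, so Tp/T = 0.1 < 1/U.
print(hybrid_vs_dedicated(P=20, eps=10, T=720, Tp=72, U=3, c=1.0))
# -> (9360.0, 14400.0): the hybrid is cheaper than pure dedicated

# Case 6: demand is non-zero for 600 of 720 hours, so TNZ/T > 1/U.
print(baseline_dedicated_vs_cloud(eps=5, T=720, TNZ=600, U=3, c=1.0))
# -> (3600.0, 9000.0): owning the baseline beats renting it
```

Varying Tp, TNZ and U in such a sketch makes the two threshold conditions (Tp/T versus 1/U, and TNZ/T versus 1/U) easy to see in practice.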

Let’s Summarize – 

The analysis Weinman does is basic and rests on very strict assumptions. It ignores enhanced cloud pricing options (such as AWS spot and reserved instances). It is important to add that these options are still not provided by most IaaS vendors, hence this should be kept in mind when selecting an IaaS vendor. Nevertheless, this important research gives us an excellent opportunity to understand the overall approach and the mechanisms which affect our cloud architecture decisions.

It is the enterprise leader’s responsibility to treat the cloud establishment as part of the organization’s strategy, including its architecture decisions. From our experience and study of this evolving trend, we found that sometimes the cloud decision is taken by an operational leader (i.e. the IT manager) without any involvement of the enterprise’s higher management. This is totally wrong; going forward, the company will find itself suffering from huge cloud expenses and issues (such as security and availability) and will need to reorganize, hence reinvest, and hope it is not too late. In this post we presented another option for cloud deployment, where a mixture of resource allocation from within the enterprise and from the cloud might be the best economic solution. We also saw that it depends on several factors, like the variation of the demand and its predictability.

This is only a sneak peek at Weinman’s complete article, “Mathematical Proof of the Inevitability of Cloud Computing”. To learn more about the above scenarios and more, we strongly suggest reading it.

Hybrid Cloudonomics: a Lecture by Joe Weinman – Part 1

Posted by Nir Peled

Joe Weinman is well known in the cloud computing community as the founder of Cloudonomics. Presenting complex simulation tools, Weinman characterizes the sometimes counterintuitive business, financial, and user experience benefits of cloud computing, including its on-demand, pay-per-use and other business aspects. Last month I had the pleasure of participating in Weinman’s webinar. Weinman discussed several interesting points which I would like to share with you.

Weinman started by contradicting what seem to be the fundamental assumptions regarding the Cloud and its benefits. There was nothing radical about what I heard but it made me think and challenge all the things I took for granted –

1 – Cloud is a brand new technology and business model  > > >  The same business model and attributes are already being applied in hotels, rental car services, etc.

2 – Cloud encompasses services accessed over the web via browser  > > >  The cloud is a general architecture model, and the Web/IP/Browser (as important as they may be) are far from telling the whole story. There are other types of networking technologies, such as Optical Transport, MPLS and VPLS, that need to be leveraged to unlock the value of the cloud. You don’t necessarily need to use a browser to get services in the cloud (examples include audio conferences, webinars, M2M etc.)

3 – Large clouds have great economies of scale > > > Not completely true, because today the large cloud providers are using the same architecture that is available to any enterprise, therefore there is no major benefit from their scale in terms of economy. However, they do benefit from other characteristics like scalability, geographic dispersion and statistics of scale.

4 – IT is like electricity, so all IT will move into the cloud  > > > IT is not like electricity; from the economic perspective, electricity has the benefit of economies of scale. IT decisions, however, are complex, and the economic decision on how much of your IT to keep in the enterprise and how much to put in the cloud is based on numerous factors such as flexibility, cost, the nature of the application etc.

5 – It’s important to replace CAPEX with OPEX > > > It is not always important to do so and it very much depends on financial decisions that the company makes regarding its financial and funding activities.

6 – Cloud cost reduction will drive lower IT spending > > > Weinman mentioned the Jevons paradox: the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource.

Joe refers to the argument (from his work) that “The mathematical proof of the inevitability of cloud computing” is the economic rationale for hybrid. He demonstrated demand variability of several corporations as you can see below:

Example 1: HP.com – There is not so much variability

Example 2: Large search provider – Weekends Vs. Monday to Friday

Example 3: Tax preparation firm– Growth of the early filers and then April 15th (tax day in the USA) drop to 0 on the 16th of April.


In the second part of his lecture, Weinman demonstrated six optional cases for cloud deployment, including detailed calculations of the IT environment costs. He defines a variable U, which is the ratio between the unit cost of resources in the cloud (pay-per-use) and that of a pure dedicated IT solution. For example, if U<1, that is, the utility premium is less than unity, a pure pay-per-use solution costs less than a pure dedicated solution. In Part 2 we will present those six options in detail, which should give you great insight into your hybrid cloud plans.

Stay Tuned !


The author of this article is Nir Peled, a reporter and a contributor to `I Am OnDemand`.

Nir Peled