The Cloud in HP’s Cloud (Part 2): HP Discover, the Enterprise and AWS Cloud

Last month I attended HP Discover (disclosure: my participation was funded by Ivy World). The IT war has already started, yet HP stands still, not taking the initiatives and real risks that true leaders should take. At the three-day conference I learned why some companies don't last and why this IT giant is at great risk of losing the battle of this new IT era. This is the story of a lasting company that might have already lost.

> > > HP Washes the Cloud


Cloud Security Management – Overview and Challenges

What's your first-priority cloud security concern?

From an attacker's perspective, cloud providers aggregate access to many victims' data into a single point of entry. As cloud environments become more and more popular, they will increasingly become the focus of attacks. Some organizations think that liability can be outsourced, but no, it cannot! This presentation answers questions such as: what are the key security challenges for cloud newcomers? What are the options, and how can you start with a safe cloud deployment?

My presentation includes the following and more:

  • The different Cloud security aspects
  • The cloud vendor versus the cloud customer – the responsibility perception
  • How Newvem helps its customers avoid AWS cloud security vulnerabilities by leveraging an ecosystem of cloud vendors.

Newvem partnered with IGT Cloud on its meetups and opened a series of cloud management forum conferences. These conferences focus on the key aspects of cloud management, such as cost, security and compliance. Each meetup includes different lectures and real case studies. All the sessions are recorded and published on a shared video channel.

Amazon Outage: Is it a Story of a Conspiracy? – Chapter 2

In April 2011, when Amazon's US East region failed, I posted the first chapter of the Amazon Cloud Outage Conspiracy. It was already very clear then that the cloud would fail again, and here it is… Chapter 2.

Let’s first try to understand Amazon’s explanation for this outage.

At approximately 8:44PM PDT, there was a cable fault in the high voltage Utility power distribution system. Two Utility substations that feed the impacted Availability Zone went offline, causing the entire Availability Zone to fail over to generator power. All EC2 instances and EBS volumes successfully transferred to back-up generator power.

Ok. So the AZ power failed over to generator power.

At 8:53PM PDT, one of the generators overheated and powered off because of a defective cooling fan. At this point, the EC2 instances and EBS volumes supported by this generator failed over to their secondary back-up power (which is provided by a completely separate power distribution circuit complete with additional generator capacity).

Ok. So the generator failed over to a separate power circuit.

Unfortunately, one of the breakers on this particular back-up power distribution circuit was incorrectly configured to open at too low a power threshold and opened when the load transferred to this circuit. After this circuit breaker opened at 8:57PM PDT, the affected instances and volumes were left without primary, back-up, or secondary back-up power.

Ok. So the power circuit was not configured right and the computing resources didn’t get enough power (or something like that).

> > > Did you get that?

Sounds like it might be something as simple as someone tripping over a wire that led to all of that. Anyway, Quora, Heroku, Dropbox and other sites failed again due to the cloud outage and were down for hours. The power outage resulted in downtime and inconsistent behavior of EC2 services, including instances, EBS volumes, RDS and an unresponsive API.

After about 5 hours, Amazon announced that they had managed to recover most of the affected EBS (Elastic Block Store) volumes:

“Almost all affected EBS volumes have been brought back online. Customers should check the status of their volumes in the console. We are still seeing increased latencies and errors in registering instances with ELBs.”

Once Quora was back online, I opened a thread – What are the lessons learned from Amazon's June 2012 us-east-1 outage? Among the great answers submitted, I want to point to one particularly interesting piece of feedback regarding the fragility of EBS volumes, suggesting working with instance store rather than EBS-backed instances. The differences between these two include cost, availability and performance considerations. It is important to learn the differences between the two options and make a smart decision on which to base your cloud environment.
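If you want to check which of the two your environment is actually built on, the root device type of each AMI tells you. Here is a minimal sketch using boto3 (the current AWS SDK for Python, not what was available at the time); the AMI IDs are hypothetical placeholders:

```python
import boto3  # AWS SDK for Python (assumed available)

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical AMI IDs used purely for illustration
response = ec2.describe_images(ImageIds=["ami-11111111", "ami-22222222"])

for image in response["Images"]:
    # "ebs" -> the root volume persists on EBS; "instance-store" -> ephemeral root disk
    print(image["ImageId"], image["RootDeviceType"])
```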

> > > Education

Anyway, back to our conspiracy. In comparison to the last outage, right after this one new Amazon AWS experts were born who spouted the cloud giant's mantra about its building blocks: Amazon provides the tools and resources to create a robust environment. They proudly tweeted that their AWS-based services didn't fail. This proves that the April outage served Amazon well in terms of customer education, though some mega websites still failed again.

So, does Amazon examine whether its customers improved their deployments following last year's outage? Does the cloud giant continue to teach its customers using outage drills? Is that a conspiracy?

> > > Additional Revenues

The outage raised once again the discussion about distinct Availability Zones (AZs). Again, it seems that the impacted resources in a specific AZ affected the whole AWS US East region, generating API latency and inconsistencies (API errors varied from 500s to 503s to RequestLimitExceeded). High availability best practice includes backup, mirroring and distributing traffic between at least two availability zones. The fact that the whole region was affected, and hence the dependency between AZs, strengthens the need to maintain a cross-region or even cross-cloud disaster recovery (DR) practice.
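To make the multi-AZ best practice concrete, here is a minimal sketch, assuming boto3 and a Classic Load Balancer; the load balancer name and instance IDs are hypothetical:

```python
import boto3  # assumed AWS SDK for Python

elb = boto3.client("elb", region_name="us-east-1")

# Spread the load balancer across two Availability Zones
elb.create_load_balancer(
    LoadBalancerName="web-frontend",            # hypothetical name
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80, "InstancePort": 80}],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Register instances that live in different AZs behind it
elb.register_instances_with_load_balancer(
    LoadBalancerName="web-frontend",
    Instances=[{"InstanceId": "i-aaaa1111"},    # instance in us-east-1a
               {"InstanceId": "i-bbbb2222"}],   # instance in us-east-1b
)
```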

These DR practices require more computing resources and more data transfer (between AZs and regions), meaning significant additional costs which, conveniently, support the cloud giant's revenue growth. Is that a conspiracy?
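For the data-transfer part, one common (and billable) DR building block is copying EBS snapshots to a second region. A hedged sketch with boto3; the snapshot ID is a made-up placeholder:

```python
import boto3  # assumed AWS SDK for Python

# Run the copy from the *destination* region's client
ec2_west = boto3.client("ec2", region_name="us-west-2")

copy = ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot
    Description="Cross-region DR copy of the production data volume",
)
print("DR copy started:", copy["SnapshotId"])
```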

> > > Final words

The cloud giant is a leader and a guide to other IaaS as well as new PaaS players. Without a doubt – Amazon is the Cloud (for now anyway).

To clarify, I don't think that there is any conspiracy. This is part of the market's learning curve, for customers and vendors alike, and specifically for Amazon. Lots of online discussions and articles were published in the last few days explaining what happened and what AWS cloud customers should learn.

No doubt the cloud will fail again. I believe that although customers are ultimately responsible for the high availability of their services, the AWS cloud guys should also take a step back to learn and improve – every additional outage diminishes the cloud's reliability as a place for all.

(Cross-posted on CloudAve)

Amazon AWS is the Cloud (for now anyway)

Every day I talk, write and comment about the "Cloud". Every time I mention the cloud I try to make sure that I add the name of the relevant cloud operator: "Rackspace Cloud", "MS Cloud" (Azure) or "HP Cloud". Somehow none of these cloud titles sound right to me – it seems the only title that really works for me is the "Amazon Cloud". In this post, I will elaborate on the competition in the IaaS market and explain further why I think this is so.

HP vs. Amazon AWS

Earlier this month, HP announced the release of a public cloud offering based on OpenStack, in public beta. Zorowar Biri Singh, SVP and GM for HP Cloud Services, admitted that HP is really late to market, and he also added that:

HP believes that startups – particularly those that sell services to enterprises – will want to move off Amazon as they grow but won’t want to build their own data centers. Read more

Last year I attended the HP cloud tech day. It was amazing to see this giant fighting for its life on the IT battlefield. It is one thing to be able to promote a public cloud, but you also need to select your words carefully. Singh's statements aren't in line with a public cloud strategy; on the contrary, they highlight the fact that HP's state of mind is not ready for delivering a true public cloud. Establishing a public cloud is one thing, but leading with the right strategy is what counts – trivial, isn't it?

We’re not necessarily the first place a startup is going to look for in getting going. But I can assure you we’ve also got the type global footprint and an SLA and a business-grade point of view that understands the enterprise. That’s what we’re betting on.

I strongly suggest Mr. Singh be more careful. Specifically, these types of statements remind me of Kodak – they claimed to have a strong hold on the market, maintaining that as people shoot more digital photos they will eventually print more. In January this year the 131-year-old company filed for bankruptcy.

SAP on Amazon AWS

AWS and SAP Announced Certification of AWS Infrastructure for SAP Business All-in-One Solutions. Research Study Shows Infrastructure Cost Savings of up to 69% when Running SAP Solutions on AWS. Read More

Due to market demand forces, SAP was forced to find its way to the cloud. In 2007, SAP announced the launch of Business ByDesign, its on-demand (SaaS) initiative, with no success, while its customer base drifted to companies like Salesforce and NetSuite. This month SAP finally announced that it believes in the public cloud by making an interesting supportive move and partnering with the Cloud – Amazon AWS.

Customers now have the flexibility to deploy their SAP solutions and landscapes on the scalable, on-demand AWS platform without making required long-term commitments or costly capital expenditures for their underlying infrastructure. Learn more about the offering. Read More

This SAP certification strengthens AWS's position in the enterprise (for your attention, Mr. Singh). IMHO, SAP made a great decision to "go with the flow" rather than resist it.

Openstack vs. Eucalyptus for Amazon AWS

OpenStack was initiated by Rackspace and NASA in 2010. Today this open-source cloud project is supported by about 150 IT and hardware companies, such as Dell and HP, which trust the platform and are investing in building their public clouds with it.

It’s maybe two or three years before OpenStack will have matured to the point where it has enough features to be useful. The challenge that everyone else has is Amazon is not only bigger than them, it’s accelerating away from them.   –Netflix cloud architect Adrian Cockcroft

In March of this year, the Amazon guys declared their belief in the private and hybrid cloud by announcing a signed alliance with Eucalyptus, which delivers open-source software for building an AWS-compatible private cloud. In April, Eucalyptus announced its $30M series C funding. Together with SAP's joining of forces with Amazon, this accentuates the fact that Amazon AWS is very serious about conquering a share of the enterprise IT market (again, for your attention, Mr. Singh). This week I attended the IGT Cloud OpenStack 2012 summit in Tel Aviv. I was hoping to hear some news about the progress and improvement of this platform, and I found nothing that can harm the AWS princess for the next few years. OpenStack is mainly ready for vendors who want to run into the market with a really immature and naive cloud offering. I do believe that the giant vendors' "OpenStack Consortium" will be able to present an IaaS platform, but how much time will it take? Does the open cloud platform perception accelerate its development, or is it the other way around? Still, for now, Amazon is the only Cloud.

Microsoft and Google vs. Amazon AWS

This month Derrick Harris published his scoop on GigaOm – "Google, Microsoft both targeting Amazon with new clouds". I am not sure whether it is a real scoop. It is kind of obvious that both giants strive to find their place in Gartner's Magic Quadrant report:

IaaS by Gartner

With regards to Microsoft, the concept of locking in the customer is in the company's blood and has led the MSDN owner to present Azure with its "PaaS first" strategy. I had several discussions with MS Azure guys last year, requesting to check the "trivial" IaaS option of self-provisioning a cloud Windows instance. Already back then they said that it was on their roadmap and would soon be available.

This month AWS CTO Werner Vogels promoted the enablement of RDS services for MS SQL Server on his blog, noting:

You can run Amazon RDS for SQL Server under two different licensing models – “License Included” and Microsoft License Mobility. Under the License Included service model, you do not need to purchase SQL Server software licenses. “License Included” pricing starts at $0.035/hour and is inclusive of SQL Server software, hardware, and Amazon RDS management capabilities.

Is that good for Microsoft? It seems that Amazon AWS is the one to finally enable Microsoft platforms as a pay-per-use service that is also compatible with on-premise MS application deployments. One can say that by supporting this new AWS feature, Microsoft actually supports the natural evolution of AWS into a PaaS vendor, putting its own PaaS offering at risk.
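To make the "License Included" model concrete, here is a minimal sketch of provisioning such an RDS instance, assuming boto3; the identifier, instance class and credentials are hypothetical placeholders:

```python
import boto3  # assumed AWS SDK for Python

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="erp-sqlserver",      # hypothetical name
    Engine="sqlserver-se",                     # SQL Server Standard Edition
    LicenseModel="license-included",           # no separate SQL Server license needed
    DBInstanceClass="db.m5.large",             # hypothetical instance class
    AllocatedStorage=100,                      # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",     # placeholder credential
)
```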

IMHO, Google is a hope. The giant web vendor has the XaaS concept running in its blood, so I believe that once Google presents its IaaS offering it will be a great competitor to AWS and to the OpenStack players. Another great advantage of AWS over these guys, and others, is its proven economies of scale and pricing agility. Microsoft and Google will need to take a deep breath and invest vast amounts of money to compete with AWS – not only to build IaaS vendor experience but also to improve their pricing.

Final Words

I can go on and discuss Rackspace cloud (managed services…) or IBM smart (enterprise…) cloud. Each of these great clouds has its own degree of immaturity in comparison to the Cloud.

Last week I had a quick chat with Zohar Alon, CEO at Dome9, a cloud security start-up. The start-up has implemented its service across a respectable number of cloud operators.

I asked Mr. Alon to tell me, based on his experience, whether he agrees with me about the state of the IaaS market and the immaturity of the other cloud vendors in comparison to AWS cloud. He responded:

 The foresight to include Security Groups, the inbound little firewalls that protect your instances from most network threats, was a key product decision, early on by Amazon AWS. Even today, years after Security Groups launched, other cloud providers don’t offer a viable comparable.

The cloud changed the way we consume computation and networking so we can’t (and shouldn’t be able to) call our cloud provider and ask them to “install an old-school firewall in front of my cloud”. Amazon AWS was the first to realize that, and turned what looked like a limitation of the cloud, into an advantage and a competitive differentiator! At Dome9 we work with DevOps running hundreds of instances in a multitude of regions and offer them next generation control, management and automation for their AWS security, leveraging the AWS Security Groups API.
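For readers who haven't touched them, this is roughly what driving the Security Groups API looks like – a minimal sketch with boto3; the group name and ports are arbitrary examples, not anything Dome9-specific:

```python
import boto3  # assumed AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a small "firewall" for a web tier (hypothetical name and description)
group = ec2.create_security_group(
    GroupName="web-tier",
    Description="Allow HTTPS from anywhere, nothing else inbound",
)

# Open inbound TCP 443 only; all other inbound traffic stays blocked by default
ec2.authorize_security_group_ingress(
    GroupId=group["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```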

I am sure that this basic security capability must be delivered by the cloud operator itself. A cloud company is a new perception; it is not technical, it is strategic. Amazon follows its strategy with some basic cloud guidelines: continuous deployment, fast delivery, API first, a low level of lock-in, full visibility and honesty, and so on. When Amazon AWS started in 2006, people didn't understand what they were doing, though the company's leaders understood the business potential. Without a doubt, for now anyway, the Cloud is Amazon.

(Cross-posted on CloudAve Cloud & Business Strategy)

The Cloud Lock-In (Part 1): Public IaaS is Great!

It is always good to start with Wikipedia's definition, as it helps to initiate a structured discussion. Here is Wiki's definition of lock-in:

“In economics, vendor lock-in, also known as proprietary lock-in or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.” Read more on Wikipedia

Does the cloud present a major lock-in? Does the move create substantial switching costs?

"Yes!" is the common answer I hear to those questions. In this article I will debate it, basing my findings on real cloud adoption cases.

Generally, in terms of the cloud's lock-in, we face the same issues as in the traditional world, where a move includes re-implementation of the IT service. It involves issues such as data portability, user guidance and training, integration, etc.

“I think we’ve officially lost the war on defining the core attributes of cloud computing so that businesses and IT can make proper use of it. It’s now in the hands of marketing organizations and PR firms who, I’m sure, will take the concept on a rather wild ride over the next few years.”

I bring the statement above from David Linthicum's article "It's official: 'Cloud computing' is now meaningless". Since I fully agree with Linthicum on that matter, I will be precise and try to make a clear assessment of the cloud lock-in issue by relating to each of the three cloud layers (IaaS, PaaS and SaaS) separately.

In this part, I will relate to the lowest layer: IaaS lock-in.

It is a fact that IT organizations take advantage of IaaS platforms by moving part or even all of their physical resources to public clouds. Furthermore, ISVs move at least their test and development environments, and are making serious plans to move (or have already moved) part of their production environments to public clouds.

Read more about shifting legacy systems to the cloud by Ben Kepes

When discussing with public IaaS consumers, it always comes to the point where I ask, "do you feel locked in to your cloud vendor?" Most, if not all, of the companies' leaders claim that the public cloud's values (on-demand, elasticity, agility, etc.) overcome the lock-in impact, so they are willing to compromise. As a cloud enthusiast it is great for me to see the industry leaders' positive approach towards moving their businesses to the cloud (again too general – each of them refers to a different layer). I do not think that the lock-in is so serious.

For some time this claim sounded pretty reasonable to me, though on second thought I find that the discussion should start from a comparison with the traditional data center's "locks". Based on this comparison I can already state that one of the major public cloud advantages is its weak lock-in, simply because you don't buy hardware. Furthermore, companies that still use the public cloud as a hosting extension to their internal data center don't acquire new (long-term or temporary) assets that they can't get rid of without a major loss. In regards to its lock-in, the public cloud is great!

Another important point relates specifically to the Amazon AWS products that support SaaS scalability and operations. A smart SaaS architect will plan the cloud integration layer so that the application logic and workflow are strongly tied to the underlying IaaS capabilities, such as on-demand auto-provisioning of resources.

Read more about the relationship between web developers and the cloud

For example, the web application can use the cloud integration layer to get on-demand EC2 resources at the specific point when a complex calculation occurs. At a superficial glance, the fact that the cloud API is used as part of the application's run-time flow holds an enormous lock-in risk. I disagree, and let me explain why.

As a market leader, Amazon AWS will be (and already is) followed by other IaaS vendors. Those will solve the same scalability and operational issues with the same sense and logic as AWS. Basically this means an evolution of IaaS platform standards. A smart cloud integration layer will enable "plug & play" of a different IaaS platform, or even orchestrate several in parallel. To strengthen my point, I bring as an example several cloud start-ups (solving IaaS issues such as governance, usage and security) that developed their products to solve issues for Amazon AWS consumers and seriously target support of other IaaS vendors' platforms, such as the Rackspace cloud and vCloud. In regards to lock-in, the public cloud is great!
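Here is a minimal sketch of what such a "plug & play" integration layer could look like; the interface and the boto3-based AWS implementation below are purely illustrative assumptions, not any vendor's actual API, and the AMI and instance type are made up:

```python
import boto3  # assumed AWS SDK for Python


class ComputeProvider:
    """Thin abstraction the application codes against, instead of a vendor API."""

    def provision(self, count: int) -> list[str]:
        raise NotImplementedError


class AwsProvider(ComputeProvider):
    """AWS implementation; another class could wrap Rackspace, vCloud, etc."""

    def __init__(self, region: str = "us-east-1"):
        self.ec2 = boto3.client("ec2", region_name=region)

    def provision(self, count: int) -> list[str]:
        # Hypothetical AMI and instance type, used only for illustration
        resp = self.ec2.run_instances(
            ImageId="ami-11111111",
            InstanceType="m1.small",
            MinCount=count,
            MaxCount=count,
        )
        return [i["InstanceId"] for i in resp["Instances"]]


# The application asks the layer for burst capacity when the heavy calculation starts
provider: ComputeProvider = AwsProvider()
workers = provider.provision(count=2)
print("Provisioned workers:", workers)
```

Swapping the cloud then means swapping the provider class, not rewriting the application's workflow.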

The IaaS vendors in the market recognize the common lock-in drawback of moving to the cloud. Vendors such as Rackspace bring OpenStack, a cloud software platform upon which cloud vendors can build IaaS solutions. Rackspace shows it off on their blog:

OpenStack™ is a massively scalable cloud operating system, powering the world’s leading clouds. Backed by more than 50 participating organizations, OpenStack is quickly becoming the industry standard for public and private clouds. Read More

It should be noted that switching applications and data between clouds is still complex and in some cases not feasible. Still, believing in the public cloud's future comes with an understanding of its weak lock-in, and will lead to visionary, long-term strategic plans.

What about the private IaaS?

Following my ongoing research on the best cloud option (i.e. public, private or hybrid), I found that outsourcing the IT environment to a private or a hybrid cloud includes a major lock-in. Implementation of a private or a hybrid cloud includes lots of customization, hence a lack of standards. Private and hybrid clouds have their benefits, though weak lock-in is not one of them. A contract with the vendor for at least 3 to 5 years (a data center's typical depreciation period) on a non-standard environment leads to an extreme, long-term lock-in in terms of the "on-demand world".

In order to decrease lock-in, the IaaS consumer must prove the organization's need for a private cloud by planning strategically for the long term. Besides the ordinary due diligence to prove the vendor's strength, the contract must include termination points and creative ideas that can weaken the lock-in – for example, renewal of the initial contract conditioned on re-assessing the service standards, costs and terms against the cloud market, including the public one. The private cloud vendor must prove ongoing efficiency improvements and corresponding cost reductions.

In his article "Keep the 'Cloud' User in Charge", Mark Bohannon, VP at Red Hat, warns:

"…by vendors to lock in their customers to particular cloud architecture and non-portable solutions, and heavy reliance on proprietary APIs. Lock-in drives costs higher and undermines the savings that can be achieved through technical efficiency. If not carefully managed, we risk taking steps backwards, even going toward replicating the 1980s, where users were heavily tied technologically and financially into one IT framework and were stuck there."

Some of the private cloud offerings today have characteristics similar to the traditional data center; to me it seems that the former comes with stronger lock-in impacts. In case of an IT transition, companies who decide to go that way should expect considerable switching costs and a long-term recovery of their IT operations, and hence of their business.

The second part will discuss the cloud lock-in characteristics in regards to the SaaS and the PaaS layers.

Amazon Outage: Is it a story of a conspiracy?

Last week my Twitter feed blinked massively with news magazines and cloud bloggers reporting the extraordinary news: "Cloud computing crashed". Amazon AWS had suffered a major outage in its US East facility, the worst in cloud computing's and Amazon's history. The failure affected major sites such as Heroku, Reddit, Foursquare, Quora and many more well-known internet services hosted on EC2. From what I read, it seems that automated processes began replicating a large number of EBS volumes, which harmed EBS performance and availability across multiple availability zones in the Northern Virginia region.


“..However badly they’ve been affected, providers have sung Amazon’s praises in recognition of how much it’s helped them run a powerful infrastructure at lower cost and effort.” Seven lessons to learn from Amazon’s outage (ZDNet SaaS Blog)

It seems as if the cloud itself wanted to raise its head and show its power to everyone. Could that be an Amazon marketing drill? Following the lessons that were learned, and after a week of extensive web discussion, it seems that the cloud debaters and Amazon customers find themselves forgiving Amazon for its failure.

“It was the cloud’s shining moment, exposing the strength of cloud computing….if your systems failed in the Amazon cloud this week, it wasn’t Amazon’s fault. You either deemed an outage of this nature an acceptable risk or you failed to design for Amazon’s cloud computing model.” The AWS Outage: The Cloud’s Shining Moment (O’Reilly Media)

Cloud and IT professionals even claim that it wasn't Amazon's fault. They say that the customers should have expected the worst and made sure that their system architectures were aligned with this kind of failure. Bearing in mind that Amazon stood by its contract and the SLA was not violated makes me think that maybe the failure was planned just carefully enough to avoid putting the business at risk of lawsuits.

The expansion of the problem from one availability zone to others in the same region was not expected by Amazon customers, and this fact made the debate even stronger. Because of it, AWS customers will need to re-examine their architecture for disaster recovery, and will probably want to invest more, including using additional computing resources from their current cloud provider and even from other cloud providers. This will obviously cost more… Who do you think will pay for that? Maybe this failure was planned by the giants of the cloud? Is it the clouds' conspiracy?

I will summarize by saying that after reading so many articles, I think I found the answers to questions such as: Why did this happen? What can we learn? Is there a future for cloud computing? … I felt tired, so I decided to go with an amusing approach. Being honest, I don't believe that there is any cloud conspiracy, and specifically not Amazon's. I strongly believe that on-demand and cloud computing are the inevitable future. One of the most important benefits of cloud computing is the consolidation of applications and data, which will make the world truly global and hence, in my opinion, a better place.

5 Amazon AWS Application Deployment Tools

I asked: "I am looking for a tool that will give an ISV's customers the option to enable an environment by themselves, with a back office for the ISV's administrator to control the different customer accounts – for example, an e-learning environment that also includes rules, created by the ISV's administrator, on the total hours enabled for a single formation (cluster)."

Amazon AWS answered: “We have several partners and customers that built e-learning solutions on AWS, but they are all very specific to their product/internal needs. Your description may need to be developed accordingly. As AWS is fully API based, and you have SDKs available to every development environment including .Net,  most of your code will be quite generic – (similar to the way it will look on any other platform)  and will just use the specific API commands to activate the needed functions within AWS.”
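To give a feel for what that "generic code plus specific API commands" answer means in practice, here is a minimal, hypothetical sketch (boto3; the AMI, tag names and hour limit are made up) of spinning up a per-customer e-learning environment and tagging it so a back office can track it:

```python
import boto3  # assumed AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

def start_customer_environment(customer_id: str, max_hours: int) -> str:
    """Launch one e-learning instance for a customer and tag it for back-office tracking."""
    resp = ec2.run_instances(
        ImageId="ami-11111111",        # hypothetical e-learning AMI
        InstanceType="m1.small",       # hypothetical size
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # The back office can later query instances by these tags and enforce the hour limit
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[
            {"Key": "Customer", "Value": customer_id},
            {"Key": "MaxHours", "Value": str(max_hours)},
        ],
    )
    return instance_id

print(start_customer_environment("acme-school", max_hours=40))
```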

As this is not the ISV's core competency, Amazon's answer didn't really satisfy me, and I found myself wandering around looking for tools to easily deploy applications on the Amazon AWS cloud. I found about 10 tools, or a bit more, that can mostly monitor and maintain the cloud environment. Most of those applications (tools) didn't really support deploying complex or robust applications; the following are the 5 best ones I found and can suggest –

  1. RightScale – Maybe the most experienced vendor for managing AWS. RightScale Cloud Management provides capabilities to design, deploy and manage applications across multiple public or private clouds. RightScale has a strong relationship with Amazon AWS. On top of all of that, the ISV needs to engage its development team, and a professional services effort will be involved.
  2. CohesiveFT – The application provides an option to build custom application stacks for virtualized infrastructure; stacks that are loosely coupled, vertically aware, and comprised of multi-sourced components. I am not sure about this one, as it seems to be a fairly complex deployment tool. http://www.cohesiveft.com
  3. Scalr – Scalr monitors all your servers for crashes and replaces any that fail. To ensure you never lose any data, Scalr backs up your data at regular intervals and uses Amazon EBS for database storage. Scalr includes simple capabilities and a good UI for fast application deployment.
  4. Scalarium – The application supports LAMP applications, and from the demo it looks nice and simple. I am not sure about auto-scaling, though they declare that they support it. I suggest checking their tour.
  5. Enstratus – Seems to be the most useful "sign-up & play" cloud tool, giving you a "single pane of glass" that puts you in full control of your cloud platforms, including monitoring, auto-scaling, auto recovery, auto backup and SSL certificate maintenance. The system's UI is friendly compared to the other systems in this article. I suggest trying their free trial. http://enstratus.com/

Bonus application: Simplified – "The Cloud Security Company" delivers a very nice product that ISVs need to be familiar with. The application delivers identity and access management solutions. User provisioning in an extended enterprise-to-cloud architecture must provide identity synchronization. Simplified's products support enterprises with universal single sign-on that works across SaaS applications deployed on public and private clouds. Check their overview video to understand it better.

I find that this market is very young and lacks support for the ISV (as the major IaaS consumer) with basic capabilities that enable easy, fast and, more importantly, cost-effective provisioning of its applications.

More about this subject:

Cloud services beg for nimbler management

15 Recommended Cloud Computing Management Tools For Your Business