My 5 Enterprise Cloud Predictions for 2013

I believe that this is the year when the enterprise will find its way to the cloud.

The mega Internet sites and applications are the new-era enterprises, and they will become the role models for the traditional enterprise. IT needs remain the same with regard to scale, security, SLAs, etc. However, the traditional enterprise CIO has already set the goal for next year: 100% efficiency.

The traditional CIO understands that in order to achieve that goal, IT will need to start doing cloud, make sure that IT resources are utilized correctly, and make sure that its teams move fast.


ClickSoftware – Great Case of an AWS Cloud Adoption: Part 1, Operations

Over the last year I have had endless conversations with companies that strive to adopt the cloud – specifically the Amazon cloud. Of those I met, I can say that ClickSoftware is one of the leading traditional ISVs that managed to adopt the cloud. The Amazon cloud is without a doubt the most advanced cloud computing facility, leading the market. In my previous job I was involved in the ClickSoftware cloud initiative, from the decision making regarding the Amazon cloud all the way to taking the initial steps to educate and support the company’s different parties in providing an On-Demand SaaS offering.

ClickSoftware provides a comprehensive range of workforce management software solutions designed to help service organizations face the challenges of inefficiency head-on. Recognizing that maximizing the utilization of resources is the lifeblood of any service organization, ClickSoftware has developed a suite of solutions and services that reach the heart of the problem.


Amazon Cloud and the Enterprise – Is it a love story? (Free Infographic Included)

As befits any great online vendor, the Amazon cloud product guys listen carefully to their target markets and ensure fast implementation and delivery to satisfy their needs. It is clear that the Amazon cloud is eager to conquer the enterprise market, as I already mentioned in my past post, “Amazon AWS is the Cloud (for now anyway)”.

Cloud Reserved Capacity Card

Key buzzwords that I expect are being used in Amazon HQ halls are “adoption” and “migration”. In order for the AWS cloud to reel in the big enterprise fish, the cloud giant must go with the flow. This week the Amazon cloud announced “AWS Cost Allocation For Customer Bills”. In effect, Amazon announced that it believes in instance tagging – why, in the cloud, where a single instance doesn’t count, would you need a tag? The answer is simple: enterprise customers’ requests.
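For readers who want to try this, here is a minimal sketch of instance tagging for cost allocation using the classic boto library (the region, tag keys and instance IDs are my own illustrative placeholders; the cost allocation report itself is then enabled in the AWS billing console):

```python
import boto.ec2

# Credentials are read from the environment or ~/.boto.
conn = boto.ec2.connect_to_region('us-east-1')

# Hypothetical instance IDs; in practice you would look them up first.
instance_ids = ['i-12345678', 'i-87654321']

# Tag the instances by cost center and environment. Once the cost
# allocation report is enabled, these tags become billing dimensions.
conn.create_tags(instance_ids, {'CostCenter': 'marketing',
                                'Environment': 'production'})
```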

Adoption, TCO and ROI

In the past I had an interesting discussion with a cloud operations VP of a well-known traditional ISV (independent software vendor) about how, after their POC on AWS, they found that the costs were not feasible and they wanted to go back on-premises. The winds of rejection, such as “our servers are better” and “why pay so much when I could already buy these” (someone once called these IT guys the “server huggers”), are still there. Amazon understands that and strives to close the gap between its advanced “cloud understanding” and the traditional perception of the enterprise.

This week Amazon published an important white paper – The Total Cost of (Non) Ownership of Web Applications in the Cloud. Finding it important, the AWS marketing guys promoted it everywhere, from the blog of Werner Vogels (AWS’s famous CTO) all the way to TechCrunch. The PDF write-up was done by Jinesh Varia, one of the most respected technology evangelists at Amazon. The article presents three cases of online site utilization, starting from a “Steady State Website” to “Spiky but Predictable”, all the way to “Uncertain and Unpredictable”, and discusses the cost differences between running on-premises and on AWS. Without a doubt, AWS is much better, if only because of its on-demand elastic capacity. Besides being a great informative educational piece, the article serves as an important support guide for enterprise CIOs who wish to prove that AWS is worth the investment and that the ROI exists.
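To see why the elastic capacity argument works, consider a back-of-the-envelope calculation for the “Spiky but Predictable” case (the numbers below are my own illustration, not figures from the white paper):

```python
# Illustrative workload: 2 servers as a baseline, 10 servers
# during a 4-hour daily peak.
baseline, peak, peak_hours = 2, 10, 4

# On-premises you must provision for the peak around the clock.
on_prem_hours = peak * 24                                     # 240 server-hours/day

# In the cloud you pay for the baseline plus the burst only.
cloud_hours = baseline * 24 + (peak - baseline) * peak_hours  # 80 server-hours/day

print(on_prem_hours, cloud_hours)  # 240 vs 80: the spikier the load, the bigger the gap
```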

Reserved is the new Dedicated

Yesterday, Newvem, the cloud usage analytics company, published a cool infographic that reveals details behind AWS, including the types of customers and their cost-improvement opportunities. Check it out below (disclosure: I am the company’s cloud evangelist and community chief). It is no surprise that enterprise customers start small with AWS on-demand instances while suffering major costs. Many enterprise CIOs and DevOps teams that use AWS are confronted with the dilemma of whether or not to move their cloud off AWS to a private cloud, usually when their footprint has scaled to a high level and opportunities for cost savings from alternatives become more attractive. The only way to understand the exact balance point between on-demand and reserved capacity is by analyzing your past usage patterns – Newvem does exactly that and more.

It is all about your usage. For example, in order for a Costco membership purchase to make sense, you have to know how much you and your family will buy during the year (for example, how much cereal your children will eat). The same principle applies to Reserved Instances (at least for the light and medium utilization plans). AWS cloud customers are not buying the actual instance as a dedicated server; they pay upfront to get an ongoing discounted rate. In order for Reserved Instances to make sense, a consistent amount of usage over a 1-3 year period must be identified, as the sketch below shows. Even though it is not dedicated hardware, the Reserved Instance feature can help the AWS sales guy offer “dedicated capacity” to a potential enterprise CIO.
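A sketch of that break-even logic, with hypothetical prices standing in for the actual AWS rate card:

```python
# Hypothetical prices for a single instance type.
ON_DEMAND_HOURLY = 0.32    # $/hour, no commitment
RESERVED_UPFRONT = 1000.0  # one-time fee for a 1-year reservation
RESERVED_HOURLY = 0.12     # discounted $/hour once reserved

HOURS_PER_YEAR = 24 * 365

def yearly_costs(utilization):
    """Compare one year of on-demand vs reserved at a utilization of 0..1."""
    hours = HOURS_PER_YEAR * utilization
    on_demand = ON_DEMAND_HOURLY * hours
    reserved = RESERVED_UPFRONT + RESERVED_HOURLY * hours
    return on_demand, reserved

for u in (0.2, 0.5, 0.9):
    od, rs = yearly_costs(u)
    print('utilization %3.0f%%: on-demand $%6.0f vs reserved $%6.0f' % (u * 100, od, rs))
# Low, bursty usage favors on-demand; steady usage favors reserved --
# which is exactly why you must analyze your past patterns first.
```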

Last Words

I believe that Amazon already has a significant toehold inside the enterprise. The AWS cloud enables innovation and makes a great difference in how IT is consumed. Enterprise changes in perception take time, and AWS understands that. The cloud hype is everywhere, but at the end of the road, cloud elasticity just makes sense – not only for the small niche SaaS vendor but also, and maybe even more so, for the traditional enterprise. Indeed a love story!

————————————————————

Demystifying Amazon Web Services
by Newvem. Check out our data visualization blog.
(Cross-posted on CloudAve)

Amazon Outage: Is it a Story of a Conspiracy? – Chapter 2

In April 2011, when Amazon’s cloud US East region failed, I posted the first chapter of the Amazon Cloud Outage Conspiracy – it was already very clear that the cloud would fail again, and here it is… Chapter 2.

Let’s first try to understand Amazon’s explanation for this outage.

At approximately 8:44PM PDT, there was a cable fault in the high voltage Utility power distribution system. Two Utility substations that feed the impacted Availability Zone went offline, causing the entire Availability Zone to fail over to generator power. All EC2 instances and EBS volumes successfully transferred to back-up generator power.

Ok. So the AZ power failed over to generator power.

At 8:53PM PDT, one of the generators overheated and powered off because of a defective cooling fan. At this point, the EC2 instances and EBS volumes supported by this generator failed over to their secondary back-up power (which is provided by a completely separate power distribution circuit complete with additional generator capacity).

Ok. So the generator failed over to a separate power circuit.

Unfortunately, one of the breakers on this particular back-up power distribution circuit was incorrectly configured to open at too low a power threshold and opened when the load transferred to this circuit. After this circuit breaker opened at 8:57PM PDT, the affected instances and volumes were left without primary, back-up, or secondary back-up power.

Ok. So the power circuit was not configured right and the computing resources didn’t get enough power (or something like that).

> > > Did you get that?

Sounds like it might be something as simple as someone stumbling on a wire that led to all that. Anyway, Quora, Heroku, Dropbox and other sites failed again due to the cloud outage and were down for hours. The power outage resulted in downtime and inconsistent behavior of EC2 services, including instances, EBS volumes, RDS, and an unresponsive API.

After about 5 hours, Amazon announced that it had managed to recover most of the affected EBS (Elastic Block Store) volumes:

“Almost all affected EBS volumes have been brought back online. Customers should check the status of their volumes in the console. We are still seeing increased latencies and errors in registering instances with ELBs.”

Once Quora was back online, I opened the thread – What are the lessons learned from Amazon’s June 2012 us-east-1 outage? Among the great answers submitted, I want to point to one particularly interesting piece of feedback regarding the fragility of EBS volumes, which suggested working with an instance store instead of EBS-backed instances. The differences between the two include cost, availability and performance considerations. It is important to learn the differences between these two options and make a smart decision on which to base your cloud environment.
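For readers who want to see where that choice is actually made: the root device type is a property of the AMI you launch from, so the decision happens at image selection. A minimal sketch with the classic boto library (the region and filters are illustrative):

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# EBS-backed images: persistent root volume and stop/start support,
# but exposed to EBS failure modes like the ones in this outage.
ebs_images = conn.get_all_images(owners=['amazon'],
                                 filters={'root-device-type': 'ebs'})

# Instance-store images: an ephemeral root disk that disappears on
# termination, which forces (and rewards) a stateless design.
s3_images = conn.get_all_images(owners=['amazon'],
                                filters={'root-device-type': 'instance-store'})

print(len(ebs_images), 'EBS-backed vs', len(s3_images), 'instance-store AMIs')
```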

> > > Education

Anyway, back to our conspiracy. In comparison to the last outage, right after this outage new Amazon AWS experts were born who spouted the cloud giant’s mantra with regard to its building blocks: Amazon provides the tools and resources to create a robust environment. They proudly tweeted that their AWS-based services didn’t fail, which proves that the April outage served Amazon well in terms of customer education. Still, some mega websites failed again.

So, does Amazon examine whether its customers have improved their deployments following last year’s outage? Does the cloud giant continue to teach its customers using outage drills? Is that a conspiracy?

> > > Additional Revenues

The outage again raised the discussion regarding the distinctness of Availability Zones (AZs). Again it seems that the impacted resources in a specific AZ affected the whole AWS East region while generating API latency and inconsistencies (API errors varied from 500s to 503s to RequestLimitExceeded). High availability best practice includes backup, mirroring, and distributing traffic between at least two Availability Zones. The apparent region-wide impact, and hence the dependency between AZs, strengthens the need to maintain a cross-region or even cross-cloud disaster recovery (DR) practice.
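As a minimal illustration of that best practice, here is a boto sketch that spreads traffic across two Availability Zones behind an ELB (the names, zones and instance IDs are placeholders of mine):

```python
import boto.ec2.elb

elb = boto.ec2.elb.connect_to_region('us-east-1')

# A load balancer spanning two Availability Zones, so losing one
# zone's power does not take the whole service down.
lb = elb.create_load_balancer('web-lb',
                              zones=['us-east-1a', 'us-east-1b'],
                              listeners=[(80, 80, 'http')])

# Hypothetical instance IDs, at least one per zone.
lb.register_instances(['i-11111111', 'i-22222222'])

print(lb.dns_name)  # point your DNS CNAME at this endpoint
```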

These DR practices include more computing resources and data transfer (between AZs and regions), meaning significant additional costs which apparently support the cloud giant’s revenue growth. Is that a conspiracy?

> > > Final words

The cloud giant is a leader and a guide to other IaaS as well as new PaaS players. Without a doubt – Amazon is the Cloud (for now anyway).

To clarify: I don’t think that there is any conspiracy. This is part of the learning curve of the market, including the customers and the vendors, specifically Amazon. Lots of online discussions and articles were published in the last few days explaining what happened and what AWS cloud customers should learn.

No doubt the cloud will fail again. I believe that although customers are ultimately responsible for the high availability of their services, the AWS cloud guys should also take a step back to learn and improve – every additional outage diminishes the cloud’s reliability as a place for all.

(Cross-posted on CloudAve)

Amazon AWS is the Cloud (for now anyway)

Every day I talk, write and comment about the “Cloud”. Every time I mention the cloud I try to make sure that I add the name of the relevant cloud operator: “Rackspace Cloud”, “MS Cloud” (Azure) or “HP Cloud”. Somehow none of these cloud titles ring right to me – it seems the only title that really works for me is the “Amazon Cloud”. In this post, I will elaborate on the competition in the IaaS market and explain further why I think this is so.

HP vs. Amazon AWS

Earlier this month, HP announced the release of a public cloud offering based on OpenStack, in public beta. Zorawar Biri Singh, SVP and GM for HP Cloud Services, admitted that HP is really late to market, and he also added that:

HP believes that startups – particularly those that sell services to enterprises – will want to move off Amazon as they grow but won’t want to build their own data centers. Read more

Last year I attended the HP cloud tech day. It was amazing to see this giant fighting for its life in the IT field. It is one thing to be able to promote a public cloud, but you also need to select your words carefully. Singh’s statements aren’t in line with a public cloud strategy; on the contrary, they show that HP’s state of mind is not ready for delivering a true public cloud. Establishing a public cloud is one thing, but leading with the right strategy is what counts – trivial, isn’t it?

We’re not necessarily the first place a startup is going to look for in getting going. But I can assure you we’ve also got the type of global footprint and an SLA and a business-grade point of view that understands the enterprise. That’s what we’re betting on.

I strongly suggest Mr. Singh be more careful. Specifically, these types of statements remind me of Kodak – they claimed to have a strong hold on the market, maintaining that as people shot more digital photos they would eventually print more. In January of this year the 131-year-old company filed for bankruptcy.

SAP on Amazon AWS

AWS and SAP announced certification of AWS infrastructure for SAP Business All-in-One solutions. A research study shows infrastructure cost savings of up to 69% when running SAP solutions on AWS. Read More

Due to market demand forces, SAP was forced to find its way to the cloud. In 2007, SAP announced the launch of Business ByDesign, its On-Demand (SaaS) initiative, without success, while its customer base drifted to companies like Salesforce and NetSuite. This month SAP finally announced that it believes in the public cloud by making an interesting supportive move and partnering with the Cloud – Amazon AWS.

Customers now have the flexibility to deploy their SAP solutions and landscapes on the scalable, on-demand AWS platform without making long-term commitments or costly capital expenditures for their underlying infrastructure. Learn more about the offering. Read More

This SAP certification strengthens the AWS position in the enterprise (for your attention Mr. Singh). IMHO SAP made a great decision to “go with the flow” and not resist it.

Openstack vs. Eucalyptus for Amazon AWS

OpenStack was initiated by Rackspace and NASA in 2010. Today this open-source cloud project is supported by about 150 IT and hardware companies, such as Dell and HP, which trust the platform and are investing in building their public clouds with it.

It’s maybe two or three years before OpenStack will have matured to the point where it has enough features to be useful. The challenge that everyone else has is Amazon is not only bigger than them, it’s accelerating away from them.   –Netflix cloud architect Adrian Cockcroft

In March of this year, the Amazon guys demonstrated their belief in the private and hybrid cloud by announcing a signed alliance with Eucalyptus, which delivers open-source software for building an AWS-compatible private cloud. In April, Eucalyptus announced its $30M series C funding. Together with SAP’s joining of forces with Amazon, this accentuates the fact that Amazon AWS is very serious about conquering a share of the enterprise IT market (again… for your attention, Mr. Singh). This week I attended the IGTCloud OpenStack 2012 summit in Tel Aviv. I was hoping to hear some news about the progress and improvement of this platform, and I found nothing that can harm the AWS princess for the next few years. OpenStack is mainly ready for vendors who want to run into the market with a really immature and naive cloud offering. I do believe that the giant vendors’ “OpenStack Consortium” will be able to present an IaaS platform, but how much time will it take? Does the open cloud platform perception accelerate its development, or the other way around? Still, for now, Amazon is the only Cloud.

Microsoft and Google vs. Amazon AWS

This month Derrick Harris published his scoop on GigaOM – “Google, Microsoft both targeting Amazon with new clouds”. I am not sure whether it is a real scoop. It is kind of obvious that both giants strive to find their place in Gartner’s Magic Quadrant report:

IaaS by Gartner

With regard to Microsoft, the concept of locking in the customer is in the company’s blood and has led the MSDN owner to present Azure with its “PaaS first” strategy. I had several discussions with MS Azure guys last year, requesting to check the “trivial” IaaS option of self-provisioning a cloud Windows instance. Already back then they said that it was on their roadmap and would soon be available.

This month AWS CTO Werner Vogels promoted the enablement of RDS services for MS SQL Server on his blog, noting:

You can run Amazon RDS for SQL Server under two different licensing models – “License Included” and Microsoft License Mobility. Under the License Included service model, you do not need to purchase SQL Server software licenses. “License Included” pricing starts at $0.035/hour and is inclusive of SQL Server software, hardware, and Amazon RDS management capabilities.

Is that good for Microsoft? It seems that Amazon AWS is the one to finally enable Microsoft platforms as a pay-per-use service that is also compatible with on-premises MS application deployments. One can say that by supporting this new AWS feature, Microsoft actually supports the natural evolution of AWS into a PaaS vendor, putting its own PaaS offering at risk.

IMHO, Google is a hope. The giant web vendor has the XaaS concept running in its blood, so I believe that once Google presents its IaaS offering it will be a great competitor to AWS and to the OpenStack camp. Another great advantage of AWS over these guys, and others, is its proven “economies of scale” and pricing agility. Microsoft and Google will need to take a deep breath and invest vast amounts of money to compete with AWS – not only to build IaaS vendor experience but to improve upon their pricing.

Final Words

I could go on and discuss the Rackspace cloud (managed services…) or the IBM smart (enterprise…) cloud. Each of these great clouds has its own degree of immaturity in comparison to the Cloud.

Last week I had a quick chat with Zohar Alon, CEO at Dome9, a cloud security start-up that has implemented its service across a respectable number of cloud operators.

I asked Mr. Alon to tell me, based on his experience, whether he agrees with me about the state of the IaaS market and the immaturity of the other cloud vendors in comparison to AWS cloud. He responded:

 The foresight to include Security Groups, the inbound little firewalls that protect your instances from most network threats, was a key product decision, early on by Amazon AWS. Even today, years after Security Groups launched, other cloud providers don’t offer a viable comparable.

The cloud changed the way we consume computation and networking so we can’t (and shouldn’t be able to) call our cloud provider and ask them to “install an old-school firewall in front of my cloud”. Amazon AWS was the first to realize that, and turned what looked like a limitation of the cloud, into an advantage and a competitive differentiator! At Dome9 we work with DevOps running hundreds of instances in a multitude of regions and offer them next generation control, management and automation for their AWS security, leveraging the AWS Security Groups API.
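For readers unfamiliar with the feature Mr. Alon describes, here is a minimal boto sketch of Security Groups in action (the group name, rule and AMI ID are my own placeholders):

```python
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# A security group is a small stateful firewall wrapped around instances.
sg = conn.create_security_group('web-tier', 'Allow public HTTP only')

# Open port 80 to the world; everything else stays closed by default.
sg.authorize(ip_protocol='tcp', from_port=80, to_port=80,
             cidr_ip='0.0.0.0/0')

# Launch an instance straight into the group (AMI ID is a placeholder).
conn.run_instances('ami-12345678', instance_type='m1.small',
                   security_groups=['web-tier'])
```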

I am sure that this basic security capability must be delivered by the cloud operator itself. A cloud company is a new perception; it is not technical – it is strategic. Amazon follows its strategy with some basic cloud guidelines: continuous deployment, fast delivery, API first, a low level of lock-in, full visibility and honesty, and so on. When Amazon AWS started in 2006, people didn’t understand what the company was doing, though its leaders understood the business potential. Without a doubt, for now anyway, the Cloud is Amazon.

(Cross-posted on CloudAve Cloud & Business Strategy)

The Cloud Lock-In (Part 1): Public IaaS is Great!

It is always good to start with Wikipedia’s definition, as it helps to initiate a structured discussion. Here is Wiki’s definition of lock-in:

“In economics, vendor lock-in, also known as proprietary lock-in or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.” Read more on Wikipedia

Does the cloud present a major lock-in? Does the move create substantial switching costs?

“Yes!” is the common answer I hear to those questions. In this article I will debate it, basing my findings on real cloud adoption cases.

Generally, in terms of the cloud’s lock-in, we face the same issues as in the traditional world, where a move includes re-implementation of the IT service. It involves issues such as data portability, user guidance and training, integration, etc.

“I think we’ve officially lost the war on defining the core attributes of cloud computing so that businesses and IT can make proper use of it. It’s now in the hands of marketing organizations and PR firms who, I’m sure, will take the concept on a rather wild ride over the next few years.”

I bring the above statement from David Linthicum’s article “It’s official: ‘Cloud computing’ is now meaningless”. Since I fully agree with Linthicum on that matter, I will be precise and try to make a clear assessment of the cloud lock-in issue by relating to each of the three cloud layers (IaaS, PaaS and SaaS) separately.

In this part, I will relate to the lowest layer: IaaS lock-in.

It is a fact that IT organizations take advantage of IaaS platforms by moving part or even all of their physical resources to the public clouds. Furthermore, ISVs move at least their test and development environments, and are making serious plans to move (or have already moved) part of their production environments to the public clouds.

Read more about shifting legacy systems to the cloud by Ben Kepes

When discussing with public IaaS consumers, it always comes to the point where I ask: “Do you feel locked in to your cloud vendor?” Most, if not all, of the companies’ leaders claim that the public cloud’s values (on-demand, elasticity, agility, etc.) overcome the lock-in impact, so they are willing to compromise. As a cloud enthusiast, it is great for me to see the industry leaders’ positive approach towards moving their businesses to the cloud (though each of them refers to a different layer). I do not think that the lock-in is so serious.

For some time this claim sounded pretty reasonable to me, though on second thought I find that the discussion should start from a comparison with the traditional data center “locks”. Based on this comparison I can already state that one of the major public cloud advantages is its weak lock-in, simply because you don’t buy hardware. Furthermore, companies that still use the public cloud as a hosting extension to their internal data center don’t acquire new (long-term or temporary) assets that they can’t get rid of without taking a major loss. With regard to lock-in, the public cloud is great!

Another important point relates specifically to the Amazon AWS products that support SaaS scalability and operations. A smart SaaS architect will plan the cloud integration layer so that the application logic and workflow are strongly tied to the underlying IaaS capabilities, such as on-demand auto-provisioning of resources.

Read more about the relationship between web developers and the cloud

For example, the web application can use the cloud integration layer to get on-demand EC2 resources at the specific point when a complex calculation occurs. At a superficial glance, the fact that the cloud API is used as part of the application’s run-time script holds an enormous lock-in risk. I disagree, and let me explain why.

As the market leader, Amazon AWS will be (and already is) followed by other IaaS vendors. Those will solve the same scalability and operational issues with the same sense and logic as AWS. Basically this means an evolution of IaaS platform standards. A smart cloud integration layer will enable “plug & play” of a different IaaS platform, or even orchestrate several in parallel, as the sketch below illustrates. To strengthen my point, I bring as an example several cloud start-ups (solving IaaS issues such as governance, usage and security) that developed their products to solve issues for Amazon AWS consumers and seriously target support of other IaaS vendors’ platforms, such as the Rackspace cloud and vCloud. With regard to lock-in, the public cloud is great!
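Apache Libcloud is one concrete example of such an integration layer that exists today: the same provisioning code can target AWS or the Rackspace cloud just by swapping the driver (the credentials and IDs below are placeholders):

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def provision(provider, key, secret, image_id, size_id):
    """Create a node through whichever IaaS driver is passed in."""
    driver = get_driver(provider)(key, secret)
    image = [i for i in driver.list_images() if i.id == image_id][0]
    size = [s for s in driver.list_sizes() if s.id == size_id][0]
    return driver.create_node(name='worker-1', image=image, size=size)

# Same call, different clouds -- only the driver and the IDs change.
# node = provision(Provider.EC2, 'access', 'secret', 'ami-12345678', 'm1.small')
# node = provision(Provider.RACKSPACE, 'user', 'api-key', '112', '2')
```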

The IaaS vendors in the market recognize the common lock-in drawback of moving to the cloud. Vendors such as Rackspace bring OpenStack, a cloud software platform upon which cloud vendors can build IaaS solutions. Rackspace shows it off on their blog:

OpenStack™ is a massively scalable cloud operating system, powering the world’s leading clouds. Backed by more than 50 participating organizations, OpenStack is quickly becoming the industry standard for public and private clouds. Read More

It should be noted that switching applications and data between clouds is still complex and in some cases not feasible. Still, believing in the public cloud’s future comes with an understanding of its weak lock-in, and will lead to visionary, long-term strategic plans.

What about private IaaS?

Following my ongoing research on the best cloud option (i.e. public, private or hybrid), I found that outsourcing the IT environment to a private or a hybrid cloud involves a major lock-in. Implementation of a private or a hybrid cloud includes lots of customization, hence a lack of standards. Private and hybrid clouds have their benefits, though weak lock-in is not one of them. The contract with the vendor runs for 3 to 5 years at least (a data center’s typical depreciation period), on a non-standard environment, which leads to an extreme, long-term lock-in in terms of the “on-demand world”.

In order to decrease lock-in, the IaaS consumer must prove the organization’s need for a private cloud by planning strategically for the long term. Besides the ordinary due diligence to prove the vendor’s strength, the contract must include termination points and creative ideas that can weaken the lock-in. For example, renewal of the initial contract should involve re-assessment of the service standards, costs and terms in comparison with the cloud market, including the public one. The private cloud vendor must prove ongoing efficiency improvements and corresponding cost reductions.

In his article “Keep the ‘Cloud’ User in Charge”, Mark Bohannon, VP at Red Hat, warns:

“…by vendors to lock in their customers to particular cloud architecture and non-portable solutions, and heavy reliance on proprietary APIs. Lock-in drives costs higher and undermines the savings that can be achieved through technical efficiency. If not carefully managed, we risk taking steps backwards, even going toward replicating the 1980s, where users were heavily tied technologically and financially into one IT framework and were stuck there.”

Some of the private cloud offerings today have characteristics similar to the traditional data center; to me it seems that the former comes with even stronger lock-in impacts. In the case of an IT transition, companies who decide to go that way should expect considerable switching costs and a long-term recovery of their IT operations, hence of their business.

The second part will discuss the cloud lock-in characteristics of the SaaS and PaaS layers.

The PaaS Market: Overview, Definitions, Vendors and more

> > > > >   Market Overview and Definitions 

According to Gartner’s PaaS Road Map report, cloud-based solutions will grow at a faster rate than on-premises solutions. By 2015, 50% of all ISVs will be SaaS providers, and most enterprises will have a major part of their business applications running on cloud computing infrastructure, using PaaS and SaaS technologies directly or indirectly.

It is confusing to describe PaaS as one category, as different values are presented by the different ISVs who develop and deliver solutions at different layers. Gartner’s report categorizes the PaaS market into the following three layers:

  1. Application Platform as a Service (aPaaS) – provides a complete application platform that is used by the actual application’s components (those which support the business process) or by its APIs. Business-level power users and developers gain speed-to-market and the ability to focus on bringing their expertise to the business-process layer rather than having to build the whole application infrastructure.
  2. Software Infrastructure as a Service (SIaaS) – services that provide management for software parts such as an online cloud database, integration and messaging. This layer is similar to the previous one in that it provides the development tools to build an application in the cloud, but it is targeted at developers rather than business-level power users.
  3. Cloud-Enabled Application Platform (CEAP) – software middleware that supports the public and private cloud characteristics, including monitoring, complexity management, scaling and optimization.

There’s been a veritable explosion of platform-as-a-service choices coming onto the market in the past month or two, and the pace of introductions is accelerating.

During the next two years, today’s segmented PaaS offering market will begin to consolidate into coalitions of services targeting the prevailing use patterns for PaaS. Making use of such reintegrated, targeted suites will be a more attractive proposition than the burdensome traditional on-premises assembly of middleware capabilities in support of a project. By 2015, comprehensive PaaS suites will be designed to deliver a combination of all specialized forms of PaaS in one integrated offering.

> > > > >   PaaS Providers and Products —

There are several well-known PaaS providers, such as Google App Engine, Heroku, Microsoft Azure and of course Force.com, the most mature and rich PaaS for those who want to build a classic forms-and-database SaaS application in the “old” Salesforce.com fashion.

“We don’t spend any time talking about the acronyms,” Andy Jassy, senior vice president of AWS, told eWEEK. “All those lines will get blurred over time. It’s a construct to box people in and it fits some stack paradigm. We started with raw storage, raw compute, and raw database in SimpleDB. And we’ve added load balancing, a relational database, Hadoop and Elastic MapReduce, a management GUI… All those lines start to get blurred, and you can expect to see additional abstraction from us.” Read more on eWeek

SpringSource (by VMware) – Cloud Foundry, VMware’s PaaS offering, works with a variety of development frameworks and languages, application services and cloud deployment environments. It includes the Spring Framework, an enterprise Java programming model that VMware picked up in its August 2009 acquisition of SpringSource. The Spring Framework is in use by about 2 million developers worldwide as a lightweight programming environment that makes applications portable across open-source and commercial application server environments. Read more on crn.com

Caspio – a `cloudy` online database platform to support online software development. One of the best features of Caspio is its “embed” feature, which offers an embed code for a Caspio-based “datapage” much the same way that YouTube offers embed codes for its videos. Caspio handles blobs at the field level (in other words, there’s support for video, images, and other large binary objects) and supports SQL/API-based access to its databases. Caspio has a personal version that’s free but is limited to 2 datapages (essentially forms), and then starts at $40 per month for 10 datapages, 1 GB of data transfer and 1 GB of storage. There’s a corporate version that goes for $350 per month (more datapages, capacity, and “logins”) and several levels of subscription in between. See how Caspio works or read more about this vendor on informationweek.com

GigaSpaces – GigaSpaces’ core product, the GigaSpaces XAP, is an enterprise-grade, end-to-end in-memory application server for deploying and dynamically scaling distributed applications. If an ISV or any IT organization needs to boost workload performance and has business-critical Java and .NET applications that can be spread over a computational or data grid configuration, XAP can be a good option. GigaSpaces started as a firm that could manage a server’s local cache; it expanded to manage the combined cache of a cluster of servers, then figured out how to make that cache expandable by managing it as servers were added to the cluster. In its latest iteration, the GigaSpaces CEAP (Cloud Enablement Application Platform) makes application business logic elastic by managing its multiple moving parts in a shared memory system. The cloud-enabled platform allows “continuous scaling of application data and services. Think of Amazon style of SimpleDB scaling,” says Nati Shalom, CTO and founder of GigaSpaces. Check out Gigaspaces.com and read the recent news brought to you by InformationWeek.com

OrangeScape – OrangeScape is one of the 10 global companies featured in Gartner’s ‘PaaS competitive landscape’ report and has also been featured in all of Forrester’s PaaS reports. As an aPaaS provider, OrangeScape Studio offers a UI similar to a modern Excel application, so business users can design an application by capturing its various aspects declaratively in an XML-like format, which is then executed by the proprietary OrangeScape virtual machine. The core of the virtual machine is their main platform, which is essentially a rules engine that works on a complex networked data model. Read more on CloudAve

Cordys – an aPaaS vendor delivering MashApps. Cordys Process Factory (CPF) is a Web browser-based, integrated cloud environment for rapid cloud application development. Cordys Process Factory allows users to use and sell cloud applications, and also to subscribe to applications built by others in the Cloud Marketplace. All of this is achieved through visual modeling, without having to write code. Check out Cordys and read more on getApp.

There are other interesting PaaS providers, such as Joyent, MuleSoft, CloudBees, Appistry and more. I will release another post on those later this month, so you are welcome to stay tuned with `I Am OnDemand`.

> > > > >   Choose Your PaaS Providers

A traditional ISV converting into a pure SaaS vendor should carefully plan its application deployment strategy. By learning the PaaS market and selecting the relevant vendors in it, the traditional ISV will achieve a faster go-to-market and eventually a smoother conversion. Along with those benefits, considering a PaaS provider will make the smart ISV’s CTO realize the strong lock-in to whichever PaaS provider is chosen. This will make the CTO nervous, since lock-in in the On-Demand market is without a doubt more aggressive.

Check these important criteria to consider when evaluating a PaaS vendor.

Learn more about PaaS vendor lock-in

To summarize, there is no doubt that PaaS has an important part in the adoption of cloud computing by ISVs and IT organizations. The PaaS players are technology-rich companies, the market definitions and roles are not completely clear, and it seems that PaaS is evolving more slowly than the other two layers (i.e. IaaS and SaaS). As in every evolving new market, you can expect a wave of innovation and of hype, as there are today new business opportunities for startup companies, the leading software vendors and the IaaS giants.

Do you still lack knowledge of the basic market definitions? Check the I Am OnDemand Terminology Page.