I am very prone to drifting off into thoughts about patterns in real life and how they correlate with things I deal with in my work life. I am fascinated by the constantly blurring line between ourselves and technology. It is really amazing to think about how social and mobile technologies have changed the way we work, communicate, and relax. I am just as guilty as the next guy of constantly tweeting during my vacation, contacting someone only through Facebook, or going more than ten years without writing an actual letter.

It is out of this day-dreaming that I often start thinking about current cloud designs and how I would change them. In my mind, both public and private clouds have several core demands that have been around for a while and are an essential part of the expectations for any computing utility: being cost effective, performant, reliable, secure, and scalable. I could spend a great deal of time defining the rules for what makes a good “cloud,” but instead I will move forward with the assumption that a cloud service provides the same or better relative utility while being cost effective for the consumer. You can find a great many blogs and personalities out there that do a much better job of defining a robust cloud service offering. My thoughts are more focused on how that actually happens.
This post comes out of a slide deck I authored last week for a partner event. I decided I was going to try to illustrate why the VCE model really is such a different approach from other datacenter and private cloud models. Normally my blog is light on vendor-specific commentary; I see myself more as a virtualization geek who just happens to work for an awesome company (EMC) than a hardcore analyst/blogger. But I have seen so much messaging lately that distorts the VCE message that I really felt the need to offer my own perspective.
At my current employer we use a custom-built ETL process for building business reporting and analysis data. Originally this started on a medium-sized Dell server with a full rack of local storage, but as the criticality and scale of this resource grew, it outgrew the hardware it was on. The key constraint was that the build process ran overnight and the server was accessed by multiple departments throughout the day, which left very little time for hardware maintenance. I had helped move all development environment servers to a VMware cluster months before, and using this momentum I pitched the idea of solving the criticality and scalability problems with a VMware-based solution. The argument was four-fold:

1. The company wanted to avoid the licensing and hardware expense of moving to a Microsoft clustering solution.
2. VMware HA provided resumption of services in the case of a hardware failure, and hardware maintenance would not require downtime.
3. The RTO was satisfied by an automatic HA failover.
4. The additional cost of VMware licenses and new hosts would be spread over future planned provisioning and would actually reduce costs by introducing consolidation.

After playing the part of VMware sales rep, I was able to get endorsements from the CTO, Data Services, and Executive groups. This would be the first time we would attempt to put a business-critical service on a VMware platform. I was the only individual in IT at the time who had any exposure to VMware, and needless to say my reputation and job [...]
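Since the whole pitch hinged on vSphere HA restarting the workload automatically after a host failure, here is a minimal sketch of what enabling HA on a cluster looks like programmatically, using pyVmomi. The vCenter hostname, credentials, and the cluster name "ETL-Cluster" are hypothetical placeholders, not details from our environment.

```python
# Minimal pyVmomi sketch: enable vSphere HA (DAS) on a cluster.
# Hostname, credentials, and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def find_cluster(content, name):
    """Walk the inventory for a ClusterComputeResource by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        for cluster in view.view:
            if cluster.name == name:
                return cluster
    finally:
        view.Destroy()
    return None


def enable_ha(cluster):
    """Reconfigure the cluster so VMs restart automatically after a host failure."""
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True))
    # modify=True applies the spec as an incremental change.
    return cluster.ReconfigureComputeResource_Task(spec, True)


if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        cluster = find_cluster(si.RetrieveContent(), "ETL-Cluster")
        if cluster is not None:
            enable_ha(cluster)
    finally:
        Disconnect(si)
```

In practice we flipped the same switch through the vSphere client, but the point stands either way: a single cluster setting buys an automatic restart on another host, which is what satisfied the RTO without a Microsoft clustering license.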
InfraScrum – Agile Methodology Applied to Infrastructure Operations