Senior Developers, Network Admins, Virtualization Architects, Security folks, and more – in the world of skilled IT labor there sometimes seem to be more common boxes we like to place people in than in most other fields. These boxes exist partly because the CFO/HR/recruiting folks need nicely written job descriptions to map resources, and maybe a little because of how people come into these positions – through the fires of a limited education system and bootstrap-yanking from the bottom. Regardless of why, we organize skilled people into buckets in much the same way everyone else in the world is either an accountant, attorney, marketing expert, business analyst, or any specialty therein.
Self-reflection is something I don’t do very often. I tend to focus on the next impossible goal and not look back. But these last few weeks I have spent a lot of my time looking back. I have been thinking about what a great run I have had so far here at EMC. If you had gone back in time and told me (relevant post: Fear and Atmosphere) that I was going to do some of the things I did in the last 1 ¾ years, I would have thought you were crazy. To be honest, when I first joined the vSpecialist team I was scared to death I would fail. I was so far outside my comfort zone going to work for a pre-sales organization with a major vendor that I didn’t have a really good idea what exactly my job would be, or whether I could do it. Fast forward to now, and I am sitting here typing on my Mac and thinking of all the impact that the vSpecialist organization has enabled for me. I have been able to move the ball with vAppliance and Virtual Storage hackery. I made a rap video extolling the tenacity of my group (which was watched by the CEO). I spoke at my first conference session (VMworld). I was able to help do things at both EMC World conferences that had never been done before (VPLEX demo/Labs). And I released a ton of free tools that helped enable my community. [...]
I am very prone to drifting off into thoughts about patterns in real life and how they correlate to things I deal with in my work life. I am fascinated by the thought of the constantly blurring line between ourselves and technology. It is really amazing to think about how social and mobile technologies have changed the way we work, communicate, and relax. I am just as guilty as the next guy of constantly tweeting during my vacation, contacting someone only through Facebook, or not having written an actual letter in over ten years. It is out of this day-dreaming that I often start thinking about current cloud designs and how I would change them. In my mind, both public and private cloud have several core demands that have been around for a while and are an essential part of expectations for any computing utility. A simple list of these would include being cost effective, performant, reliable, secure, and scalable. I could spend a large amount of time defining the rules about what makes a good “cloud”. But instead I will move forward with the assumption that a cloud service provides the same or better relative utility while being cost effective for the consumer. You can find a great many blogs and personalities out there that do a much better job of defining a robust cloud service offering. My thoughts are more focused on how that actually happens.
I had the privilege a little over a week ago of being a guest on The Cloudcast (.net) podcast, hosted by Aaron Delp and Brian Gracely. This was my first podcast appearance, and it was a real honor to be a part of it. I was interviewed about my past experience aligning Operations and Development processes and the DevOps movement in IT today (see my post on my experiences here). I was also asked about the trend in skillsets within the Infrastructure world. Brian and Aaron have really put together a nice format for the show; go here to listen to my episode and more. .nick
Check out that title. Pretty awesome way to sound smart, right? Well, this blog post is another one of my long-winded ones and concerns my recent 6-week side project. So a little warning in advance: this is a long read and a mind-bender in spots. Have a hot or cool drink and some time before you start. I think you will enjoy the ending. The Idea: I am a firm believer that virtualization and cloud computing are creating new paradigms for approaching innovation, operation, and execution within information technology. I find myself inspired by ideas and concepts that would have been impossible before the advent of virtualization as a common approach to logical abstraction of x86 compute, storage, and networking. In my feeble mind, I see endless possibilities not only in automation, but also in creating intelligent systems, able to respond in ways far more organic than we may have thought possible. It is from this belief that this new idea came to me. The lifecycle of applications and infrastructure has been a very manual and managed process. Creation, changes, and death (decommissioning) are all things that can be automated, but they require prerequisite knowledge to orchestrate correctly. You would need to know the specific quantity, scope, and configuration of physical or virtual servers prior to building for an application. Likewise, configured settings and metadata for the application would have been tested and discovered beforehand through intense integration and regression cycles by development/quality teams. All of this would be wrapped [...]
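The “prerequisite knowledge” described above – quantity, scope, and configuration known before anything is built – can be pictured as a declarative spec that a lifecycle orchestrator consumes for the creation and decommissioning phases. A minimal Python sketch; every name here is hypothetical and purely illustrative, not from any real tool:

```python
# Hypothetical sketch: the up-front knowledge an orchestrator needs,
# captured as a declarative spec that drives create/decommission.
from dataclasses import dataclass, field

@dataclass
class AppSpec:
    name: str
    vm_count: int          # quantity, known in advance
    vm_size: str           # scope/sizing, known in advance
    settings: dict = field(default_factory=dict)  # metadata proven by dev/QA cycles

def create(spec: AppSpec) -> list:
    """'Creation' phase: stand up the pre-planned servers (names only here)."""
    return [f"{spec.name}-vm{i}" for i in range(spec.vm_count)]

def decommission(vms: list) -> int:
    """'Death' phase: tear the servers back down, returning how many were retired."""
    return len(vms)

spec = AppSpec(name="web", vm_count=3, vm_size="medium", settings={"heap": "2g"})
vms = create(spec)          # → ["web-vm0", "web-vm1", "web-vm2"]
retired = decommission(vms) # → 3
```

The point of the sketch is the dependency, not the mechanics: nothing in `create` can run until a human has already filled in `AppSpec`, which is exactly the manual prerequisite the post is about.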
This post comes out of a slide deck I authored last week for a partner event. I decided I was going to try to illustrate why the VCE model really is such a different approach compared to other datacenter and private cloud models. Normally my blog is light on vendor-specific commentary. I see myself more as a virtualization geek who just happens to work for an awesome company (EMC) than a hardcore analyst/blogger. But I have seen so much messaging lately that distorts the VCE message that I really felt the need to offer my own perspective.