Before anything else, preparation is the key to success.
– Alexander Graham Bell
Those of us who choose to live in the IT world are bombarded with buzzwords. We strive to keep up with the barrage of vendor speak, the newest academic research, and the latest techniques to make our systems, developers, and processes more efficient. Fortunately, what hasn’t changed in the few decades that I have been an IT practitioner is the idea of an Application. Applications run on our systems, serve our users, and sometimes talk to one another. Sure, the bits change, as do the access methods, system calls, and operating systems. But the notion of an Application is constant, and the management of an Application by IT throughout its life is pretty well understood. The nomenclature may change, but in short it looks like this: design, deploy, support, optimize, assess, and then the cycle begins again.
In this article, I’ll briefly discuss these stages and propose a solution for maintaining operational sanity as an Application moves through the phases. As my personal background is more on the Operations front, I’ll include some thoughts from that viewpoint.
In the design phase, we consider user needs, “build vs. buy”, development, customization, acceptance testing and QA. We inherit Applications, develop them, and make sure that they align with existing IT and Security policies. Operationally, we have requirements documents and make decisions around programming languages and libraries (if this is a build) or configurations (if this is off the shelf). We might be adding a feature that was specified in another stage, or applying a patch that the vendor has suggested.
Once designed, that Application is deployed. IT makes decisions regarding operating environments and user access. Will this run on Windows or Linux — or something entirely different? Are we using physical or virtual hardware? Will this live in our datacenter, or up in a cloud? What changes need to be made to security policy and network design? Deployment may include processes related to quality assurance and user acceptance. From an operations perspective, we may be looking to see if the application runs the same as it did on the developer’s hardware. We may be constructing “recipes” and run books so that installation rollouts are repeatable.
Now that the Application has been deployed, the users enter the equation. We need to support the Application to make sure it remains accessible and stable. We incorporate terms such as uptime, training, and maintenance. Necessary emphasis is placed on service levels and performance commitments. And of course there is troubleshooting, problem identification, and repair. This is typically where fire drills occur, heroes are made, and error screens are lampooned. Operationally, we talk about the end user experience, mean time to repair (MTTR), and service outages.
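As a back-of-the-envelope illustration of what those service-level conversations rest on, the sketch below computes availability and mean time to repair for one month. The outage figures are hypothetical, invented purely for the example, and not drawn from any real system.

```python
# Hypothetical outage durations (in minutes) observed over a 30-day month.
outage_minutes = [12, 45, 8]

period_minutes = 30 * 24 * 60           # total minutes in the period
downtime = sum(outage_minutes)          # total minutes of downtime

# Availability: fraction of the period the Application was up.
availability = 1 - downtime / period_minutes

# MTTR: average time to restore service, per incident.
mttr = downtime / len(outage_minutes)

print(f"Availability: {availability:.4%}")   # 99.8495%
print(f"MTTR: {mttr:.1f} minutes")           # 21.7 minutes
```

Even 65 minutes of downtime in a month keeps availability just under “three nines” (99.9%), which is why committed service levels need to be negotiated against realistic repair times.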
More mature organizations will advance Applications into an optimization phase, usually as a result of issues raised in the support stage. Is the Application running as efficiently as it could? Is it fast enough? Are the components rightsized? Are we concerned with scale, an influx of users, or holiday traffic? Are there processes in the support or deployment phases that could be automated or improved?
The cycle continues with an assessment phase. This incorporates the users’ needs for additional Application features, the business’s need for more visibility, and potentially a response to a compliance or regulatory audit. What changes do we need to make to facilitate these concerns? What (and how) will we communicate to our vendors, our developers, our infrastructure providers? Has this Application provided the desired return on investment (ROI)? From an operations perspective, we consider the overall cost, whether committed service levels were met, and a bigger picture of operational risk. The outcome from this phase typically leads us into the design phase, and the cycle starts again.
The constant throughout this cycle is the Application, and I submit that understanding the behaviour of the Application (past, present, and future) is the key to maintaining operational sanity. What I am describing is much more than monitoring. It’s gaining keen insight into both the footprint of the application in terms of resources (memory, CPU utilization) and interactions (network sockets, file usage, system calls). It’s the ability to continually capture all of the details of this footprint and to make this data actionable.
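To make the idea of a footprint concrete, here is a minimal, standard-library-only sketch of a one-shot snapshot for the current process. The function name and fields are my own invention, not AppFirst’s API; the fd count is Linux-specific, and a real collector would capture far more (sockets, system calls) and do so continuously rather than on demand.

```python
import os
import resource
import time

def capture_footprint():
    """One-shot resource snapshot of the current process (Linux, stdlib only)."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "timestamp": time.time(),
        "pid": os.getpid(),
        "max_rss_kb": usage.ru_maxrss,     # peak resident memory (KiB on Linux)
        "user_cpu_s": usage.ru_utime,      # CPU time spent in user mode
        "system_cpu_s": usage.ru_stime,    # CPU time spent in kernel mode
        # Linux-only: count open file descriptors via /proc.
        "open_fds": len(os.listdir("/proc/self/fd")),
    }

if __name__ == "__main__":
    print(capture_footprint())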
AppFirst uniquely provides a method of collecting Application footprint data and normalizing it for further analysis, reporting and alerting. Stay tuned for additional blog posts that will describe how AppFirst customers are using this data to make sense of each of the Application stages.
Any comments on the definition of these stages? Want to discuss how knowing your Application footprint can assist in IT management and operational sanity? Please contact an AppFirst team member at email@example.com.