The IT Utility Model—Part I
Providing mission-critical IT resources for a growing environment requires a significant increase in compute resource utilization, and that increase is typically constrained by capital expenditure. By implementing a well-developed utility model, you can decrease total IT costs and optimize the capital investment in IT resources.
This article is Part I of a two-part series that describes the current business requirements for a utility model, and discusses the current commercial and political issues faced when implementing one. Both financial and technical aspects are covered, from detailing what a utility model is and why it is needed, to describing the mechanism required for capturing compute resource consumption to accurately bill customers.
This article addresses the following topics:
- "Why is a Utility Model Required?"
- "What is a Utility Model?"
- "Maximizing IT Resource Utilization"
- "Integrating the Utility Model Technology"
- "Chargeback Utility Model"
- "Service Provider Utility Model"
- "Capacity and Service Level Management"
- "Software for Implementing a Utility Model"
The intended audience for this article is IT architects, finance staff, and executive officers.
Why is a Utility Model Required?
The business requirements for a utility model are to:
- Decrease the total cost of IT capital expenditure
- Maximize resource utilization
- Minimize resource waste
- Shift spending from capital expenditure to current-expenditure utility charges
- Improve the accuracy with which resource costs are accounted against business units
This article is written from the perspective of running a utility model within either a service provider or a data center, so certain assumptions are made about the required technology. However, any alternative technology that meets the business and functional requirements of the solutions defined in this article can be integrated into the environment.
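To make the consumption-capture mechanism mentioned above concrete, the following sketch meters compute consumption per business unit and translates it into a chargeback invoice. All names, the CPU-hour metric, and the billing rate are hypothetical assumptions for illustration, not part of any specific product described in this series.

```python
from collections import defaultdict

# Hypothetical utility rate; a real deployment would price each resource class.
RATE_PER_CPU_HOUR = 0.12


class UsageMeter:
    """Accumulates compute consumption per business unit for chargeback."""

    def __init__(self):
        self.cpu_hours = defaultdict(float)

    def record(self, business_unit, cpu_hours):
        # Each usage sample is attributed to the unit that incurred it,
        # which is what enables accurate cost accountability.
        self.cpu_hours[business_unit] += cpu_hours

    def invoice(self):
        # Translate metered consumption into a per-unit charge.
        return {unit: round(hours * RATE_PER_CPU_HOUR, 2)
                for unit, hours in self.cpu_hours.items()}


meter = UsageMeter()
meter.record("finance", 120.0)
meter.record("engineering", 300.0)
meter.record("finance", 30.0)
print(meter.invoice())  # e.g. {'finance': 18.0, 'engineering': 36.0}
```

In practice the `record()` calls would be driven by accounting data collected from the systems themselves; Part II of this series discusses software that performs this capture.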