Desktop Computer
BRIEF OVERVIEW
Before the emergence of the commercial Internet in the 1990s, companies accomplished much of what is now achieved through public internetworks by using proprietary technologies installed and managed inside each firm. This approach was expensive and unsatisfactory.
To reach business partners and customers, every company had to develop its own communication infrastructure, a process that led to massive duplication in infrastructure investment. Often the multiplicity of technologies confused and confounded the partners and customers businesses wanted to reach.
The technologies did not interoperate well. Many companies maintained complex software programs that had no purpose except to serve as a bridge between other incompatible systems.
Reliance on proprietary technologies meant that companies were locked in to specific vendor technologies. Once locked in, firms had little bargaining power and were at the mercy of the margin-maximizing inclinations of their technology providers.
The new approaches compare favorably with, and in many cases improve on, the previous approaches in numerous ways. For example:
* Companies can share a communication infrastructure common to all business partners and customers. This seamless interaction dramatically reduces complexity and confusion.
* With the help of the open Transmission Control Protocol/Internet Protocol (TCP/IP) standard, communication technologies interoperate very well. Software that bridges systems is simple, standardized, and inexpensive.
* Companies are much less locked in to specific vendor technologies, a fact that creates more competition among vendors. More competition leads to lower prices and better-performing technology.
Companies can combine technologies from numerous vendors and expect them to interconnect seamlessly.
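The interoperability point above can be made concrete with a minimal sketch: any two endpoints that speak TCP/IP can exchange data regardless of who built them. The echo server and client below use only Python's standard socket library; the port selection and payload are illustrative, not drawn from the text.

```python
import socket
import threading

def serve_once(srv):
    """Accept one connection on an already-listening socket and echo
    whatever bytes arrive back to the sender."""
    conn, _ = srv.accept()
    data = conn.recv(1024)
    conn.sendall(data)  # echo the request back unchanged
    conn.close()
    srv.close()

def echo_once(port, payload):
    """Connect to the echo service over TCP/IP and return its reply."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(payload)
        return cli.recv(1024)

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve_once, args=(srv,))
    t.start()
    print(echo_once(port, b"hello over TCP/IP"))
    t.join()
```

Because both sides follow the same open standard, either endpoint could be replaced by software from a different vendor (or written in a different language) without changing the other.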
NEW SERVICE MODELS
Since the emergence of the PC and client-server computing, end-user software has been designed to execute on PCs or on locally housed servers. Saved work usually remains on a PC's hard drive or on storage devices connected to a nearby server or mainframe. In this scenario, when the software malfunctions, the user contacts his or her IT department, which owns and operates most of the IT infrastructure. Under the new service models, by contrast, software is designed to operate in geographically distant facilities that belong to specialized service providers, each of which delivers software services across the Internet to many different customers.
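The contrast can be sketched in code: under the service-provider model, the software runs at the provider's facility and the customer's machine only sends requests across the network. The toy "hosted service" below (an invented spell-check lookup; the service name and word list are illustrative) uses only Python's standard library.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SpellCheckHandler(BaseHTTPRequestHandler):
    """Hypothetical hosted service: the application logic lives at the
    provider's facility, not on the customer's PC."""
    WORDS = {"infrastructure", "outsourcing", "provider"}

    def do_GET(self):
        word = self.path.lstrip("/")
        body = json.dumps({"word": word, "known": word in self.WORDS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def start_service():
    """Start the provider's service on a free local port."""
    server = HTTPServer(("127.0.0.1", 0), SpellCheckHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def ask_service(port, word):
    """Thin client: nothing installed locally beyond an HTTP library."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/{word}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = start_service()
    print(ask_service(server.server_address[1], "outsourcing"))
    server.shutdown()
```

When the service malfunctions, the fix happens at the provider's end; the many customers calling it never touch the installation.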
THIRD PARTY SERVICES
The benefits of incremental outsourcing include the following:
* Managing the shortage of skilled IT workers
* Reduced time to market
* The shift to 24 x 7 operations
* Favorable cash flow profiles
* Cost reduction in IT service chains (no need to maintain hardware; lower cost thanks to vendors' economies of scale)
* Making applications globally accessible
MANAGING RISK THROUGH INCREMENTAL OUTSOURCING
Incremental outsourcing offers new and attractive choices to managers seeking to improve IT infrastructure. In the past, managers often felt they faced two equally unpleasant choices:
* Do nothing and risk slipping behind competitors
* Wholesale replacement of major components of computing infrastructure, which risks huge cost overruns and potential business disruptions as consequences of an implementation failure
Decisions to replace wholesale legacy networks with TCP/IP-based networks have run this second risk, as have decisions about whether to implement enterprise systems.
With the TCP/IP networks installed today, managers have intermediate options that lie between all-or-nothing choices.
INTERNAL VS. EXTERNAL OUTSOURCING
IT services that are unique to a company and provide it with significant advantages over competitors tend not to be outsourced. The only exception is when a company cannot develop a vital capability internally and so relies on outsourcing to acquire it.
The Hosting Service Provider Industry
Proponents of service provider-based infrastructures describe a world in which companies routinely obtain a majority of the IT functionality needed for day-to-day business from over-the-Net service chains.
Incremental Service Levels in Hosting
Hosting models can be categorized along service level lines as:
* Co-location hosting
* Shared hosting
* Dedicated hosting (Simple, Complex, Custom)
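The service levels above differ mainly in how responsibility is divided between provider and customer. The mapping below is an illustrative sketch of that division (the specific responsibility lists are assumptions, not taken from the text) expressed as a small Python data structure.

```python
# Illustrative (not authoritative) division of responsibility by hosting tier.
HOSTING_TIERS = {
    "co-location": {
        "provider": ["floor space", "power", "physical security", "bandwidth"],
        "customer": ["hardware", "operating system", "applications"],
    },
    "shared": {
        "provider": ["shared hardware", "operating system", "bandwidth"],
        "customer": ["applications"],
    },
    "dedicated": {
        "provider": ["dedicated hardware", "operating system", "monitoring"],
        "customer": ["applications (scope varies: simple/complex/custom)"],
    },
}

def provider_scope(tier):
    """Return what the provider manages at a given service level."""
    return HOSTING_TIERS[tier]["provider"]
```

Moving down the list, the provider takes on more of the stack: in co-location the customer still owns the machines, while in dedicated hosting the provider manages everything below the application.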
MANAGING RELATIONSHIPS WITH SERVICE PROVIDERS
When they acquire IT services externally, companies inevitably find themselves engaged in relationships with a growing number of service providers.
Choosing reliable service providers and managing strong vendor relationships are critical skills for an IT manager.
Selecting Service Provider Partners
The most critical step in assembling an IT service chain is the selection of providers. The most common process for selecting service providers involves writing a request for proposal (RFP) and submitting it to a set of apparently qualified vendors. Typically, RFPs request information in the following categories:
Descriptive information
How it describes its business reveals much about a service provider's priorities and future direction.
Financial information
A service provider's financial strength is a critical factor in evaluating the continuity of service and service quality a vendor is likely to provide.
Proposed plan for meeting service requirements
How the provider offers to meet the requirements laid out in the RFP indicates whether it truly understands the requirements.
Mitigation of critical risks
A good RFP asks specific questions about potential service risks. Availability and security are two areas where customers should be sure they understand a service provider's approach.
Service guarantees
A service provider's guarantees (levels of performance it is willing to back with penalty clauses in a contract) are important signals of the real level of confidence vendor managers have in their services.
Pricing
Pricing usually includes one-time and variable components and may be structured in other ways as well.
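Once responses come back, the six categories above give a natural structure for comparing vendors. The sketch below scores each RFP response per category and ranks vendors by a weighted total; the weights and the 0-10 scale are hypothetical choices an evaluation team would set for itself.

```python
from dataclasses import dataclass

# Hypothetical weights summing to 1.0; a real team would set its own.
CATEGORY_WEIGHTS = {
    "descriptive": 0.10,
    "financial": 0.20,
    "service_plan": 0.25,
    "risk_mitigation": 0.20,
    "guarantees": 0.15,
    "pricing": 0.10,
}

@dataclass
class RfpResponse:
    vendor: str
    scores: dict  # category name -> score on a 0-10 scale

    def weighted_score(self):
        """Combine per-category scores using the evaluation weights."""
        return sum(CATEGORY_WEIGHTS[c] * s for c, s in self.scores.items())

def rank_vendors(responses):
    """Order RFP responses from strongest to weakest weighted score."""
    return sorted(responses, key=lambda r: r.weighted_score(), reverse=True)
```

Weighting service plan and risk mitigation most heavily reflects the text's emphasis that how a vendor proposes to meet requirements, and how it handles availability and security, reveal whether it truly understands the engagement.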
RELATIONSHIP MANAGEMENT
Relationships with service provider partners require ongoing attention. Processes must be in place so that partners can share information and problems in the service chain can be solved quickly. The most formidable obstacles are sometimes not technical but political. A service-level agreement (SLA) is the prevalent contractual tool used to align incentives in relationships with service providers. It describes the specific conditions under which the service provider is held liable for a service interruption and the penalties it must pay when one occurs.
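A penalty clause of the kind an SLA describes can be sketched as a small calculation. The structure below (credits per 0.1 percentage point of availability shortfall, capped at the monthly fee) is a hypothetical clause for illustration, not a standard formula.

```python
def sla_penalty(guaranteed_uptime, measured_uptime, monthly_fee,
                penalty_rate_per_tenth=0.05):
    """Hypothetical SLA penalty clause: for every 0.1 percentage point of
    availability below the guaranteed level, the provider credits the
    customer 5% of the monthly fee, capped at the full fee.

    Uptimes are percentages, e.g. 99.9 for "three nines"."""
    shortfall = max(0.0, guaranteed_uptime - measured_uptime)
    tenths = shortfall / 0.1
    credit = tenths * penalty_rate_per_tenth * monthly_fee
    return round(min(credit, monthly_fee), 2)

if __name__ == "__main__":
    # Provider guaranteed 99.9% but delivered 99.5% on a $10,000/month contract.
    print(sla_penalty(99.9, 99.5, 10_000))
```

Guarantees backed by clauses like this signal real confidence: a provider unwilling to put money behind an availability number probably does not expect to hit it.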
MANAGING LEGACY SYSTEMS
Legacy systems are old systems that organizations decide to continue using, either because the improved features of a new system would not justify the investment or because the old systems have some advantages that cannot be obtained from newer systems.
Legacy systems pose the following challenges:
* Internetworking and compatibility of both hardware and software
* Different data definitions
* Complexity compounded by the drive toward enterprise application integration
* Need for dedicated and expensive maintenance
The difficulties that arise from legacy systems can be categorized as:
Technology problems
Sometimes the constraints embedded in legacy systems result from inherent incompatibilities in older technologies.
Residual process complexity
Some difficulties with legacy systems arise because the systems address problems that no longer exist.
Local adaptation
Many legacy systems were developed for very focused business purposes within functional hierarchies.
Nonstandard data definitions
Throughout most companies, business units and divisions have used different conventions for important data elements.
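Bridging such nonstandard definitions is a typical integration chore. The sketch below assumes two invented divisional conventions (different ID formats and date formats) and normalizes both into one canonical record; the field names and formats are illustrative.

```python
from datetime import datetime

def normalize_division_a(record):
    """Division A (hypothetical): IDs stored with stray whitespace and
    mixed case, dates stored as MM/DD/YYYY."""
    return {
        "customer_id": record["cust_no"].strip().upper(),
        "signup_date": datetime.strptime(record["date"],
                                         "%m/%d/%Y").date().isoformat(),
    }

def normalize_division_b(record):
    """Division B (hypothetical): IDs prefixed with 'C-', dates already
    in ISO YYYY-MM-DD form."""
    return {
        "customer_id": record["customer"].removeprefix("C-").upper(),
        "signup_date": record["signup"],
    }

if __name__ == "__main__":
    print(normalize_division_a({"cust_no": " a102 ", "date": "03/05/2001"}))
    print(normalize_division_b({"customer": "C-a102", "signup": "2001-03-05"}))
```

After normalization, the same customer looks identical regardless of which division's system the record came from, which is exactly what enterprise application integration requires.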
MANAGING IT INFRASTRUCTURE ASSETS
In the mainframe era, keeping track of the assets that made up a company's IT infrastructure was relatively easy. The majority consisted of a small number of large mainframe machines in the corporate data center. After the emergence of PCs, clients and servers, the Web, portable devices, and distributed network infrastructure, a company's investments in IT became much more diffuse. Computing assets were scattered across a large number of small machines located in different buildings. Some moved around with their users and left the company's premises on a regular basis. The variety of asset configurations in modern IT infrastructures makes certain business questions hard to answer:
* How are IT investments deployed across business lines/units?
* How are IT assets being used?
* Are they being used efficiently?
* Are they deployed to maximum business advantage?
* How can we adjust their deployment to create more value?
One approach to this problem is called total cost of ownership (TCO) analysis.
IT services are analyzed in terms of the costs and benefits associated with service delivery to each client device. Cost and benefit analysis for IT assets and platforms provides a basis for evaluating a company's current IT services against new service alternatives. Outsourcing vendors often are asked to bid on a per-platform basis. These prices can be compared with study results to evaluate a company's options and identify incremental opportunities for service delivery improvement.
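The per-platform comparison the text describes amounts to simple arithmetic: total up the annual costs of delivering a service, divide by the number of client devices, and set the result beside a vendor's per-platform bid. The numbers below are hypothetical.

```python
def tco_per_seat(hardware, software, support, downtime_cost, seats):
    """Annual total cost of ownership per client device: direct costs
    (hardware, software, support) plus indirect costs such as downtime,
    divided across the devices served."""
    return (hardware + software + support + downtime_cost) / seats

# Hypothetical figures: 200 in-house desktops vs. an outsourced bid.
in_house = tco_per_seat(hardware=120_000, software=60_000,
                        support=90_000, downtime_cost=30_000, seats=200)
outsourced_bid_per_seat = 1_350  # vendor's quoted per-platform price

if __name__ == "__main__":
    print(f"in-house: ${in_house:,.0f}/seat vs. bid: "
          f"${outsourced_bid_per_seat:,}/seat")
```

In this made-up case the in-house cost per seat exceeds the bid, flagging desktop support as an incremental outsourcing opportunity worth examining; a real TCO study would, of course, include many more cost categories.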