14.2 Quality of service (QOS) and network performance (NP)

Table 14.1 Quality framework for management of data communications network service

Service ordering, provision and alteration (service availability)
• Availability of required services, software and features: user questionnaire
• Waiting time for a connection: service order waiting list
• Waiting time for bit rate or other service upgrade: change control procedure
• Geographic availability of connections of required bit rate: number and percentage of customers not within the service area (e.g., for direct connection, secure dial-in service or some other service)

Quality of communication or data transfer
• Availability of destinations: destinations and remote services not reachable
• Access to network (connection or data transfer ‘establishment’): number/percentage of connections/data transfers not able to be established (e.g., destination unavailable, connection limit exceeded, flow label limit exceeded, etc.)
• Data transfer phase:
  - percentage of misrouted packets (e.g., due to routing table errors, technology faults or network topology problems)
  - bit error ratio (BER), a measure of line quality
  - number/percentage of lost, discarded or resent packets (caused by poor line quality or network congestion)
  - network latency: mean and peak network propagation delays for end-to-end packet transfer, a measure of network speed
  - network load as a percentage of capacity
• Connection clearance:
  - number and percentage of lost connections or sessions (system ‘hang ups’)
  - correct network ‘reset’ should occur automatically after each communication; a high frequency of the need for manual resets signals a problem

Service reliability
• Service faults: number, frequency and duration of faults; frequency of necessary network or software upgrades to overcome bugs
• Service availability: total period of lost service (per user or server connection) per month or per year; percentage availability (a target availability of 99% still permits about 3.65 days of outage due to faults each year)

Customer service and support
• Accuracy and speed of helpdesk in recording problems: helpdesk time to respond (e.g., time to answer telephone, elapsed time before expert is despatched or calls back user)
• Technical competence of staff and speed of fault resolution: mean-time-to-repair (MTTR) faults, resolve reported problems and clear trouble tickets; percentage of trouble tickets cleared within a given target time; percentage of trouble tickets cleared with reason ‘fault not found’ (a large percentage may indicate problems not properly diagnosed and resolved, or user misunderstanding regarding a particular service)
• ‘Helpfulness’ of staff: the courteousness and willingness of staff to help, measured by user questionnaire
• Service documentation: availability of documentation about the service and operational procedures

Fairness, reliability and accuracy of service billing
• Probability of incorrect invoice: percentage of disputed bills

(In each entry, the item before the colon is the service quality criterion; the item after it is the tangible measurement method or control.)
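The availability targets in the table translate directly into annual outage budgets. The sketch below (an illustration, not from the source) computes the downtime permitted by a given percentage availability:

```python
def allowed_annual_downtime(availability_percent: float) -> float:
    """Return the annual outage budget, in hours, for a given
    percentage availability target."""
    hours_per_year = 365 * 24  # 8760 hours in a non-leap year
    return hours_per_year * (1 - availability_percent / 100)

if __name__ == "__main__":
    for target in (99.0, 99.9, 99.99):
        budget = allowed_annual_downtime(target)
        print(f"{target}% availability -> {budget:.1f} h "
              f"({budget / 24:.2f} days) outage per year")
```

For a 99% target this yields 87.6 hours, i.e. about 3.65 days of outage per year, which is why even ‘two nines’ can feel generous to an operator and miserly to a user.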

The relationship of quality of service to network performance as defined by ITU-T is shown in Figure 14.1. According to the ITU-T definition, quality of service measurements help a telecommunications service or network provider to gauge customers’ perceptions of the service, while network performance parameters are direct measurements of the performance of the network, in isolation from the effects caused by human users, data terminal equipment and application software. Quality of service thus encompasses a wider domain than network performance, so that it is possible to have a poor overall quality of service even though the network performance may be excellent. Identifying such cases is the key to deploying the right application support staff to seek out the problem, rather than committing network operations staff to an attempt at network improvement that has no chance of success.

Figure 14.1 ITU-T definitions of quality of service (QOS) and network performance (NP).

The measured quality of service may differ greatly from the measured network performance values in cases where the end-to-end path traverses several different networks. But while quality of service is of utmost importance to the end-user, it cannot become the only preoccupation of network administration staff, because it does not directly reflect the performance of their network and operations, and it is in any case very difficult to measure objectively. Quality of service is not only technically difficult to measure; in theory it should also be measured separately for each individual customer and each computer application. Such difficulties were the reason ITU-T developed the concept of network performance. Network performance can be measured more easily within the network, and provides meaningful performance targets and direct network design feedback for the technicians and network managers operating the network.
Quality of service parameters should be chosen to reflect the end-to-end communication
requirements or the performance requirements of a networked computer application (as perceived by the human user). These parameters should then be correlated with one or a number
of directly related network performance parameters, each NP parameter reflecting the performance of a subnetwork or other component part of the network, and therefore contributing to
the end-to-end quality. In this way, both quality of service and network performance problems
can be most effectively monitored and traced to their root cause.
Similar parameters may be (but are not always) used to measure both quality of service
and network performance (e.g., propagation delay, bit error ratio (BER), % congestion etc.).
Normally the measured quality of service will be found to be lower than the measured network
performance — the difference is due to the performance degradations caused by the users’ data
terminal equipment (DTE), software and the human user’s method of use.
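The decomposition described above, an end-to-end QOS parameter correlated with the NP contributions of each subnetwork plus the degradation added outside the network, can be sketched for the delay parameter as follows (an illustrative model only; all names and figures are invented, and this is not an ITU-T formula):

```python
# Illustrative decomposition of a measured end-to-end QOS delay into
# network performance (NP) contributions per subnetwork, plus the
# degradation added outside the network (DTE, software, human use).
# All subnetwork names and delay figures are hypothetical.

subnetwork_delays_ms = {       # NP: one-way delay measured per subnetwork
    "access LAN": 2.0,
    "carrier backbone": 28.0,
    "destination LAN": 3.0,
}
dte_overhead_ms = 15.0         # degradation added by terminals and software

np_delay_ms = sum(subnetwork_delays_ms.values())   # network-only figure
qos_delay_ms = np_delay_ms + dte_overhead_ms       # what the user perceives

print(f"NP  (network-only) delay: {np_delay_ms:.1f} ms")
print(f"QOS (end-to-end)   delay: {qos_delay_ms:.1f} ms")
# As the text notes, the measured QOS figure is normally worse than
# the measured NP figure; the gap points at the equipment and software
# outside the network rather than at the network itself.
```

Comparing the two figures is what lets a fault be routed to application support staff (large gap) or to network operations staff (poor NP figure itself).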

14.3 Quality of service (QOS), type of service (TOS)
and class of service (COS)
In data protocol specifications, the term quality of service is most often used to describe the
target level of communications quality and reliability which should be achieved by a given protocol (in effect, what ITU-T calls the network performance of the protocol). The quality of service of a number of layer 2 and layer 3 protocols is defined in this way. Thus the target quality of service of a data communications protocol or network is typically defined in terms of:
• the minimum ‘guaranteed’ data throughput (i.e., bit rate);
• the maximum one-way packet propagation delay (often called the latency); and
• the reliability of the network or connection (e.g., the ‘variability’ of the connection quality;
the fluctuation in packet delay times or the availability of the service).
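A target defined along these three axes can be represented and checked as follows (a minimal sketch; the class name, parameter names and threshold values are invented for illustration, not taken from any protocol specification):

```python
from dataclasses import dataclass

@dataclass
class QosTarget:
    """Target quality of service for a connection (hypothetical values)."""
    min_throughput_bps: int   # minimum 'guaranteed' data throughput
    max_latency_ms: float     # maximum one-way packet propagation delay
    max_jitter_ms: float      # permitted fluctuation in packet delay

    def is_met(self, throughput_bps: int, latency_ms: float,
               jitter_ms: float) -> bool:
        """True if all three measured values satisfy the target."""
        return (throughput_bps >= self.min_throughput_bps
                and latency_ms <= self.max_latency_ms
                and jitter_ms <= self.max_jitter_ms)

# Example: a voice-style target of 64 kbit/s, 150 ms, 30 ms jitter.
voice_target = QosTarget(min_throughput_bps=64_000,
                         max_latency_ms=150.0, max_jitter_ms=30.0)
print(voice_target.is_met(throughput_bps=80_000,
                          latency_ms=120.0, jitter_ms=12.0))  # True
```

The point of the third axis is that a connection can meet its throughput and mean-delay targets and still be unusable if the delay fluctuates wildly.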
IP-suite protocols which attempt to ‘guarantee’ a given network quality-of-service do so by
prioritising packets. Higher priority packets are dealt with first. They are forwarded in preference to other lower priority packets and stand less chance of being discarded in the case
of network congestion. By means of such prioritisation, capacity within the network can be
‘reserved’ for higher priority traffic during times of network congestion, thereby ‘guaranteeing’
the minimum bit rate available for data throughput. In parallel, the preferential forwarding of
packets by intermediate routers minimises the end-to-end propagation delay or latency. The
assumption is that by minimising the delay, the variation in the delay will also be minimised.
But you should never forget that the minimum delay is not necessarily either a short delay or
an acceptable delay!
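The prioritisation just described can be sketched with a toy scheduler: packets queue by priority, the forwarder always serves the highest priority first, and when the queue overflows the lowest-priority packet is the one discarded. This is an illustrative model only, not the algorithm of any particular router:

```python
import heapq

class PriorityForwarder:
    """Toy model of priority-based packet forwarding with drop-on-congestion."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap = []   # entries: (-priority, seq, packet), highest first
        self._seq = 0     # arrival counter, breaks ties in arrival order

    def enqueue(self, packet: str, priority: int) -> None:
        heapq.heappush(self._heap, (-priority, self._seq, packet))
        self._seq += 1
        if len(self._heap) > self.capacity:     # congestion: queue overflow
            self._heap.remove(max(self._heap))  # drop the lowest priority
            heapq.heapify(self._heap)

    def forward(self) -> str:
        """Forward (pop) the highest-priority waiting packet."""
        return heapq.heappop(self._heap)[2]

fwd = PriorityForwarder(capacity=2)
fwd.enqueue("routine email", priority=0)
fwd.enqueue("network control", priority=7)
fwd.enqueue("voice", priority=5)   # queue full: 'routine email' is dropped
print(fwd.forward())  # network control
print(fwd.forward())  # voice
```

Note how the model ‘reserves’ capacity only implicitly: high-priority traffic is never displaced, while routine traffic absorbs all the discards, exactly the behaviour the text describes.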
Packet prioritisation schemes usually operate by means of one or more quality of service
‘labels’ in the packet’s protocol header. A number of different ‘labels’ are used by different protocols, as listed below (some of these we have encountered earlier in this book):
• type of service (TOS) field (used by Internet Protocol version 4 header);
• IP precedence field (used by Internet Protocol version 4 header);
• T (throughput), D (delay) and R (reliability) bits (used by Internet Protocol version 4 header);
• DiffServ field as used by Internet protocol differentiated services (DiffServ);
• traffic class field (used by Internet Protocol version 6 (IPv6) for differentiated services);
• flow label (used by Internet Protocol version 6 and MPLS, multiprotocol label switching);
• user priority field (defined by IEEE 802.1p for prioritisation of traffic between virtual
bridged LANs, VLANs).
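For the IPv4 labels in this list, the IP precedence value, the D/T/R bits and the later DiffServ code point (DSCP) all live in the same header octet (the original TOS field). A short sketch of decoding that octet both ways:

```python
def parse_tos_octet(tos: int) -> dict:
    """Decode the IPv4 TOS octet two ways: as the original RFC 791
    precedence plus D/T/R bits, and as the DiffServ code point (DSCP)."""
    return {
        "precedence":  tos >> 5,        # top 3 bits (0..7)
        "delay":       (tos >> 4) & 1,  # D bit: low delay requested
        "throughput":  (tos >> 3) & 1,  # T bit: high throughput requested
        "reliability": (tos >> 2) & 1,  # R bit: high reliability requested
        "dscp":        tos >> 2,        # DiffServ reuses the top 6 bits
    }

# Example octet 0b10111000: precedence 5, D and T bits set, DSCP 46.
print(parse_tos_octet(0b10111000))
```

Because DiffServ simply redefines the same bits, a router can interpret one octet under either scheme; this is why the TOS, IP precedence and DiffServ entries above are not three separate header fields.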

Type of service (TOS)
Internet protocol (IP) version 4 (IPv4), as we saw in Chapter 5, can be made to prioritise
packets according to their type-of-service (TOS) as recorded in the IP header type of service
field (Figure 14.2). The IP precedence value within the TOS-field is a 3-bit value indicating
the priority with which routers are to deal with and forward so-labelled packets. As listed in
Table 14.2, the highest priority is given to ‘network control’ packets, thereby ensuring that
packets concerned with network reconfiguration and control have a high chance of getting
through even during periods of very severe network congestion. The message, after all, might
be critical to relieving the congestion. The lowest priority packets (and those most likely to be discarded at times of congestion) are ‘routine’ packets. An alternative, though less well-defined, approach is packet prioritisation to achieve a target quality of service with respect to throughput (T-bit), delay (D-bit) or reliability (R-bit). The bit value settings of the T-, D- and R-bits are listed in