Bandwidth Allocation for Video under Quality of Service Constraints
Ebook, 192 pages, 1 hour


About this ebook

We present queueing-based algorithms to calculate the bandwidth required for a video stream so that the three main quality of service constraints, namely end-to-end delay, jitter and packet loss, are satisfied.

Conversational and streaming video applications are becoming a major part of everyday Internet usage. The quality of service (QoS) of these applications, as experienced by the user, depends on three main metrics of the underlying network: end-to-end delay, jitter and packet loss. These metrics are, in turn, directly related to the capacity of the links that the video traffic traverses from source to destination. The main problem that this book addresses is how much bandwidth to allocate on the path from the source to the destination of a video traffic flow such that the end-to-end delay, jitter and packet loss of the video packets remain within required bounds.

Language: English
Publisher: Wiley
Release date: Dec 9, 2014
ISBN: 9781119073154


    Book preview

    Bandwidth Allocation for Video under Quality of Service Constraints - Bushra Anjum

    Introduction

    There are three main players with high stakes in the design and provisioning of the Internet Protocol (IP) based Internet: the sender, the receiver and the Internet service provider (ISP), also known as the carrier. These players form the well-known quality of service (QoS) triangle. The sender wants to send any type of traffic at any time (high load, high burstiness), whereas the receiver expects to receive all of this traffic intact and promptly (low delay, jitter and packet loss). The third player, the carrier, wants to use the minimum possible network capacity per customer (be it sender or receiver) so that it can take on more customers, resulting in higher profits. This QoS triangle is shown in Figure I.1.

    Figure I.1. QoS triangle

    As can be concluded, if any one of the three parties gives in, the other two can easily be satisfied. It is the competing expectations and demands of the three parties that make QoS design and provisioning such an interesting and ongoing research problem.

    Let us formally define QoS: it refers to the capability of a network to provide service differentiation for different types of traffic. The primary goal of QoS is to provide priority treatment, including dedicated bandwidth, controlled jitter and latency, and improved loss characteristics. Alternatively, QoS can be seen as the set of techniques necessary to manage network bandwidth, delay, jitter and packet loss. Another important term that will be used shortly is a network stream, or flow. A flow can be defined in a number of ways. One common definition is the combination of source and destination addresses, source and destination port numbers and the session identifier. A flow can also be defined more broadly as any packet from a certain application or from an incoming interface.
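The common "5-tuple" definition of a flow can be made concrete with a short sketch. The packet records and addresses below are invented for illustration; the point is only that packets sharing the same 5-tuple belong to the same flow.

```python
from collections import defaultdict

# Hypothetical packet records:
# (src_ip, dst_ip, src_port, dst_port, protocol, size_bytes)
packets = [
    ("10.0.0.1", "10.0.0.9", 5004, 6000, "UDP", 1200),
    ("10.0.0.1", "10.0.0.9", 5004, 6000, "UDP", 1200),
    ("10.0.0.2", "10.0.0.9", 5006, 6000, "UDP", 800),
]

def flow_key(pkt):
    """A flow identified by the classic 5-tuple."""
    src_ip, dst_ip, src_port, dst_port, proto, _size = pkt
    return (src_ip, dst_ip, src_port, dst_port, proto)

# Aggregate per-flow byte counts by grouping on the 5-tuple.
flow_bytes = defaultdict(int)
for pkt in packets:
    flow_bytes[flow_key(pkt)] += pkt[5]

print(len(flow_bytes))  # 2 distinct flows
```

A router or measurement tool classifying traffic per flow does essentially this grouping, possibly in hardware.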

    In the following section, we review various schemes and concepts that have been standardized by the Internet Engineering Task Force (IETF) so as to provide QoS in an IP network.

    I.1. QoS evolution in the IP network

    In this section, we trace the gradual introduction of QoS into IP-based networks, from being non-existent to becoming a prime consideration in network design and provisioning. The IP-based Internet was not originally designed to provide QoS guarantees, because the original Internet applications (e.g. email and the File Transfer Protocol (FTP)) were data oriented and did not need stringent guarantees. However, in the new era marked by growing interest in providing voice and video services over IP networks, this situation is rapidly changing. This trend is paralleled by the phenomenal growth of the World Wide Web, where voice and video have been further integrated into the design of Web pages, and more recently by the introduction of long-term evolution (LTE), commonly known as 4G, a standard for mobile wireless communications that is based entirely on the IP network. Over the years, the IETF has developed various protocols and architectures for supporting multimedia traffic over IP.

    This is also the right time to introduce the concept of statistical multiplexing, a mechanism for reducing bandwidth requirements, in particular for bursty traffic sources. It has been used in the Internet to improve network utilization, but without providing any performance guarantees. Statistical multiplexing is a spatial aggregation mechanism by which several individual streams are asynchronously superposed and transported over the same channel. As a result of the superposition, the aggregate traffic shows less variability (smoother bit rate behavior) than each of the original streams. Thus, allocating bandwidth to the aggregate rather than to each stream individually reduces the per-stream allocated bandwidth, and the reduction grows with the burstiness of the multiplexed sources.
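The multiplexing gain can be illustrated with a back-of-the-envelope calculation. Assume n independent streams, each with mean rate m and standard deviation s, and a "mean plus z standard deviations" allocation rule; all the numbers below are invented for illustration. Because variances add for independent streams, the aggregate needs z·sqrt(n)·s of headroom rather than n·z·s.

```python
import math

# Illustrative parameters, not from the book: n independent bursty
# streams, each with mean rate m and standard deviation s (Mb/s).
n, m, s = 10, 2.0, 1.0
z = 3.0  # allocate mean + z standard deviations of headroom

# Allocating each stream separately: headroom paid n times.
per_stream_total = n * (m + z * s)

# Allocating the aggregate: standard deviations add as sqrt(n)·s.
aggregate = n * m + z * math.sqrt(n) * s

gain = 1 - aggregate / per_stream_total
print(per_stream_total)  # 50.0
print(round(aggregate, 2), round(gain, 2))
```

With these numbers the aggregate allocation is roughly 29.5 Mb/s instead of 50 Mb/s, a gain of about 40%; the burstier the sources (larger s relative to m), the larger the gain.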

    I.1.1. Real-time Transport Protocol (RTP)

    RTP, first published by the Audio-Video Transport Working Group of the IETF in 1996 [SCH 96], was the first formal effort to support end-to-end, real-time transfer of stream data over the IP network. RTP is a session-layer protocol that runs on top of the User Datagram Protocol (UDP) and is thus transparent to network routers. This is an important distinction from later technologies and architectures, where routers play an important role in providing QoS differentiation.

    RTP provides several functions useful for real-time communications, including sequence numbers, timestamps and payload type identification. The RTP standard defines a pair of protocols, RTP and the RTP Control Protocol (RTCP). RTP is used for the transfer of multimedia data, while RTCP periodically carries control information and QoS parameters. RTP relies on RTCP to convey various types of information, including the number of transmitted packets and the number of lost packets. The sender can use this information to adjust its compression parameters and reduce the bit rate, if necessary.
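The fields just listed (sequence number, timestamp, payload type) live in the 12-byte fixed RTP header defined by RFC 3550. A minimal parser sketch, with a hand-built example header, shows where each field sits:

```python
import struct

def parse_rtp_header(data: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550)."""
    if len(data) < 12:
        raise ValueError("RTP fixed header is 12 bytes")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,            # 2 for current RTP
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,     # identifies the codec
        "sequence_number": seq,        # detects loss and reordering
        "timestamp": ts,               # drives playout and jitter calculation
        "ssrc": ssrc,                  # identifies the stream source
    }

# Hand-built header: version 2, payload type 96 (a dynamic type),
# sequence 1, timestamp 160, SSRC 0xDEADBEEF.
hdr = bytes([0x80, 0x60]) + struct.pack("!HII", 1, 160, 0xDEADBEEF)
print(parse_rtp_header(hdr))
```

The sequence number and timestamp are exactly the hooks a receiver uses to measure the packet loss and jitter that RTCP then reports back to the sender.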

    RTP is one of the technical foundations of Voice over IP (VoIP) and is now used in conjunction with signaling protocols, such as the Session Initiation Protocol (SIP) and the IP Multimedia Subsystem (IMS), which set up multimedia connections across the network.

    I.1.2. Integrated Services (IntServ)

    IntServ, proposed by the IETF in the 1990s [BRA 94], was the first architecture that could provide QoS guarantees over the Internet. The basic idea of IntServ is that every router in the system implements IntServ, and every application that requires any kind of guarantee has to make an individual, per-flow reservation.

    Specifically, prior to the transmission of the packets of a new flow, resources have to be reserved on each router along the flow's path. This is done using the Resource Reservation Protocol (RSVP) [BRA 97], which signals the routers with the requirements (bandwidth and delay) of a particular flow. Each router decides whether it has the required bandwidth; if all the routers along the path agree to carry the flow, a connection is established and the transmission of packets can follow.
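The per-hop admission decision can be sketched as follows. The topology, capacities and the all-or-nothing admission rule are simplifications invented for illustration; real RSVP also carries delay requirements and installs soft state per flow.

```python
# Hypothetical three-hop path with per-link capacities in Mb/s.
capacity = {"A->B": 100.0, "B->C": 100.0, "C->D": 100.0}
reserved = {link: 0.0 for link in capacity}

def reserve(path, bw):
    """Admit a flow only if every link on the path has spare capacity."""
    if any(reserved[link] + bw > capacity[link] for link in path):
        return False  # some router rejects; no reservation is installed
    for link in path:
        reserved[link] += bw  # commit the reservation hop by hop
    return True

path = ["A->B", "B->C", "C->D"]
print(reserve(path, 60.0))  # True: the first flow fits everywhere
print(reserve(path, 60.0))  # False: 120 Mb/s would exceed a 100 Mb/s link
```

Note that a single congested link anywhere on the path is enough to reject the flow, which is why per-flow state at every router is essential to IntServ and also the root of its scaling problem.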

    The routers through which a flow traverses keep a soft state of the connection, which has to be refreshed periodically. As a result, IntServ works well on a small scale, but at the scale of the Internet it imposes a large overhead due to the periodic refresh messages. In view of this, IntServ was never widely deployed. Its signaling protocol RSVP, however, was reused successfully in the multiprotocol label switching (MPLS) architecture, which is discussed later in this section.

    I.1.3. Differentiated Services (DiffServ)

    DiffServ provides a coarse-grained QoS system, in contrast with IntServ's fine-grained control. DiffServ [BLA 98] was proposed by the IETF in the late 1990s with scalability as its main goal. In contrast to IntServ, DiffServ does not identify each traffic flow separately and consequently does not allocate bandwidth per flow. Also, unlike IntServ, where flows are provisioned dynamically using RSVP, DiffServ is provisioned statically and there is no reservation protocol.

    In DiffServ, packets are marked and classified into different QoS classes at the ingress of the network. The marking of a packet, along with a dropping priority (applicable to some of the QoS classes), is carried in the type of service (ToS) field of the IP header. DiffServ-aware routers maintain a set of queues at each output port, one per DiffServ class. Packets in these queues are transmitted out of the router according to a scheduler designed to satisfy the QoS associated with each class.
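Ingress marking plus per-class queueing can be sketched in a few lines. The class-to-application policy and the strict-priority scheduler below are assumptions for illustration; real deployments use operator-defined policies and more elaborate schedulers such as weighted fair queueing. The DSCP values shown (46 for EF, 10 for AF11, 0 for best effort) are the standard codepoints.

```python
from collections import deque

# Standard DSCP codepoints for three example classes.
DSCP = {"EF": 46, "AF11": 10, "BE": 0}

# One queue per DiffServ class at an output port.
queues = {"EF": deque(), "AF11": deque(), "BE": deque()}

def classify(pkt):
    """Ingress marking policy (assumed): voice -> EF, video -> AF11."""
    if pkt["app"] == "voice":
        return "EF"
    if pkt["app"] == "video":
        return "AF11"
    return "BE"

def enqueue(pkt):
    cls = classify(pkt)
    pkt["dscp"] = DSCP[cls]  # marking carried in the IP header field
    queues[cls].append(pkt)

def dequeue():
    """Strict-priority scheduler: serve EF, then AF11, then best effort."""
    for cls in ("EF", "AF11", "BE"):
        if queues[cls]:
            return queues[cls].popleft()
    return None

for app in ("web", "voice", "video"):
    enqueue({"app": app})
print(dequeue()["app"])  # voice leaves first despite arriving after web
```

Core routers never look past the DSCP marking, which is exactly what lets DiffServ scale: forwarding state is per class, not per flow.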

    DiffServ is aimed at overcoming the scalability problem of IntServ. The advantage lies in the aggregation of many flows into a single traffic class, whose packets are then all forwarded in the same way by a router. This permits DiffServ to scale up to large networks. However, the drawback is that per-flow service cannot be guaranteed. So, while IntServ has inherent scalability problems, DiffServ does not provide explicit means for a flow to request a specific service level from the network. Also, setting up the various classes throughout the network requires knowledge of the applications and traffic statistics for aggregates of traffic. DiffServ is widely used in the Internet, with or without MPLS.

    I.1.4. Multiprotocol Label Switching (MPLS)

    MPLS was introduced around 2000 [ROS 01, LEF 02], and it is currently used extensively in IP networks in conjunction with DiffServ to provide QoS guarantees for real-time multimedia services, such as voice and video calls, and also to interconnect clouds. MPLS is an architecture that turns the IP network into a connection-oriented network. In such a network, the transmission of packets between two users is done over a connection that is established prior to the transmission. This is not a logical connection, such as those established at the TCP level. Rather, a path between the two users is first calculated through the IP network, and then each router along the path reserves bandwidth for this connection on its outgoing link to the next router on the path. It also generates and stores state information regarding the connection. After the connection has been set up end-to-end, the transmission of packets can start. All the packets related to a connection follow the same path through the IP network and arrive at the destination in the order in which they were transmitted. When the transmission of the packets is completed, the connection is torn down and the bandwidth reserved on each link is freed.
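The label-swapping that keeps all packets of a connection on the same path can be sketched with a toy forwarding table. The router names and label values below are invented; a real MPLS router performs the same lookup-and-swap on its label forwarding information base in hardware.

```python
# Toy label forwarding tables: at each router, the incoming label is
# looked up and swapped for an outgoing label toward a fixed next hop.
lfib = {
    "R1": {17: ("R2", 25)},      # in-label 17 -> next hop R2, out-label 25
    "R2": {25: ("R3", 31)},
    "R3": {31: ("dest", None)},  # egress router pops the label
}

def forward(router, label):
    """Follow a label switched path hop by hop until the label is popped."""
    path = [router]
    while label is not None:
        next_hop, label = lfib[router][label]
        path.append(next_hop)
        router = next_hop
    return path

print(forward("R1", 17))  # ['R1', 'R2', 'R3', 'dest']
```

Because every packet carrying label 17 at R1 takes exactly the same sequence of hops, the path's bandwidth reservation and in-order delivery follow directly from the table entries set up when the connection was established.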

    In MPLS, a connection is known as a label switched path (LSP). LSPs can be established dynamically using RSVP-TE, an extension of RSVP, or they can be set up
