1. Introduction

VSAT (Very Small Aperture Terminal) satellite networks are among the most widely deployed communication networks for rural and remote communications in today's telecommunication world. VSAT satellite networks are growing steadily across many industries and market segments in many countries. With new applications and shifts in target markets, VSAT-based solutions have been adopted at increasingly higher rates since 2002 (MindBranch, 2011). As of December 2008, VSAT market statistics show 2,276,348 enterprise VSAT terminals ordered, 2,220,280 VSATs shipped and 1,271,900 VSAT sites in service throughout the world (Comsys, 2008). VSAT satellite networks offer value-added satellite-based services capable of supporting Internet, data, video, LAN, voice and fax communications.
VSATs are a flexible communication platform that can be installed quickly and cost-efficiently to provide telecommunication solutions for consumers, governments and corporations, and they are therefore becoming increasingly important. VSAT satellite networks play an important role in bridging the digital divide: they are one of the easiest and most cost-effective technologies for interconnecting two networks, especially in rural areas where wired technologies are impractical or unsuitable due to geographical distance or accessibility.
This chapter provides a fundamental overview of satellite communication networks, highlighting their main characteristics and constraints, and proposes a compression technique that can be applied to boost the Quality of Service (QoS) of satellite communication services. A VSAT network can be deployed anywhere around the world and offers borderless communication within the coverage area. VSAT network configuration such as bandwidth, interfaces and data rates can be updated remotely from the central network management system, providing high flexibility and efficiency.
On the other hand, VSAT offers low and limited network bandwidth, resulting in network congestion, reduced Quality of Service (QoS) for real-time interactive multimedia applications and late packet delivery.
These issues have a negative impact on the QoS of communication networks and on user experience. Apart from the need for efficient mechanisms to store and transfer enormous volumes of data, they also lead to insatiable demands for ever-greater bandwidth in VSAT satellite networks. To strike a balance between cost and offered satellite bandwidth, enhancements have to be implemented that reduce the bandwidth requirement of real-time applications demanding high bandwidth and fully optimize the use of the low-speed satellite link.
Several techniques have been introduced to further improve network bandwidth utilization and reduce network traffic, especially for wireless satellite networks (Tan et al., 2010). One such technique is compression, which overcomes network packet overhead by eliminating redundancies in packet delivery. By reducing the packet size, more packets can be transmitted over the same communication link at one time, increasing the efficiency of bandwidth utilization.
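To make the saving concrete, the following sketch compresses a redundant sample payload with zlib and compares serialization times on a low-rate link; the 512 kbps link rate and the sample payload are illustrative assumptions, not measurements from a real VSAT system.

```python
import zlib

# A hypothetical 1500-byte packet with highly redundant textual content,
# standing in for typical application traffic over the satellite link.
payload = (b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 40)[:1500]

compressed = zlib.compress(payload, 6)

LINK_BPS = 512_000  # assumed VSAT link rate of 512 kbps

original_ms = len(payload) * 8 / LINK_BPS * 1000
compressed_ms = len(compressed) * 8 / LINK_BPS * 1000
print(f"original:   {len(payload):5d} bytes  ({original_ms:.1f} ms on the link)")
print(f"compressed: {len(compressed):5d} bytes  ({compressed_ms:.1f} ms on the link)")
```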
This chapter first examines the concept of data compression, in order to understand in depth how it can improve user experience. After that, the basic concept of packet compression, which consists of header compression and payload compression, is discussed. Currently, many compression schemes, systems and frameworks have been proposed and designed to perform efficient data compression for better utilization of the communication channel.
However, most of them have their own advantages and limitations, and may not suit the VSAT satellite network environment. For example, the proposed Adaptive Compression Environment (ACE) system might impose additional delays over a VSAT satellite network due to computation overhead and the large compression time cost of the algorithm used. Likewise, the Adaptive Online Compression (AdOC) algorithm proposed in the related work might make the satellite link more congested due to the increased network load the algorithm causes.
In addition, some of the proposed compression schemes are designed for one specific aspect, which might create additional issues when working under a VSAT satellite network.
Thus, in this chapter, the performance of several well-known compression schemes is reviewed and evaluated in the context of a bandwidth-limited VSAT satellite network, in order to highlight important criteria for improving performance over low-bandwidth VSAT satellite links.
Finally, the proposed enhanced compression scheme is presented and its performance is examined and evaluated through extensive network simulations.

2. Introduction to VSAT communication

VSAT satellite networks have become an essential part of our daily lives in recent years.
It is used widely in telephony communication, broadband and internet services, and military communication. VSAT is a small satellite dish that is capable of both receiving and sending satellite signals (TM, 2011).
Generally, a satellite is a specialized wireless receiver or transmitter that is launched by a rocket and placed in orbit around the earth (DotNetNuke Corporation, 2010).
Basic satellite elements

A satellite communication system comprises two main components, namely the space segment and the ground segment, as illustrated in Figure 1 below. A basic satellite communication system consists of a space segment serving a specific ground segment (Richharia, 1999). The satellite itself is known as the space segment, while the earth stations serve as the ground segment. When signals from the earth stations are received by the satellite, the signals are processed, translated into another radio frequency and, after further amplification, retransmitted down towards the desired earth stations.

Satellite roles and applications

The most important role of a satellite communication network is to provide connectivity to user terminals and to internetwork with terrestrial networks, so that the applications and services provided by terrestrial networks, such as telephony, television, broadband access and Internet connections, can be extended to places where cable and terrestrial radio cannot economically be installed and maintained. A satellite network provides direct connections among user terminals, connections for terminals to access terrestrial networks and connections between terrestrial networks (Mitra, 2005). Since a satellite is capable of providing coverage over a much wider area, such as oceans, inter-continental flight corridors and large expanses of land mass, it is used to provide voice and data communications to aircraft, ships, land vehicles and handsets. Satellites also allow passengers on an aircraft to connect directly to a land-based telecommunication network.

Limitations of satellite communication

Three main characteristics and constraints of satellite networks are high latency, poor bandwidth and noise (Hart, 1997). High latency is one of the main limitations of satellite networks and is caused by the long propagation path due to the high altitude of satellite orbits.
In a satellite network, the time required to traverse a satellite link is longer than in a terrestrial network, which leads to higher transmission delay. For a geostationary (GEO) satellite communication system, the time required to traverse these distances, namely earth station to satellite and then satellite to another earth station, is around 250 ms (Sun, 2005).
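As a quick sanity check on this figure, the propagation delay of a GEO hop can be estimated from the path length and the speed of light; the slant range used below is an assumed mid-range value, since the true distance varies with the earth station's elevation angle.

```python
# Rough propagation-delay estimate for one GEO hop. The slant range is an
# assumption: it ranges from 35,786 km at the sub-satellite point to
# roughly 41,700 km for an earth station near the horizon.
C_KM_S = 299_792.458     # speed of light, km/s
SLANT_RANGE_KM = 37_500  # assumed earth-station-to-satellite distance

one_hop_ms = 2 * SLANT_RANGE_KM / C_KM_S * 1000  # up-link plus down-link
print(f"earth -> satellite -> earth: {one_hop_ms:.0f} ms")  # ~250 ms
```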
These propagation times are much greater than those encountered in conventional terrestrial systems. The high-latency constraint of a satellite link might not affect bulk data transfer and broadcast-type applications, but it does affect highly interactive real-time applications. Due to radio spectrum limitations, satellite transmission has a fixed amount of bandwidth (Hart, 1997).
Problems like network congestion and packet loss can occur when real-time interactive applications that consume high bandwidth run over a satellite link. Furthermore, the strength of a radio signal falls off in proportion to the square of the distance traveled (Hart, 1997). Thus, signals traversing a satellite link can become very weak due to the long distance between the earth stations and the satellite.

3. Data compression

Data compression plays an important role in improving the performance of low-bandwidth VSAT satellite networks.
Among satellite performance enhancement techniques, data compression is the most suitable and economical way to further improve the user experience of VSAT satellite networks.
Currently, many networking corporations provide solutions for improving Internet services over satellite networks using high-cost network equipment. These products are very costly and require complicated hardware configuration, while data compression is freely available and requires no complicated hardware configuration. Lately, data compression has become a common requirement for most application software, as well as an important and active research area in computer science. Neither the ever-growing Internet, digital television, mobile communication nor the increasing use of video communication would have been practical without compression techniques.

In general, data compression is a process of representing information in a more compact form by eliminating redundancies in the original data representation (Pu, 2006). Due to the presence of these redundancies, data such as text, images, sound or any combination of these types, such as video, is not in its shortest form, which makes compression possible. Data compression is adopted in a variety of application areas such as mobile computing, image archival, video-conferencing, computer networks, digital and satellite television, multimedia, imaging and signal processing.

Lossless compression

In lossless compression, the exact original data can be reconstructed from the compressed data without any loss of information (Pu, 2006). Each compress-decompress cycle generates exactly the same data; hence, lossless compression is known as reversible compression.

Lossy compression

Lossy compression concedes a certain loss of accuracy in exchange for a greatly improved compression ratio. Owing to that, it does not allow the exact original data to be reconstructed from the compressed data (Pu, 2006). It suffers from information loss, as compressing and decompressing the file repeatedly gradually degrades quality. Lossy compression is used frequently in streaming media and telephony applications, as it has proven effective for graphic images and digitized voice. Lossy compression is not suitable for compressing text file formats due to the loss of accuracy.
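The reversibility that defines lossless compression can be demonstrated in a few lines using zlib, a lossless dictionary-based compressor; a lossy codec, by contrast, would fail the equality check in this sketch.

```python
import zlib

original = b"Satellite links are bandwidth limited. " * 100

# Lossless: every compress-decompress cycle reproduces the input exactly.
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original          # reversible: no information lost
print(len(original), "->", len(compressed), "bytes")
```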



Reviews of existing works

Numerous compression schemes, systems and frameworks have been proposed and designed to improve the performance of communication networks. The networking community has approached the problem by compressing network data streams as packets come by (the online scenario), while the database community has focused more on applying such techniques to save storage space (the offline scenario) (Chen et al., 2008). However, most of them have their own characteristics, which may not be suitable for the satellite network environment.
This section briefly discusses packet compression, header compression and payload compression. Several well-known compression schemes are also evaluated for their suitability under a satellite network environment. Normally in a computer network, data is divided into smaller chunks before transmission and transmitted as packets over the communication channel. Packet compression allows much smaller amounts of packet drops, more simultaneous sessions, and smooth and fast application behavior (Matias & Refua, 2005). Since a network packet consists of two parts, namely header and payload, as shown in Figure 4, packet compression can be achieved by either header or payload compression, or a combination of both.

IPzip

IPzip is a comprehensive suite of algorithms for network packet header and payload compression.
IPzip is designed to exploit the hidden intra-packet correlation and inter-packet correlation properties of the data streams (Chen et al., 2008).
From these properties, it produces an efficient compression plan in which the data streams, both within and across packets, are reorganized to improve the compression ratio. The compression plan is built in an offline phase, as reordering packets and fields is resource intensive. IPzip learns the correlation pattern over a training set, then generates a compression plan and compresses the original data set according to that plan. However, the performance of the current compression plan may degrade as the intrinsic network traffic pattern changes, so a new plan is needed.
IPzip also introduces block compression: similar packets are aggregated into a block based on flow information before compression, in order to increase the compression ratio. Unfortunately, IPzip may not suit real-time processing, as it needs to carry out offline training to produce an efficient compression plan. Besides, IPzip may not be able to react if the intrinsic network traffic pattern changes frequently, since the learning process to generate a new compression plan takes time and effort. Moreover, because IPzip compresses all blocks, it will simply cause network congestion if the compression processing speed is slower than the relay processing speed.

Adaptive packet compression scheme for advanced relay nodes

This research work presented an adaptive lossless packet compression scheme designed for advanced relay nodes in the network. An advanced relay node has two logical queues, a CPU queue and a relay queue.
When the advanced relay node receives a packet, the scheme determines whether packet compression is worthwhile according to the waiting time in the relay queue, the compression processing time, the packet size, the output link bandwidth and the compression ratio. If packet compression is beneficial, the advanced relay node compresses the packet before it moves to the relay queue. Simulation results show that this scheme succeeds in reducing the packet delay and the packet discard rate.
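The scheme's exact criterion is not reproduced in this chapter; the following is a plausible sketch of such a decision rule, in which the departure-time model and all parameter values are assumptions made for illustration.

```python
def compression_beneficial(pkt_bytes, queue_wait_s, comp_time_s,
                           link_bps, est_ratio):
    # Departure time without compression: wait in the relay queue,
    # then serialize the full packet onto the output link.
    plain = queue_wait_s + pkt_bytes * 8 / link_bps
    # With compression: compression runs while the packet queues, so only
    # compression time exceeding the queue wait adds extra delay.
    comp = max(queue_wait_s, comp_time_s) + pkt_bytes * est_ratio * 8 / link_bps
    return comp < plain

# 1500-byte packet, 10 ms queue wait, 2 ms to compress, 512 kbps link,
# expected compressed size of 60% of the original.
print(compression_beneficial(1500, 0.010, 0.002, 512_000, 0.6))  # True
```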
However, the compression ratio achieved with per-packet compression is much lower than with block compression, so this scheme does not help much in saving bandwidth over a low-bandwidth satellite link. In addition, a considerable amount of time-consuming computation must be performed by the advanced relay node each time it receives a packet.

Header compression

The applicability of Internet technology over low-speed, high-delay links is threatened and reduced by large, repetitive packet headers. Some delay-sensitive applications, such as remote login and real-time interactive multimedia applications, need to use small packets (Naidu & Tapadiya, 2009). A natural way to alleviate the problem is to compress the packet header, as header information shows significant redundancy between consecutive packets.
Thus, the header size can be significantly reduced for most packets by sending static field information only initially, and exploiting dependencies and predictability for the other fields. Reference copies of the full headers must be stored as context on both the compression and decompression sides in order to communicate and reconstruct the original packet headers reliably. Initially, a few packets are sent uncompressed; they are used to establish the shared state, called the context, on both sides of the link. The context comprises information about static fields, dynamic fields and their change patterns in the protocol headers. The compressor uses this information to compress each packet as efficiently as possible, and the decompressor then restores the packet to its original state.
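A minimal sketch of this context mechanism is given below; the field names and the FULL/DELTA wire format are illustrative inventions, not part of any standardized header compression scheme, and a single shared context variable stands in for the synchronized contexts kept on each side of the link.

```python
# Minimal sketch of context-based header compression.
def compress_header(header, context):
    if context is None:                       # first packet: establish context
        return "FULL", dict(header)
    delta = {k: v for k, v in header.items() if context.get(k) != v}
    return "DELTA", delta                     # typically just the changing fields

def decompress_header(kind, wire, context):
    if kind == "FULL":
        return dict(wire)
    restored = dict(context)                  # static fields come from context
    restored.update(wire)                     # dynamic fields come off the wire
    return restored

context = None
for seq in (1, 2, 3):
    header = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17, "seq": seq}
    kind, wire = compress_header(header, context)
    context = decompress_header(kind, wire, context)
    print(kind, wire)   # FULL once, then DELTA carrying only 'seq'
```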
Header compression schemes

To overcome packet header overhead, two popular header compression schemes have been developed.

Van Jacobson Header Compression (VJHC)

The Van Jacobson Header Compression scheme was introduced by Jacobson in 1990.
For each packet flow, the context is built on both the compressor and decompressor side and a unique context identity (CID) is assigned.
The flow context information is made up of a collection of field values and change patterns of field values in the packet header. To establish the context, the first few packets of a newly identified flow are sent to the decompressor uncompressed. Once the context is formed on both sides, the compressor starts compressing the packets, and from then on only the encoded difference to the preceding header is transmitted.
Constant fields are field values that remain unchanged between consecutive packets and hence can be eliminated.
Transmission efficiency can be improved significantly by suppressing inferred fields at the compressor and restoring them at the decompressor. However, the main disadvantage of the VJHC scheme is that it may propagate errors throughout the transmission when a compressed packet is lost on the link: the resulting inconsistent context causes a series of packets to be discarded at the receiver end.
Thus, the VJHC scheme is not applicable over a satellite link with a high bit error rate, as this leads to a higher packet drop rate, which makes the satellite link performance even worse.

RObust Header Compression (ROHC)

Besides VJHC, RObust Header Compression (ROHC) is another well-known header compression scheme. ROHC is used for compressing IP packet headers and is particularly suitable for wireless networks.
The ROHC scheme allows bandwidth savings of up to 60% in VoIP and multimedia communication applications (JCP-Consult, 2008).
The ROHC compressor operates in three states, which describe an increasing level of confidence about the correctness of the context at the decompressor side. Initially, the compressor starts in the lowest state and gradually moves to higher states. When an error occurs, which is indicated in the feedback packets, the compressor moves to a lower state and resends packets to fix the error. Similar to the compressor, the ROHC decompressor also operates in three states, namely No Context, Static Context and Full Context, as illustrated in Figure 9 below (Effnet, 2004). At the beginning of the packet flow, the decompressor starts in the first state, No Context, as it has no context information available yet. Once the context information is created at the decompressor side, the decompressor moves to the higher state, Full Context.
ROHC works well over links with high bit error rates and long round-trip times, such as cellular and satellite networks. Moreover, its framework is extensible and is designed to discover dependencies in packets from the same packet flow.
However, the ROHC scheme is very complicated to implement, as it absorbs all the existing compression techniques. In addition, in the ROHC scheme the decompressor needs to generate feedback packets and send them back to the compressor to acknowledge successful decompression; context-updating information is also sent periodically to ensure context synchronization. This easily leads to network congestion when working over a low-bandwidth satellite link with heavy traffic flows, as the ROHC scheme increases the network load by generating feedback and context information packets from time to time.
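The feedback-driven state behavior described above can be pictured with a toy model; real ROHC transition rules are considerably richer, so this sketch should be read only as an illustration of the rising and falling confidence levels.

```python
# Toy model of the decompressor state logic described above.
STATES = ["No Context", "Static Context", "Full Context"]

class ToyDecompressor:
    def __init__(self):
        self.state = "No Context"

    def on_packet(self, decompressed_ok):
        i = STATES.index(self.state)
        if decompressed_ok:
            # Growing confidence: move towards Full Context.
            self.state = STATES[min(i + 1, 2)]
            return "ACK"
        # Context damage: fall back one state and signal the compressor.
        self.state = STATES[max(i - 1, 0)]
        return "NACK"

d = ToyDecompressor()
for ok in (True, True, False, True):
    print(d.on_packet(ok), "->", d.state)
```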
Payload compression

The packet payload stores user information, and bulk compression is usually used to compress it.
Bulk compression treats information in the packets as a block of information and compresses it using a compression algorithm (Tye & Fairhurst, 2003).
The compressor constructs a dictionary of the common sequences found within the information and then matches each sequence to a shorter compressed representation or key code. Two types of dictionary can be used for bulk compression: a running dictionary based on the compression algorithm used, or a pre-defined dictionary. In bulk compression, the decompressor must use a dictionary identical to the one used during compression, and bulk compression is known to achieve a higher compression ratio.

Adaptive Compression Environment (ACE)

The Adaptive Compression Environment (ACE) intercepts program communication and applies on-the-fly compression (Krintz & Sucu, 2006). On-the-fly, or online, compression is mandatory for real-time interactive applications. ACE is able to adapt to changes in resource performance and network technology; the benefits of using ACE become apparent when the underlying communication performance varies or the network technology changes, as in mobile communication networks. ACE employs an efficient and accurate forecasting toolkit known as the Network Weather Service (NWS) to predict whether applying compression will be profitable given the underlying resource performance. Short-term forecasts of the compression ratio and of the compressed and uncompressed transfer times are made by NWS using a series of estimation techniques, together with its own internal models that estimate compression performance and changes in data compressibility. Based on the end-to-end path information obtained by NWS, ACE then selects among several widely used compression techniques, including bzip, zlib and LZO, to perform transparent compression at the TCP socket level. ACE compresses data in 32 KB blocks, and a 4-byte header is appended to each block to indicate the block size and the compression technique used.
It has been shown to improve transfer performance by 8-93 percent over commonly used compression algorithms (Krintz & Sucu, 2006).
However, ACE may introduce computation overhead due to the massive amount of computation needed during the prediction process. Besides, prediction errors may lead to inaccurate decisions, and the large compression time cost of an algorithm such as bzip may impose additional delays.
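The block framing just described can be sketched as follows; the exact ACE header layout is not documented here, so the split into one technique byte and three length bytes is an assumption.

```python
import struct
import zlib

# Sketch of ACE-style block framing: data is cut into 32 KB blocks and
# each compressed block carries a 4-byte header giving the technique used
# and the compressed block size.
BLOCK = 32 * 1024
TECH_ZLIB = 1

def frame_stream(data):
    frames = []
    for i in range(0, len(data), BLOCK):
        comp = zlib.compress(data[i:i + BLOCK])
        header = struct.pack(">I", (TECH_ZLIB << 24) | len(comp))
        frames.append(header + comp)
    return b"".join(frames)

wire = frame_stream(b"satellite " * 20_000)   # ~200 KB of sample data
print(len(wire), "bytes after framing and compression")
```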


Adaptive Online Compression (AdOC)

AdOC is an adaptive online compression algorithm suited to any application data transfer; it automatically adapts the level of compression to the speed of the network (Jeannot et al., 2002). Multithreading and a First-In-First-Out (FIFO) data buffer are two important features of the algorithm. The sender consists of two threads, a compression thread and a communication thread. The compression thread reads and compresses the data, while the communication thread sends it. The compression thread writes the data into the FIFO data buffer, and the communication thread retrieves it from there. Thus, the compression level used during compression depends on the size of the FIFO queue. To eliminate the overhead encountered when data cannot be compressed, the AdOC algorithm compresses data into smaller, independent chunks. This makes AdOC less reactive to short-term changes in bandwidth, but keeping the same compression level for long runs of data also improves the compression ratio (Jeannot et al., 2002). However, chunks that are too small simply cause FIFO queue overhead; hence, the size of the data chunks needs to be chosen appropriately. Since the AdOC algorithm compresses data into smaller, independent chunks, the network load may increase and network congestion may occur when it works under a satellite network.
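AdOC's central idea, deriving the compression level from the FIFO backlog, can be sketched in a few lines; the thresholds below are illustrative assumptions rather than the published algorithm's actual parameters.

```python
import zlib

# Sketch: choose the zlib compression level from the current FIFO backlog.
def adaptive_level(fifo_len):
    if fifo_len == 0:
        return 0    # link is keeping up: send data uncompressed
    if fifo_len < 4:
        return 1    # light backlog: fast, cheap compression
    if fifo_len < 16:
        return 6    # growing backlog: default effort
    return 9        # link is the bottleneck: maximum compression

for backlog in (0, 2, 8, 32):
    level = adaptive_level(backlog)
    out = zlib.compress(b"chunk of application data " * 64, level)
    print(f"backlog {backlog:2d} -> level {level} -> {len(out)} bytes")
```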
5. Proposed real-time adaptive packet compression scheme

This section provides an overview of the proposed real-time adaptive packet compression scheme, highlighting its main concept and properties.

Concept of the proposed scheme

The concept of the proposed real-time adaptive packet compression scheme in a satellite network topology is shown in Figure 10 below. As stated earlier, the main objective of this research is to overcome the limitations and constraints of the satellite communication link, namely high latency and low bandwidth; therefore, the performance of the satellite link is the main consideration in the proposed scheme.
The proposed approach focuses only on the high-latency satellite link area: the proposed scheme is implemented in both gateway A and gateway B. Each gateway acts as either compressor or decompressor, as the communication channel between gateway A and gateway B is a duplex link. In the proposed compression scheme, the concept of a virtual channel is adopted to increase network performance and reliability, simplify the network architecture and improve network services. A virtual channel is a channel designation which differs from the actual communication channel; it is a dedicated path designed specifically for the sender and receiver only. Since packet header compression is employed in the proposed scheme, this concept is mandatory to facilitate data transmission over the link. When the transmitted data packets arrive at gateway A, the packets undergo compression prior to transmission over the virtual channel. When the compressed data packets reach gateway B, they first undergo decompression before being transmitted to the end user. Apart from that, adaptive packet compression is mandatory due to the adoption of block compression in the proposed scheme. Although block compression helps to increase the compression ratio, it has its downside too: it might impose additional delay when the compression buffer fills slowly due to a lack of network traffic while a fast response is needed.

Strengths of the proposed scheme

The proposed real-time adaptive packet compression scheme has several important properties, discussed in the following. To fully exploit the positive effect of compression, the proposed scheme is not restricted to a specific packet flow but is applied to all incoming packets from numerous source hosts and sites. One unique feature of the proposed scheme is the adoption of the virtual channel concept, which has not been used in the other reviewed schemes. This concept simplifies packet routing and makes data transmission more efficient, especially when packet compression is employed. In the proposed scheme, to facilitate packet transmission over the communication channel, a peer-to-peer synchronized virtual channel is established between the sender (compressor) and the receiver (decompressor). Block compression exploits the similarities of consecutive packets in the flow: compression is performed on an aggregated set of packets (a block) to further improve the compression ratio and increase the effective bandwidth. Apart from that, both the packet header and the payload are compressed in the proposed scheme. In many services and applications, such as Voice over IP, interactive games and messaging, the payload of the packets is almost the same size as, or even smaller than, the header (Effnet, 2004). Since the header fields remain almost constant between consecutive packets of the same packet stream, it is possible to compress those headers, providing more than 90% savings in many cases (Effnet, 2004).
In addition to header compression, payload compression also brings significant benefits in increasing the effective bandwidth. Payload compression compresses the data portion of the transmission, using compression algorithms to identify relatively short byte sequences that are repeated frequently over time. It provides significant savings in overall packet size, especially for packets with large data portions. In addition, adaptive compression is employed in the proposed scheme: network packets are compressed adaptively and selectively to exploit the positive effect of block compression while avoiding its negative effect. To avoid the greater delay imposed by block compression, the set of aggregated packets (a block of packets) in the compression buffer is compressed adaptively based on certain conditions.

Overview of the proposed scheme

Figure 11 below shows the main components of the proposed real-time adaptive packet compression scheme. The compression scheme is made up of a source node (Gateway A), which acts as the compressor, and a destination node (Gateway B), which is the decompressor.
A peer-to-peer synchronized virtual channel, which acts as a dedicated path, will be established between Gateway A and Gateway B.
With the virtual channel in place, packet header compression can be performed on all network packets. Data transmission between Gateway A and Gateway B can be divided into three major stages: the compression stage, the transmission stage and the decompression stage.
The compression stage takes place in Gateway A, the transmission stage in the virtual channel, and the decompression stage in Gateway B.

Compression stage

Once the incoming packets reach Gateway A, they are stored in a buffer. This buffer is also known as the compression buffer, as it is used for block compression, which is discussed in detail in the following section. Generally, in block compression, packets are aggregated into a block prior to compression. The buffer size depends on the maximum number of packets allowed to be aggregated. Block compression is employed to increase the compression ratio and reduce the network load. The compression ratio increases with the buffer size: the larger the buffer, the better the compression ratio, as more packets can be aggregated.
However, block compression may lead to higher packet delays due to the waiting time in the buffer and the compression processing time. A larger buffer thus means higher compression processing latency and more packet drops, so a trade-off point must be found. Once the compression buffer fills up, its contents are transferred to the compress module to undergo compression.
The compression buffer is compressed using the well-known zlib compression library (Roelofs et al., 2010). One apparent drawback of block compression is the possible delay observed when the compression buffer fills slowly due to a lack of network traffic while a fast response is needed.
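A minimal sketch of such a compression buffer is given below, assuming an illustrative buffer size and flush deadline to mitigate the slow-fill delay; packets are length-prefixed inside the block so that the decompressor can later split them apart.

```python
import time
import zlib

# Packets accumulate in the compression buffer; the block is flushed when
# the buffer is full or when a deadline expires, so that light traffic is
# not delayed indefinitely. Buffer size and timeout are assumptions.
MAX_PACKETS = 8
MAX_WAIT_S = 0.05

class CompressionBuffer:
    def __init__(self):
        self.packets = []
        self.first_arrival = None

    def add(self, packet):
        if not self.packets:
            self.first_arrival = time.monotonic()
        self.packets.append(packet)

    def ready(self):
        if not self.packets:
            return False
        return (len(self.packets) >= MAX_PACKETS or
                time.monotonic() - self.first_arrival >= MAX_WAIT_S)

    def flush(self):
        # Length-prefix each packet so the decompressor can split the
        # block back into the original packets.
        block = b"".join(len(p).to_bytes(2, "big") + p for p in self.packets)
        self.packets, self.first_arrival = [], None
        return zlib.compress(block)

buf = CompressionBuffer()
for p in (b"alpha", b"bravo", b"charlie"):
    buf.add(p)
time.sleep(MAX_WAIT_S)              # no further traffic: deadline expires
if buf.ready():
    print(len(buf.flush()), "compressed bytes sent over the virtual channel")
```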
Transmission stage

In this stage, the compressed block is transmitted over the communication link, which in this scheme is a virtual channel, to Gateway B.
Decompression stage

The compressed block is transferred directly to the decompress module once it reaches Gateway B. The original block of packets is then divided into individual packets according to the original size of each combined packet.

Block compression

Block compression exploits the similarities of consecutive packets in the flow: a specific number of packets are aggregated into a block before undergoing compression. Due to the correlation of packets inside the packet stream, the compression ratio is greatly improved. Besides, block compression helps to reduce heavy network load and avoid network congestion.
This is because it reduces the number of packets that need to be transmitted over the communication link, by encapsulating a significant number of individual packets into one large packet (block). An example of block compression, where four network packets are collected in a compression buffer before being compressed and transmitted to the receiver, is shown in Figure 12. As mentioned earlier, one shortcoming of block compression is that it may add considerable packet delay, as packets are not transmitted immediately but are instead stored in the compression buffer. This delay is expected to increase with the number of packets to be combined. For example, Table 1 below shows the total number of accumulated transmitted packets over 5 units of time for a high-latency network with the compression scheme (HLNCS) and a high-latency network without it (HLN).
Due to the waiting time in the compression buffer and the compression processing time, packet transmission is delayed.
However, the total number of packets transmitted is almost doubled, even though there is a small delay initially. A trade-off value between the packet delay and the number of packets to be combined needs to be determined; a round-trip sketch of the block framing used in both stages is given below.
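The round trip through both stages can be sketched as follows, reusing the 2-byte length-prefix framing assumed in the compression stage sketch; a real implementation may record the original packet sizes differently.

```python
import zlib

# The decompressor inflates the block and recovers each packet from its
# 2-byte length prefix.
def split_block(compressed_block):
    block = zlib.decompress(compressed_block)
    packets, i = [], 0
    while i < len(block):
        size = int.from_bytes(block[i:i + 2], "big")
        packets.append(block[i + 2:i + 2 + size])
        i += 2 + size
    return packets

packets = [b"alpha", b"bravo", b"charlie", b"delta"]
block = zlib.compress(b"".join(len(p).to_bytes(2, "big") + p for p in packets))
assert split_block(block) == packets    # every packet restored intact
```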
6. Results and discussions

In this section, the proposed real-time adaptive packet compression scheme is evaluated and validated by simulation. Two important performance metrics, the packet drop rate and the throughput of data transmission, are evaluated, as these two metrics represent the Quality of Service of the satellite link. The packet drop rate is the ratio, in percent, between the total number of packets lost due to buffer overflow (congestion) and transmission errors, and the total number of packets transmitted successfully.
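Expressed directly, and with purely illustrative numbers:

```python
# The packet drop rate as defined above.
def packet_drop_rate(lost_congestion, lost_errors, delivered_ok):
    return (lost_congestion + lost_errors) / delivered_ok * 100

print(f"{packet_drop_rate(12, 3, 985):.2f} %")   # 1.52 %
```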


