CS601 Assignment 4: Related Theory


Digital Signal Service 0 (DS0)
Digital Signal 0 (DS0) is a basic digital signaling rate of 64 kbit/s, corresponding to the capacity of one voice-frequency-equivalent channel.[1] The DS0 rate, and its equivalents E0 and J0, form the basis for the digital multiplex transmission hierarchy in telecommunications systems used in North America, Europe, Japan, and the rest of the world, for both the early plesiochronous systems such as T-carrier and for modern synchronous systems such as SDH/SONET.
The DS0 rate was introduced to carry a single digitized voice call. For a typical phone call, the audio signal is digitized at an 8 kHz sample rate using 8-bit pulse-code modulation for each of the 8000 samples per second. This results in a data rate of 64 kbit/s.
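The arithmetic behind the DS0 rate can be sketched directly from the two PCM parameters above:

```python
# Sketch: deriving the DS0 rate from standard telephony PCM parameters.
sample_rate_hz = 8000   # voice sampled at 8 kHz
bits_per_sample = 8     # 8-bit PCM code word per sample

ds0_rate_bps = sample_rate_hz * bits_per_sample
print(ds0_rate_bps)  # 64000 bit/s = 64 kbit/s
```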
Because of its fundamental role in carrying a single phone call, the DS0 rate forms the basis for the digital multiplex transmission hierarchy in telecommunications systems used in North America. To limit the number of wires required between two facilities involved in exchanging voice calls, a system was built in which multiple DS0s are multiplexed together on higher-capacity circuits. In this system, twenty-four (24) DS0s are multiplexed into a DS1 signal, and twenty-eight (28) DS1s are multiplexed into a DS3. When carried over copper wire, this is the well-known T-carrier system, with T1 and T3 corresponding to DS1 and DS3, respectively.
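The multiplexing figures above can be checked numerically. A DS1 frame carries one 8-bit sample from each of its 24 DS0s plus one framing bit, repeated 8000 times per second:

```python
# Sketch of the North American multiplexing arithmetic (standard T-carrier figures).
ds0_rate_bps = 64_000

# 24 channels x 8 bits + 1 framing bit, 8000 frames per second:
ds1_rate_bps = (24 * 8 + 1) * 8000
print(ds1_rate_bps)  # 1544000 -> the familiar 1.544 Mbit/s T1 rate

# 28 DS1s form a DS3; the DS1 payload alone accounts for most of it,
# with the remainder of the 44.736 Mbit/s DS3 line rate being overhead.
print(28 * ds1_rate_bps)  # 43232000 bit/s of DS1 payload
```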
Besides its use for voice communications, the DS0 rate may support twenty 2.4 kbit/s channels, ten 4.8 kbit/s channels, five 9.6 kbit/s channels, one 56 kbit/s channel, or one 64 kbit/s clear channel.
E0 (standardized as ITU G.703) is the European equivalent of the North American DS0 for carrying a single digitized voice call. However, there are some subtle differences in implementation. Voice signals are encoded for carriage over E0 according to ITU G.711. Note that when a T-carrier system is used, as in North America, robbed-bit signaling can mean that a DS0 channel carried over that system is not an error-free bit stream. The out-of-band signaling used in the European E-carrier system avoids this.


T-carrier
The United States Bell System activated the first commercial digital carrier system in 1962 in Chicago, Illinois, where electrical noise from high-tension lines and automotive ignitions interfered with analog systems. The system was designated T1, with the T standing for Terrestrial to distinguish land transmission from satellite transmission. (Bell Laboratories also launched Telstar I, the first communications satellite, in 1962.)

T-carrier was designed for a four-wire twisted-pair circuit, although the DSX-1 interface is medium-independent, i.e., it can be provisioned over any transmission medium, at least at the T1 rate of 1.544 Mbps. At the T3 rate of 44.736 Mbps, twisted pair is unsuitable over distances greater than 50 feet due to signal attenuation.

As the first digital carrier system, T-carrier set the standards for digital transmission and switching, including the use of pulse code modulation (PCM) for digitizing analog voice signals. (Note: T-carrier uses the µ-law (mu-law) companding technique for PCM.) T-carrier not only set the basis for the North American digital hierarchy, but also led to the development of E-carrier in Europe and J-carrier in Japan.

The fundamental building block of T-carrier is a 64-kbps channel, referred to as DS-0 (Digital Signal level Zero). Through time-division multiplexing (TDM), T-carrier interleaves DS-0 channels at various signaling rates to create the services that comprise the North American digital hierarchy, as detailed in Table T-1.
Table T-1: North American Digital Hierarchy: T-carrier
Digital Signal (DS) Level   Data Rate       64-Kbps Channels (DS-0s)   Equivalent T1s
DS-0                        64 Kbps         1                          1/24
DS-1 (T1)                   1.544 Mbps      24                         1
DS-1C (T1C)                 3.152 Mbps      48                         2
DS-2 (T2)                   6.312 Mbps      96                         4
DS-3 (T3)                   44.736 Mbps     672                        28
DS-4 (T4)                   274.176 Mbps    4,032                      168
DS-5 (T5)                   400.352 Mbps    5,760                      250
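The µ-law companding mentioned above compresses the dynamic range of a voice sample before quantization, giving quiet signals more resolution. A minimal sketch of the underlying continuous curve (real G.711 codecs use a segmented 8-bit approximation of it, so this is illustrative only):

```python
import math

# Illustrative sketch of mu-law companding with mu = 255, the value used
# in North American telephony. Inputs are linear samples in [-1, 1].
MU = 255

def mu_law_compress(x: float) -> float:
    """Map a linear sample in [-1, 1] to a companded value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y: float) -> float:
    """Inverse of mu_law_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A small input is boosted well above its linear value before quantizing,
# and compress/expand round-trip back to the original sample.
print(mu_law_compress(0.01))
print(mu_law_expand(mu_law_compress(0.25)))
```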


cyclic redundancy checking
Cyclic redundancy checking is a method of checking for errors in data that has been transmitted on a communications link. A sending device divides each block of data to be transmitted by a 16- or 32-bit generator polynomial and appends the resulting remainder, the cyclic redundancy code (CRC), to the block. The receiving end applies the same polynomial to the data and compares its result with the result appended by the sender. If they agree, the data has been received successfully. If not, the sender can be notified to resend the block of data.
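The send/verify round trip can be sketched with the CCITT 16-bit generator polynomial x^16 + x^12 + x^5 + 1 (0x1021). A bit-by-bit version, chosen for clarity over the table-driven form used in practice:

```python
# Sketch of CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF),
# computed bit by bit over the message.
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Sender appends the CRC; receiver recomputes it and compares.
block = b"hello, world"
sent_crc = crc16_ccitt(block)
assert crc16_ccitt(block) == sent_crc            # intact block passes
assert crc16_ccitt(b"jello, world") != sent_crc  # corrupted byte is detected
```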
The ITU-TS (CCITT) has a standard for a 16-bit polynomial to be used to obtain the cyclic redundancy code (CRC) that is appended. IBM's Synchronous Data Link Control and other protocols use CRC-16, another 16-bit polynomial. A 16-bit cyclic redundancy code detects all single- and double-bit errors and ensures detection of 99.998% of all possible errors. This level of detection assurance is considered sufficient for data transmission blocks of 4 kilobytes or less. For larger transmissions, a 32-bit CRC is used. The Ethernet and token ring local area network protocols both use a 32-bit CRC.
In Europe, CRC-4 is a multiframe system of cyclic redundancy checking that is required for switches on E-1 lines.
A less complicated but less capable error detection method is the checksum method. See modem error-correcting protocols for a list of protocols that use either of these methods.
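One concrete weakness of a simple additive checksum, compared with a CRC, is that it is order-independent, so it cannot detect reordered bytes. A minimal sketch:

```python
# Sketch of a simple 16-bit additive checksum, the less capable method
# mentioned above. Because addition is commutative, swapping bytes in
# the block leaves the checksum unchanged, an error a CRC would catch.
def checksum16(data: bytes) -> int:
    return sum(data) & 0xFFFF

print(checksum16(b"ab") == checksum16(b"ba"))  # True: reordering goes undetected
```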







