
Friday, March 5, 2010

SS7 Tutorial

Overview

Common Channel Signaling System No. 7 (i.e., SS7 or C7) is a global standard for telecommunications defined by the International Telecommunication Union (ITU) Telecommunication Standardization Sector (ITU-T). The standard defines the procedures and protocol by which network elements in the public switched telephone network (PSTN) exchange information over a digital signaling network to effect wireless (cellular) and wireline call setup, routing and control. The ITU definition of SS7 allows for national variants such as the American National Standards Institute (ANSI) and Bell Communications Research (Telcordia Technologies) standards used in North America and the European Telecommunications Standards Institute (ETSI) standard used in Europe.
The SS7 network and protocol are used for:
  • basic call setup, management, and tear down
  • wireless services such as personal communications services (PCS), wireless roaming, and mobile subscriber authentication
  • local number portability (LNP)
  • toll-free (800/888) and toll (900) wireline services
  • enhanced call features such as call forwarding, calling party name/number display, and three-way calling
  • efficient and secure worldwide telecommunications

Signaling Links

SS7 messages are exchanged between network elements over 56 or 64 kilobit per second (kbps) bidirectional channels called signaling links. Signaling occurs out-of-band on dedicated channels rather than in-band on voice channels. Compared to in-band signaling, out-of-band signaling provides:
  • faster call setup times (compared to in-band signaling using multi-frequency (MF) signaling tones)
  • more efficient use of voice circuits
  • support for Intelligent Network (IN) services which require signaling to network elements without voice trunks (e.g., database systems)
  • improved control over fraudulent network usage

Signaling Points

Each signaling point in the SS7 network is uniquely identified by a numeric point code. Point codes are carried in signaling messages exchanged between signaling points to identify the source and destination of each message. Each signaling point uses a routing table to select the appropriate signaling path for each message.
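As a rough illustration of the routing-table idea (not the actual MTP3 routing procedure), the Python sketch below selects an outgoing linkset by destination point code; the point codes and linkset names are invented for the example.

# Hedged sketch: choose an outgoing linkset from a routing table keyed by
# destination point code. Point codes and linkset names are invented;
# real MTP3 routing also covers load sharing, priorities and route management.
ROUTING_TABLE = {
    "1-1-1": ["linkset_to_stp_a", "linkset_to_stp_b"],   # primary, then mate STP
    "2-4-7": ["linkset_to_stp_b"],
}

def select_linkset(dest_point_code, available_linksets):
    """Return the first provisioned linkset that is currently available."""
    for linkset in ROUTING_TABLE.get(dest_point_code, []):
        if linkset in available_linksets:
            return linkset
    raise LookupError("no route to point code " + dest_point_code)

print(select_linkset("1-1-1", {"linkset_to_stp_b"}))   # falls back to the mate STP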
There are three kinds of signaling points in the SS7 network (Fig. 1):
  • SSP (Service Switching Point)
  • STP (Signal Transfer Point)
  • SCP (Service Control Point)
Figure 1. SS7 Signaling Points
SSPs are switches that originate, terminate, or tandem calls. An SSP sends signaling messages to other SSPs to set up, manage, and release voice circuits required to complete a call. An SSP may also send a query message to a centralized database (an SCP) to determine how to route a call (e.g., a toll-free 1-800/888 call in North America). An SCP sends a response to the originating SSP containing the routing number(s) associated with the dialed number. An alternate routing number may be used by the SSP if the primary number is busy or the call is unanswered within a specified time. Actual call features vary from network to network and from service to service.
Network traffic between signaling points may be routed via a packet switch called an STP. An STP routes each incoming message to an outgoing signaling link based on routing information contained in the SS7 message. Because it acts as a network hub, an STP provides improved utilization of the SS7 network by eliminating the need for direct links between signaling points. An STP may perform global title translation, a procedure by which the destination signaling point is determined from digits present in the signaling message (e.g., the dialed 800 number, calling card number, or mobile subscriber identification number). An STP can also act as a "firewall" to screen SS7 messages exchanged with other networks.
Because the SS7 network is critical to call processing, SCPs and STPs are usually deployed in mated pair configurations in separate physical locations to ensure network-wide service in the event of an isolated failure. Links between signaling points are also provisioned in pairs. Traffic is shared across all links in the linkset. If one of the links fails, the signaling traffic is rerouted over another link in the linkset. The SS7 protocol provides both error correction and retransmission capabilities to allow continued service in the event of signaling point or link failures.

SS7 Signaling Link Types

Signaling links are logically organized by link type ("A" through "F") according to their use in the SS7 signaling network.
Figure 2. SS7 Signaling Link Types
A Link: An "A" (access) link connects a signaling end point (e.g., an SCP or SSP) to an STP. Only messages originating from or destined to the signaling end point are transmitted on an "A" link.
 
B Link: A "B" (bridge) link connects an STP to another STP. Typically, a quad of "B" links interconnect peer (or primary) STPs (e.g., the STPs from one network to the STPs of another network). The distinction between a "B" link and a "D" link is rather arbitrary. For this reason, such links may be referred to as "B/D" links.
 
C Link: A "C" (cross) link connects STPs performing identical functions into a mated pair. A "C" link is used only when an STP has no other route available to a destination signaling point due to link failure(s). Note that SCPs may also be deployed in pairs to improve reliability; unlike STPs, however, mated SCPs are not interconnected by signaling links.
 
D Link: A "D" (diagonal) link connects a secondary (e.g., local or regional) STP pair to a primary (e.g., inter-network gateway) STP pair in a quad-link configuration. Secondary STPs within the same network are connected via a quad of "D" links. The distinction between a "B" link and a "D" link is rather arbitrary. For this reason, such links may be referred to as "B/D" links.
 
E Link: An "E" (extended) link connects an SSP to an alternate STP. "E" links provide an alternate signaling path if an SSP's "home" STP cannot be reached via an "A" link. "E" links are not usually provisioned unless the benefit of a marginally higher degree of reliability justifies the added expense.
 
F Link: An "F" (fully associated) link connects two signaling end points (i.e., SSPs and SCPs). "F" links are not usually used in networks with STPs. In networks without STPs, "F" links directly connect signaling points.

Bluetooth 2 - Enhanced Data Rate, EDR

Bluetooth EDR or Bluetooth 2 is an upgrade of the original Bluetooth specification. It is based on the original Bluetooth standard, which is well established as a wireless technology and has found a very significant number of applications, particularly in areas such as connecting mobile or cell phones to hands-free headsets.
One of the disadvantages of the original version of Bluetooth in some applications was that the data rate was not sufficiently high, especially when compared to other wireless technologies such as 802.11. In November 2004, a new version of Bluetooth, known as Bluetooth 2, was ratified. This not only gives an enhanced data rate but also offers other improvements.
Of all the features included in Bluetooth 2, it is the enhanced data rate (EDR) facility that has attracted the most comment. In the new specification the maximum data rate is able to reach 3 Mbps, a significant increase on what was available in the previous Bluetooth specifications.

Why is Bluetooth 2 needed?

As proved particularly by the computer industry, there is always a need for increased data rates and ever increasing capacity. With this in mind, and given that the previous version of Bluetooth, version 1.2, allowed a maximum data rate of 1 Mbps (reflected in a real throughput of around 723 kbps), the new specification should allow many new applications to be run. In turn this will open up the market for Bluetooth even more and allow further application areas to be addressed.
While speed on its own opens up more opportunities, the strategy behind Bluetooth 2 with its enhanced data rate is more deep rooted. When the Bluetooth 2 specification was released there were no applications that were in immediate need of the new enhanced data rate. For example even a high quality stereo audio stream required a maximum of only 345 kbps.
The reason is that as Bluetooth use increases and the number of applications grows, users will need to run several links concurrently. Not only may Bluetooth need to be used for streaming audio, but other applications such as running computer peripherals will increase. The reason becomes clearer when looking at real situations where interference is present. Typically it is found that a good margin is required to allow for re-sends and other data. Under Bluetooth 1.2, high quality stereo audio can be sent on its own within the available bandwidth and with sufficient margin. However when other applications are added there is not sufficient margin to allow the system to operate satisfactorily. Bluetooth 2 solves this problem and provides sufficient bandwidth for a variety of links to be operated simultaneously, while still allowing for a sufficient bandwidth margin within the system.
There are other advantages to running Bluetooth 2. One of the major elements is in terms of power consumption. Although the transmitter, receiver and logic need to be able to handle data at a higher speed, which normally requires a higher current consumption, this is more than outweighed by the fact that they need only to remain fully active for about a third of the time. This brings significant advantages in terms of battery life, a feature that is of particular importance in many Bluetooth applications.
Compatibility is a major requirement when any system is upgraded. The same is true for Bluetooth, and this has been a major requirement and concern when developing the Bluetooth 2 standard. The new standard is completely backward compatible and allows networks to contain a mixture of EDR (enhanced data rate) devices as well as the standard devices. A key element of this is that the new modulation schemes that have been incorporated into Bluetooth 2 are compatible in their nature with the standard rate specification. In this way the new standard will be able to operate with any mixture of devices from whatever standard.

How it works

One of the main reasons why Bluetooth 2 is able to support a much higher data throughput is that it utilises a different modulation scheme for the payload data. However this is implemented in a manner in which compatibility with previous revisions of the Bluetooth standard is still retained.
Bluetooth data is transmitted as packets that are made up from a standard format. This consists of four elements:
  • The Access Code, which is used by the receiving device to recognise the incoming transmission
  • The Header, which describes the packet type and its length
  • The Payload, which is the data that is required to be carried
  • The Inter-Packet Guard Band, which is required between transmissions to ensure that transmissions from two sources do not collide, and to enable the receiver to re-tune

In previous versions of the Bluetooth standard, all three elements of the transmission, i.e. Access Code, Header and Payload were transmitted using Gaussian Frequency Shift Keying (GFSK) where the carrier is shifted by +/- 160 kHz indicating a one or a zero, and in this way one bit is encoded per symbol.
The Bluetooth 2.0 specification uses a variety of forms of modulation. GFSK is still used for transmitting the Access Code and Header and in this way compatibility is maintained. However other forms of modulation can be used for the Payload. There are two additional forms of modulation that have been introduced. One of these is mandatory, while the other is optional.

A further small change is the addition of a small guard band between the Header and the payload. In addition to this a short synchronisation word is inserted at the beginning of the payload.

Mandatory modulation format

The first of the new modulation formats, which must be included on any Bluetooth 2 device, gives a twofold improvement in the data rate and thereby allows a maximum speed of 2 Mbps. This is achieved by using pi/4 differential quaternary phase shift keying (pi/4 DQPSK). This form of modulation is significantly different to the GFSK that was used on previous Bluetooth standards in that the new standard uses a form of phase modulation, whereas the previous ones used frequency modulation.
Using quaternary phase shift modulation means that there are four possible phase positions for each symbol. Accordingly two bits can be encoded per symbol, and this provides the twofold data increase over the frequency shift keying used for the previous versions of Bluetooth.

Higher speed modulation

To enable the full threefold increase in data rate to be achieved, a further form of modulation is used. Eight phase differential phase shift keying (8DPSK) defines eight phase positions with 45 degrees between each of them, so that three bits can be encoded per symbol. This enables the data rate of 3 Mbps to be achieved.
As the separation between the different phase positions is much smaller than it was with the DQPSK used to provide the twofold increase in speed, the noise immunity has been reduced in favour of the increased speed. Accordingly this optional form of modulation is only used when a link is sufficiently robust.
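Since both EDR schemes keep the basic-rate symbol rate of 1 Msymbol/s and only change the number of bits carried per symbol, the gross air rates follow from simple arithmetic; the short Python sketch below works through it.

# Gross Bluetooth air rates from bits per symbol at the common 1 Msymbol/s symbol rate.
SYMBOL_RATE = 1_000_000   # symbols per second

schemes = {
    "GFSK (basic rate)": 1,   # 1 bit per symbol  -> 1 Mbps
    "pi/4 DQPSK (EDR)": 2,    # 2 bits per symbol -> 2 Mbps
    "8DPSK (EDR)": 3,         # 3 bits per symbol -> 3 Mbps
}

for name, bits_per_symbol in schemes.items():
    print(name + ":", SYMBOL_RATE * bits_per_symbol / 1e6, "Mbps gross")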

Packet formats

The Bluetooth 2 specification defines ten new packet formats for use with the higher data rate modulation schemes, five for each of the enhanced data rate schemes. Of each five, three are for the 1, 3 and 5 slot asynchronous packets used for transferring data, and the remaining two are used for 3 and 5 slot extended Synchronous Connection Orientated (eSCO) packets. These use bandwidth that is normally reserved for voice communications.
The new format for these packets does not incorporate FEC. If this is required then the system switches back automatically to the standard rate packets. However many of the links are over a very short range where the signal level is high and the link quality good.
It is necessary for the packet type to be identified so that the receiver can decode the packets correctly, knowing also the type of modulation being used. An identifier is therefore included in the header, which is sent using GFSK. The packet type identifier used for the previous versions of Bluetooth was only 4 bits. This gave sufficient capability for the original system, but there was insufficient space for the additional information that needed to be sent for Bluetooth 2.
It was not possible to change the header format because backward compatibility would be lost. Instead, different link modes are defined. When two Bluetooth 2 / EDR devices communicate, the messages are used in a slightly different way to indicate the Bluetooth 2 / EDR modes. In this way compatibility is retained while still being able to carry the required information.

Summary

Bluetooth 2 / EDR is a significant improvement to Bluetooth and will enable it to retain its position in the market place. Its introduction, as Bluetooth has become more widely accepted and used, will enable the technology to build on its position within the market place.

Bluetooth technology

This Bluetooth tutorial is split into several pages each of which addresses different aspects of Bluetooth operation and technology:
    [1] Bluetooth overview     [2] Bluetooth EDR
Bluetooth has now established itself in the market place enabling a variety of devices to be connected together using wireless technology. Bluetooth technology has come into its own connecting remote headsets to mobile phones, but it is also used in a huge number of other applications as well.
In fact Bluetooth technology is now an integral part of many household items. Cell phones and many other devices use Bluetooth for short range connectivity. In this sort of application, Bluetooth has been a significant success.

Bluetooth beginnings ...

Bluetooth technology originated in 1994 when Ericsson came up with a concept for using a wireless connection to link items such as an earphone or a cordless headset to a mobile phone. The idea behind Bluetooth (it was not yet called Bluetooth) was developed further as the possibilities of interconnection with a variety of other peripherals such as computers, printers, phones and more were realised. Using this technology, quick and easy connections between electronic devices would be possible.
It was decided that, in order to enable the technology to move forward and be accepted, it needed to be opened up as an industry standard. Accordingly, in February 1998, five companies (Ericsson, Nokia, IBM, Toshiba and Intel) formed a Special Interest Group (SIG). Three months later in May 1998, Bluetooth was publicly announced, with the first release of the standard following in July 1999. Later, four new companies, Motorola, Microsoft, Lucent and 3Com, joined the group. Since then more companies have joined and the specification has grown and is now used in a large variety of products.

The name

The name of the Bluetooth standard originates from the Danish king Harald Blåtand, who was king of Denmark between 940 and 981 AD. His name translates as "Blue Tooth" and this was used as his nickname. A brave warrior, his main achievement was that of uniting Denmark under the banner of Christianity, and then uniting it with Norway, which he had conquered. The Bluetooth standard was named after him because Bluetooth endeavours to unite personal computing and telecommunications devices.

Bluetooth basics

Bluetooth is a wireless data system and can carry data at speeds up to 721 Kbps in its basic form and in addition to this it offers up to three voice channels. Bluetooth technology enables a user to replace cables between devices such as printers, fax machines, desktop computers and peripherals, and a host of other digital devices. Furthermore, it can provide a connection between an ad hoc wireless network and existing wired data networks.
The technology is intended to be placed in a low cost module that can be easily incorporated into electronics devices of all sorts. Bluetooth uses the licence free Industrial, Scientific and Medical (ISM) frequency band for its radio signals and enables communications to be established between devices up to a maximum distance of 100 metres.

RF system

Running in the 2.4 GHz ISM band, Bluetooth employs frequency hopping techniques with the carrier modulated using Gaussian Frequency Shift Keying (GFSK).
With many other users on the ISM band, from microwave ovens to Wi-Fi, the hopping carrier enables Bluetooth devices to avoid interference. A Bluetooth transmission only remains on a given frequency for a short time, and if any interference is present the data will be re-sent later when the signal has changed to a different channel, which is likely to be clear of other interfering signals. The standard uses a hopping rate of 1600 hops per second. These hops are spread over 79 fixed frequencies chosen in a pseudo-random sequence. The fixed frequencies occur at 2402 + n MHz, where n varies from 0 to 78, giving frequencies of 2402, 2403 .... 2480 MHz. In some countries the ISM band allocation does not allow the full range of frequencies to be used. In France, Japan and Spain, the hop sequence has to be restricted to only 23 frequencies because the ISM band allocation is smaller.
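To make the channel arithmetic concrete, the Python sketch below lists the 79 channel centre frequencies and hops between them pseudo-randomly; note that the real Bluetooth hop-selection kernel is derived from the master's address and clock and is not reproduced here.

import random

# The 79 Bluetooth RF channels: 2402 + n MHz for n = 0..78.
CHANNELS_MHZ = [2402 + n for n in range(79)]
HOP_RATE_HZ = 1600                       # hops per second
DWELL_TIME_US = 1e6 / HOP_RATE_HZ        # 625 microseconds per hop

# Illustrative pseudo-random hop sequence (NOT the real Bluetooth selection algorithm).
rng = random.Random(0)
hops = [rng.choice(CHANNELS_MHZ) for _ in range(10)]
print("dwell time per hop:", DWELL_TIME_US, "us")
print("first hops (MHz):", hops)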
During the development of the Bluetooth standard it was decided to adopt the use of frequency hopping system rather than a direct sequence spread spectrum approach because it is able to operate over a greater dynamic range. If direct sequence spread spectrum techniques were used then other transmitters nearer to the receiver would block the required transmission if it is further away and weaker.

Modulation

The way in which the data is modulated onto the Bluetooth carrier was also carefully chosen. A form of frequency shift keying known as Gaussian Frequency Shift Keying is employed. Here the frequency of the carrier is shifted to carry the modulation. A binary one is represented by a positive frequency deviation and a binary zero is represented by a negative frequency deviation. It is then filtered using a filter with a Gaussian response curve to ensure the sidebands do not extend too far either side of the main carrier. By doing this it achieves a bandwidth of 1 MHz with stringent filter requirements to prevent interference on other channels. For correct operation the level of BT is set to 0.5 and the modulation index must be between 0.28 and 0.35.
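The modulation index relates the peak frequency deviation to the symbol rate (index = 2 × deviation × symbol period), so the permitted index range maps directly to a deviation range; the sketch below works this out, assuming the 1 Msymbol/s symbol rate.

# Peak deviation implied by the GFSK modulation index limits,
# assuming a 1 Msymbol/s symbol rate (index = 2 * deviation * symbol period).
SYMBOL_RATE = 1e6   # symbols per second

for mod_index in (0.28, 0.35):
    deviation_khz = mod_index * SYMBOL_RATE / 2 / 1e3
    print("modulation index", mod_index, "-> peak deviation", round(deviation_khz), "kHz")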

Transmitter power levels

The transmitter powers for Bluetooth are quite low, although there are three different classes of output dependent upon the anticipated use and the range required. Power Class 1 is designed for long range communications up to about 100 m, and this has a maximum output power of 20 dBm. Next is Power Class 2, which is used for what are termed ordinary range devices with a range up to about 10 m, with a maximum output power of 4 dBm. Finally there is Power Class 3 for short range devices. This supports communication only to about 10 cm and it has a maximum output power of 0 dBm.
There are also some frequency accuracy requirements for Bluetooth transmissions. The transmitted initial centre frequency must be within ±75 kHz of the nominal channel centre frequency. The initial frequency accuracy is defined as being the frequency accuracy before any information is transmitted and as such any frequency drift requirement is not included.
In order to enable effective communications to take place in an environment where a number of devices may receive the signal, each device has its own identifier. This is provided by having a 48 bit hard wired address identity giving a total of 2.815 x 10^14 unique identifiers.

Data transfer

There are two ways in which data is transferred. The first is by using what is termed an Asynchronous Connectionless Communications Link (ACL). This is used for file and data transfers. A second method is termed a Synchronous Connection-Orientated link (SCO). This is used for applications such as digital audio.
The ACL enables data to be transferred via Bluetooth at speeds up to the maximum rate of 723.2 kbits/sec. This occurs when it is operating in an asymmetric mode. This is commonly used because for most applications there is far more data transferred in one direction than the other. When a symmetrical mode is needed, with data transferred at the same rate in both directions, the data transfer rate falls to 433.9 kbits/sec. The synchronous links support up to three bi-directional connections at a rate of 64 kbits/sec each. The data rates are adequate for audio and most file transfers. However the available data rate is insufficient for applications such as high rate DVDs that require 9.8 Mbit/sec or for many other video applications including games spectacles.
Data is organised into packets to be sent across a Bluetooth link. The Bluetooth specification lists seventeen different formats that can be used dependent upon the requirements. They have options for elements such as forward error correction data and the like. However the standard packet consists of a 72 bit access code field, a 54 bit header field, and then the data to be transmitted, which may be between 0 and 2745 bits. This data includes the 16 bit CRC if it is needed.
As it is likely that interference will cause errors, error handling is incorporated within the system. For asynchronous links packet sequence numbers are transmitted. If an error is detected in a packet then the receiver can request it to be re-sent. Error coding using a 16 bit CRC is also available. For the synchronous links packets cannot be re-sent as there is unlikely to be sufficient bandwidth available to re-send data and "catch up". However it is possible to include some forward error control.
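To illustrate the kind of 16-bit check mentioned above, here is a generic bit-wise CRC-16 routine using the CCITT polynomial 0x1021; the fixed initial value is illustrative only, since the actual Bluetooth payload CRC is seeded from part of the device address.

def crc16_ccitt(data: bytes, initial: int = 0x0000) -> int:
    """Generic bit-wise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1.

    Illustrative only: Bluetooth seeds its payload CRC from part of the
    device address rather than the fixed initial value used here.
    """
    crc = initial
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"example payload")))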

Communication nets

In order that Bluetooth devices may communicate with each other they form small clusters or nets. These are termed "piconets" and comprise up to eight devices. Within a piconet, one of the devices assumes the role of "Master" while the others become slaves.
If more than eight devices are available to join the net, eight are allowed in, and the others need to remain in an inactive standby state. They may be requested to join the net at a later stage if required.
To enable a net to be set up, the master transmits an enquiry message every 1.28 seconds to discover whether there are any other devices within range. If a reply is received then an invitation to join the net is transmitted to the specific device that has responded. After this the master allocates each device a member address and then controls all the transmissions.
All Bluetooth devices have a clock that runs at twice the hopping speed and this provides synchronisation to the whole net. The master transmits in the even numbered time slots whilst the slaves transmit in the odd numbered slots once they have been given permission to transmit.
As security is becoming an important issue, especially where links to computers are concerned, secure communications are possible over Bluetooth with the devices encrypting the data transmitted. A key of up to 128 bits is used and it is claimed that the level of security provided is sufficient for financial transactions. However in some countries the length of the key is limited to enable the security agencies to gain access if required.

Summary

Bluetooth is well established, but despite this further enhancements are being introduced, bringing faster data transfer rates and greater flexibility. In addition to this, efforts have been made to ensure that interoperation has been improved so that devices from different manufacturers can talk to each other more easily.

What is DVB-T?

DVB-T or Digital Video Broadcast - Terrestrial is the most widely used digital television standard in use around the globe for terrestrial television transmissions. It provides many facilities and enables a far more efficient use of the available radio frequency spectrum than the previous analogue transmissions.
The DVB-T standard was first published in 1997 and since then it has become the most widely used format for digital terrestrial broadcasting in the world. By 2008 it had been adopted in more than 35 countries, with over 60 million receivers deployed and in use.

Major milestones in DVB-T development and deployment

  • December 1994:   MPEG-2 ISO 13818-1 systems definition available
  • July 1995:   demonstrations by BBC of digital terrestrial broadcasting to several UK government officials.
  • January 1996:   The 4:2:2 video format standardised
  • February 1996:   QAM-COFDM transmission system agreed for DVB-T
  • 9th April 1996:   Implementation of the first phases of a digital terrestrial television pilot-service from Crystal Palace and Pontop Pike by BBC in the UK
  • 24th December 1996:   U.S. Government adopts DTV as first step towards a U.S. digital terrestrial network
  • March 1997:   First publication of the DVB-T standard
  • December 1997:   Over 200 DVB satellite TV channels live using DVB-S
  • November 1998:   Transmission of DVB-T starts in the UK

DVB-T basics

DVB-T makes use of many modern technologies to enable it to deliver high quality video in a broadcast environment.
The DVB-T transmission is capable of carrying a very significant level of data. Normally several television broadcasts may be carried on a single transmission and in addition to this several audio channels may be carried as well. As a result each transmission is called a multiplex.
One of the key elements of the radio or air interface is the choice of the modulation scheme for DVB-T. In line with many other forms of transmission these days, DVB-T uses OFDM, Orthogonal Frequency Division Multiplex.

Note on OFDM:

Orthogonal Frequency Division Multiplex (OFDM) is a form of transmission that uses a large number of closely spaced carriers that are modulated with low rate data. Normally these signals would be expected to interfere with each other, but by making the signals orthogonal to one another there is no mutual interference. This is achieved by having the carrier spacing equal to the reciprocal of the symbol period. This means that when the signals are demodulated they will have a whole number of cycles in the symbol period and their contribution will sum to zero - in other words there is no interference contribution. The data to be transmitted is split across all the carriers and this means that by using error correction techniques, if some of the carriers are lost due to multi-path effects, then the data can be reconstructed. Additionally having data carried at a low rate across all the carriers means that the effects of reflections and inter-symbol interference can be overcome. It also means that single frequency networks, where all transmitters can transmit on the same channel, can be implemented.
See the OFDM tutorial for more details.
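The orthogonality condition described above (carrier spacing equal to the reciprocal of the symbol period) is easy to check numerically; the short numpy sketch below correlates two sub-carriers over one symbol period and shows the cross-term summing to essentially zero.

import numpy as np

T = 1e-3                 # symbol period (arbitrary illustrative value)
spacing = 1 / T          # carrier spacing equal to 1/T gives orthogonality
t = np.linspace(0, T, 10000, endpoint=False)

def carrier(k):
    return np.exp(2j * np.pi * k * spacing * t)

same = np.mean(carrier(3) * np.conj(carrier(3)))    # a carrier against itself
cross = np.mean(carrier(3) * np.conj(carrier(4)))   # a carrier against its neighbour

print("same carrier:", round(abs(same), 3))     # ~1.0
print("adjacent carrier:", abs(cross))          # ~0, i.e. no mutual interference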

In order that the DVB-T network is able to meet the requirements of the operator, it is possible to vary a number of the characteristics:
  • 3 modulation options (QPSK, 16QAM, 64QAM):   There is a balance between the rate at which data can be transmitted and the signal to noise ratio that can be tolerated. The lower order modulation formats like QPSK do not transmit data as fast as the higher order formats such as 64QAM, but they can be received when signal strengths are lower.
  • 5 different FEC (forward error correction) rates:   Any radio system transmitting data will suffer errors. In order to correct these errors various forms of error correction are used. The rate at which this is done affects the rate at which the data can be transmitted. The higher the level of error correction that is applied, the greater the level of supporting error correction data that needs to be transmitted. In turn this reduces the data rate of the transmission. Accordingly it is necessary to match the forward error correction level to the requirements of the broadcast network. The error correction uses convolutional coding and Reed Solomon with rates of 1/2, 2/3, 3/4, 5/6, and 7/8 dependent upon the requirements.
  • 4 Guard Interval options:   The guard interval added to each symbol protects against inter-symbol interference caused by reflections and by other transmitters in a single frequency network. Fractions of 1/4, 1/8, 1/16 or 1/32 of the symbol period can be selected; a longer guard interval gives a more robust transmission but reduces the usable data capacity.
  • 2k or 8k carriers:   According to the transmission requirements the number of carriers within the OFDM signal can be varied. When fewer carriers are used, each carrier must carry a higher bandwidth for the same overall multiplex data rate. This has an impact on the resilience to reflections and the spacing between transmitters in a single frequency network. Although the systems are labelled 2k and 8k the actual numbers of carriers used are 1705 carriers for the 2k service and 6817 carriers for the 8k service.
  • 6, 7 or 8MHz channel bandwidths:   It is possible to tailor the bandwidth of the transmission to the bandwidth available and the channel separations. Three figures of bandwidth are available.
  • Video at 50Hz or 60Hz:   The refresh rate for the screen can be varied. Traditionally for analogue televisions this was linked to the frequency used for the local mains supply.
By altering the various parameters of the transmission it is possible for network operators to find the right balance between the robustness of the DVB-T transmission and its capacity.
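To show how the choices interact, the sketch below computes the useful bitrate for one combination of parameters in an 8 MHz channel; it assumes the commonly quoted figures of 6048 data carriers and an 896 µs useful symbol duration for 8k mode (1512 carriers / 224 µs in 2k mode give the same result), so treat it as a back-of-envelope check rather than a reference implementation.

# Back-of-envelope DVB-T useful bitrate for an 8 MHz channel (assumed 8k-mode figures).
DATA_CARRIERS = 6048        # data-carrying carriers in 8k mode
T_USEFUL = 896e-6           # useful symbol duration in seconds, 8k mode, 8 MHz channel

BITS_PER_CARRIER = 6        # 64QAM
CODE_RATE = 2 / 3           # convolutional FEC rate
RS_RATE = 188 / 204         # Reed-Solomon (204,188) overhead
GUARD = 1 / 32              # guard interval fraction

symbol_duration = T_USEFUL * (1 + GUARD)
useful_bits_per_symbol = DATA_CARRIERS * BITS_PER_CARRIER * CODE_RATE * RS_RATE
print("useful bitrate ~", round(useful_bits_per_symbol / symbol_duration / 1e6, 2), "Mbit/s")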

DVB-T single frequency network

One of the advantages of using OFDM as the form of modulation is that it allows the network to implement what is termed a single frequency network. A single frequency network, or SFN is one where a number of transmitters operate on the same frequency without causing interference.
Many forms of transmission, including the old analogue television broadcasts, would interfere with one another. Therefore when planning a network, adjacent areas could not use the same channels, and this greatly increased the amount of spectrum required to cover a country. By using OFDM an SFN can be implemented and this provides a significant improvement in spectrum efficiency.
A further advantage of using a system such as DVB-T that uses OFDM and allows the implementation of an SFN is that very small transmitters can be used to enhance local coverage. Small "gap fillers" may even be used to enhance indoor coverage for DVB-T.

DVB-T hierarchical modulation

Another facility that is allowed by DVB-T is known as Hierarchical Modulation. Using this technique, two completely separate data streams can be modulated onto a single DVB-T signal. A "High Priority" or HP stream is embedded within a "Low Priority" or LP stream. Using this principle DVB-T broadcasters are able to target two different types of receiver with two completely different services.
One example of where this could be used is a DVB-H mobile TV service, optimised for more difficult reception conditions, carried in the HP stream, with HDTV DVB-T services targeted at fixed antennas delivered in the LP stream.

DVB-T specification highlights

  • Number of carriers in signal:   2k, 8k
  • Modulation formats:   QPSK, 16QAM, 64QAM
  • Scattered pilots:   8% of total
  • Continual pilots:   2.6% of total
  • Error correction:   Convolutional coding + Reed Solomon; rates 1/2, 2/3, 3/4, 5/6, 7/8
  • Guard interval:   1/4, 1/8, 1/16, 1/32

DVB-T summary

DVB-T is now well established. Many countries, including the UK, are moving towards a complete switch-over from analogue to digital, with a resultant digital dividend releasing a significant amount of bandwidth for other services. However, as DVB-T has now been in use for ten years, a new standard known as DVB-T2, a development of the original DVB-T standard, is being developed. This would have backwards compatibility, but allow additional services and flexibility as well as a number of features to future-proof it.

Erlang and Erlang B Tutorial

The Erlang is widely used in telecommunications technology. The Erlang is a statistical measure of the voice traffic density in a telecommunications system, and it is widely used because, for any element in a telecommunications system, whether it is a landline or uses cellular technology, it is necessary to be able to understand the traffic volume. As a result it helps to have a definition of telecommunications traffic so that the volume can be quantified in a standard way and calculations can be made. Telecommunications network designers make great use of the Erlang to understand traffic patterns within a voice network and they use the figures to determine the capacity that is required in any area of the network.

Who was Erlang?

The Erlang is named after a Danish telephone engineer named A.K Erlang (Agner Krarup Erlang). He was born on 1st January 1878 and although he trained as a mathematician, he was the first person to investigate traffic and queuing theory in telephone circuits.
After receiving his MA, Erlang worked in a number of schools. However, Erlang was a member of the Danish Mathematician's Association (TBMI) and it was through this organization that Erlang met the Chief Engineer of the Copenhagen Telephone Company (CTC) and as a result, he went to work for them from 1908 for almost 20 years.
While he was at CTC, Erlang studied the loading on telephone circuits, looking at how many lines were required to provide an acceptable service without installing too much over-capacity that would cost the company money. There was a trade-off between cost and service level.
Erlang developed his theories over a number of years and published several papers. He expressed his findings in mathematical form so that they could be used to calculate the required level of capacity, and today the same basic equations are in widespread use.
In view of his groundbreaking work, the International Consultative Committee on Telephones and Telegraphs (CCITT) honoured him in 1946 by adopting the name "Erlang" for the basic unit of telephone traffic.
Erlang died on 3rd February 1929 after an unsuccessful abdominal operation.

Erlang basics

The Erlang is the basic unit of telecommunications traffic intensity, representing the continuous use of one circuit, and it is given the symbol "E". It is effectively call intensity in call minutes per sixty minutes. In general the period of an hour is used, but the Erlang is actually a dimensionless unit because the dimensions cancel out (i.e. minutes per minute).
The number of Erlangs is easy to deduce in a simple case. If a resource carries one Erlang, then this is equivalent to one continuous call over the period of an hour. Alternatively, if two calls were each in progress for fifty percent of the time, then this would also equal one Erlang (1E). Similarly, a radio channel that is used for fifty percent of the time carries a traffic level of half an Erlang (0.5E).
From this it can be seen that an Erlang, E, may be thought of as a use multiplier where 100% use is 1E, 200% is 2E, 50% use is 0.5E and so forth.
Interestingly for many years, AT&T and Bell Canada measured traffic in another unit called CCS, 100 call seconds. If figures in CCS are encountered then it is a simple conversion to change CCS to Erlangs. Simply divide the figure in CCS by 36 to obtain the figure in Erlangs

Erlang function or Erlang formula and symbol

The traffic in Erlangs can be expressed as a simple function or formula:
A = λ × h
Where:
λ = the mean arrival rate of new calls
h = the mean call length or holding time
A = the traffic in Erlangs.
Using this simple Erlang function or Erlang formula, the traffic can easily be calculated.
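As a quick worked example of the formula (the call figures are purely illustrative):

# Worked example of A = lambda x h, with illustrative figures.
calls_per_hour = 180          # mean arrival rate, lambda
mean_hold_minutes = 3         # mean holding time, h

traffic_erlangs = calls_per_hour * (mean_hold_minutes / 60)
print("offered traffic A =", traffic_erlangs, "E")      # 9.0 E

# For comparison, a figure quoted in CCS (100 call-seconds) is divided by 36:
print(324, "CCS =", 324 / 36, "E")                       # 9.0 E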

Erlang-B and Erlang-C

Erlang calculations are further broken down as follows:
  • Erlang B:   The Erlang B model is used to work out how many lines are required from a knowledge of the traffic figure during the busiest hour. The Erlang B figure assumes that any blocked calls are cleared immediately. This is the figure most commonly used in telecommunications capacity calculations.
  • Extended Erlang B:   The Extended Erlang B is similar to Erlang B, but it can be used to factor in the number of calls that are blocked and immediately tried again.
  • Erlang C:   The Erlang C model assumes that not all calls may be handled immediately and some calls are queued until they can be handled.
These different models are described in further detail below.

Erlang B

It is particularly important to understand the traffic volumes at peak times of the day. Telecommunications traffic, like many other commodities, varies over the course of the day, and also the week. It is therefore necessary to understand the telecommunications traffic at the peak times of the day and to be able to determine the acceptable level of service required. The Erlang B figure is designed to handle the peak or busy periods and to determine the level of service required in these periods.
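The Erlang B blocking probability is normally evaluated with the standard recurrence B(0) = 1, B(n) = A·B(n-1) / (n + A·B(n-1)); the sketch below uses it to size a route for an illustrative busy-hour traffic figure.

def erlang_b(traffic_erlangs, lines):
    """Blocking probability from the standard Erlang B recurrence."""
    b = 1.0
    for n in range(1, lines + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

# Illustrative sizing: how many lines keep blocking below 1% for 9 E of busy-hour traffic?
traffic = 9.0
lines = 1
while erlang_b(traffic, lines) > 0.01:
    lines += 1
print(lines, "lines needed; blocking =", round(erlang_b(traffic, lines), 4))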

Erlang C

The Erlang C model is used by call centres to determine how many staff or call stations are needed, based on the number of calls per hour, the average duration of the calls and the length of time calls are left in the queue. The Erlang C figure is somewhat more difficult to determine because there are more interdependent variables. The Erlang C figure is nevertheless very important to determine if a call centre is to be set up, as callers do not like being kept waiting interminably, as so often happens.
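The probability that a caller has to queue can be derived from the Erlang B result via C = N·B / (N - A·(1 - B)), valid when the offered traffic A is below the number of agents N; the staffing figures below are illustrative.

def erlang_b(traffic_erlangs, lines):
    """Erlang B recurrence (repeated here so this sketch runs on its own)."""
    b = 1.0
    for n in range(1, lines + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

def erlang_c(traffic_erlangs, agents):
    """Probability that a call has to wait, derived from the Erlang B figure."""
    if traffic_erlangs >= agents:
        return 1.0                     # the queue grows without bound
    b = erlang_b(traffic_erlangs, agents)
    return agents * b / (agents - traffic_erlangs * (1 - b))

# Illustrative call-centre figures: 120 calls per hour, 4 minute average call duration.
traffic = 120 * (4 / 60)               # 8 E offered
for agents in (9, 10, 12):
    print(agents, "agents: P(wait) =", round(erlang_c(traffic, agents), 3))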

Erlang summary

The Erlang formulas and the concepts put forward by Erlang are still an essential part of telecommunications network planning these days. As a result, telecommunications engineers should have a good understanding of the Erlang and the associated formulae.
Despite the widespread use of the Erlang concepts and formulae, it is necessary to remember that there are limitations to their use, because the Erlang formulas make assumptions. Erlang B assumes that callers who receive a busy tone will not immediately try again. Erlang C assumes that queued callers will wait indefinitely, which real callers rarely do. It is also worth remembering that the Erlang formulas are based on statistics and strictly assume an infinite number of traffic sources; however, for most cases a total of ten sources is enough to give sufficiently accurate results.
The Erlang is a particularly important element of telecommunications theory, and it is a cornerstone of many areas of telecommunications technology today. However one must be aware of its limitations and apply the findings of any work using Erlangs, the Erlang B and Erlang C formulas or functions with a certain amount of practical knowledge.

Satellite communications basics

Satellites are able to fulfill a number of roles, and one of the major roles is satellite communications. Here the satellite enables communications to be established over large distances - well beyond the line of sight. Communications satellites may be used for many applications including relaying telephone calls, providing communications to remote areas of the Earth, providing satellite communications to ships, aircraft and other mobile vehicles, and there are many more ways in which communications satellites can be used.

Satellite communications basics

When used for communications, a satellite acts as a repeater. Its height above the Earth means that signals can be transmitted over distances that are very much greater than the line of sight. An earth station transmits the signal up to the satellite. This is called the up-link and is transmitted on one frequency. The satellite receives the signal and retransmits it on what is termed the down link which is on another frequency.

Using a satellite for long distance communications

The circuitry in the satellite that acts as the receiver, frequency changer, and transmitter is called a transponder. This basically consists of a low noise amplifier, a frequency changer consisting of a mixer and local oscillator, and then a high power amplifier. The filter on the input is used to make sure that any out of band signals, such as the transponder output, are reduced to acceptable levels so that the amplifier is not overloaded. Similarly the output from the amplifiers is filtered to make sure that spurious signals are reduced to acceptable levels. The signal is received and amplified to a suitable level. It is then applied to the mixer to change the frequency in the same way that occurs in a superheterodyne radio receiver. As a result the communications satellite receives in one band of frequencies and transmits in another.
In view of the fact that the receiver and transmitter are operating at the same time and in close proximity, care has to be taken in the design of the satellite that the transmitter does not interfere with the receiver. This might result from spurious signals arising from the transmitter, or the receiver may become de-sensitised by the strong signal being received from the transmitter. The filters already mentioned are used to reduce these effects.
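As a numerical illustration of the frequency-changing step, a classic C-band transponder receives around 6 GHz and retransmits around 4 GHz; the 2225 MHz translation figure below is the one commonly used at C band, and the uplink frequencies are simply examples within the 5925 - 6425 MHz band.

# Illustrative C-band frequency translation inside a transponder.
LO_MHZ = 2225   # local oscillator / translation frequency commonly used at C band

def downlink_frequency(uplink_mhz):
    """Mixer output: downlink carrier = uplink carrier minus the local oscillator."""
    return uplink_mhz - LO_MHZ

for uplink in (5950, 6105, 6405):
    print("uplink", uplink, "MHz -> downlink", downlink_frequency(uplink), "MHz")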

Block diagram of a basic satellite transponder

Signals transmitted to satellites usually consist of a large number of signals multiplexed onto a main transmission. In this way one transmission from the ground can carry a large number of telephone circuits or even a number of television signals. This approach is operationally far more effective than having a large number of individual transmitters.
Obviously one satellite will be unable to carry all the traffic across the Atlantic. Further capacity can be achieved using several satellites on different bands, or by physically separating them from one another. In this way the beamwidth of the antenna can be used to distinguish between different satellites. Normally antennas with very high gains are used, and these have very narrow beamwidths, allowing satellites to be separated by just a few degrees.

Separating satellites by position

Telecommunications satellite links

Communications satellites are ideally placed to provide telecommunications links between different places across the globe. Traditional telecommunications links used direct "cables" linking different areas. As a result of the cost of installation and maintenance of these cables, satellites were seen as an ideal alternative. While still expensive to put in place, they provided a high bandwidth and were able to operate for many years.
In recent years the bandwidth that can be offered by cables has increased considerably, and this has negated some of the gains of satellites. Additionally the geostationary satellites used for telecommunications links introduce a significant time delay in view of the very large distances involved. This can be a problem for normal telephone calls.

Mobile satellite communications systems

There are many instances where communications need to be maintained over wide areas of the globe. Ships, aircraft and the like need to be able to communicate from points all around the world. Traditionally HF radio communications has been used, but this is unreliable. Satellite communications provide an ideal solution to this problem as they are much more reliable and they are able to provide interference free, stable communications links. As a result, satellite communications is now fitted as standard to all maritime vessels, and it is becoming increasingly used by aircraft, although it is not yet adopted for Air Traffic Management (ATM).
In addition to these users, these services can be used by many land mobile or land portable radio users. Satellite terminals are able to access the satellite and the user is able to achieve communications from almost anywhere on the globe. As these communications satellites are in geostationary orbits, communications is not possible towards the poles as in these regions it is not possible to see the satellites.

Direct broadcast communications satellites

Another variant of communications satellites is those used for direct broadcasting. This form of broadcasting has become very popular as it provides very high levels of bandwidth because of the high frequencies used. This means that large numbers of channels can be carried. It also enables large areas of the globe to be covered by one delivery system. For terrestrial broadcasting a large number of high power transmitters are required that are located around the country. Even then coverage may not be good in outlying areas.
These DBS satellites are very similar to ordinary communications satellites in concept. Naturally they require high levels of transmitted power because domestic users do not want very large antennas on their houses to be able to receive the signals. This means that very large arrays of solar cells are required along with large batteries to support the broadcasting in periods of darkness. They also have a number of antenna systems accurately directing the transmitted power to the required areas. Different antennas on the same satellite may have totally different footprints.

Satellite phone systems

Satellites have also been used for cellular style communications. They have not been nearly as successful as initially anticipated because of the enormously rapid growth of terrestrial cellular telecommunications, and its spread into far more countries and areas than predicted when the ideas for satellite personal communications was originally envisaged. Nevertheless these satellite phone systems are now well established and have established a specific market. Accordingly these satellite phone systems are now widely available for mobile communications over wide areas of the globe.
The satellite phone systems that are available have varying degrees of coverage. Some provide true global coverage, although others are restricted to the more densely populated areas of the globe.
The systems that were set up used low earth orbiting satellites, typically with a constellation of around 66 satellites. Handheld phones then communicated directly with the satellites which would then process and relay the signals as required.
Other satellite phone systems use a number of geostationary satellites, although these satellite phone systems generally require the use of a directional antenna in view of the larger distances that need to be covered to and from the satellite. Additionally the levels of latency are higher (i.e. time delay for the signal to travel to and from the satellite) in view of the much higher orbit required. However as the satellites are geostationary, satellite or beam handover is less of a problem.
The main advantage of the satellite system is that it is truly global and communications can be made from ships, in remote locations where there would be no possibility of there being a communications network. However against this the network is expensive to run because of the cost of building and maintaining the satellite network, as well as the more sophisticated and higher power handsets required to operate with the satellite. As a result calls are more expensive than those made over terrestrial mobile phone networks.

Satellite communications summary

Although the basics of satellite communications are fairly straightforward, there is a huge investment required in building the satellite and launching it into orbit. Nevertheless many communications satellites exist in orbit around the globe and they are widely used for a variety of applications from providing satellite telecommunications links to direct broadcasting and the use of satellite phone and individual satellite communication links.

CDMA Overview


ACCESS SCHEMES

For radio systems there are two resources, frequency and time. Division by frequency, so that each pair of communicators is allocated part of the spectrum for all of the time, results in Frequency Division Multiple Access (FDMA). Division by time, so that each pair of communicators is allocated all (or at least a large part) of the spectrum for part of the time results in Time Division Multiple Access (TDMA). In Code Division Multiple Access (CDMA), every communicator will be allocated the entire spectrum all of the time. CDMA uses codes to identify connections.

Multiple Access Schemes


CODING

CDMA uses unique spreading codes to spread the baseband data before transmission. The spread signal is transmitted in a wide channel, at a level that can be below the noise. The receiver then uses a correlator to despread the wanted signal, which is passed through a narrow bandpass filter. Unwanted signals will not be despread and will not pass through the filter. Codes take the form of a carefully designed one/zero sequence produced at a much higher rate than that of the baseband data. The rate of a spreading code is referred to as the chip rate rather than the bit rate.
See coding process page for more details.
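A minimal numerical sketch of the spread/despread idea: data bits are multiplied by a high-rate ±1 code, and the receiver recovers them by correlating with the same code. The code sequences here are invented for illustration and are far shorter than real spreading codes.

import numpy as np

# Illustrative +/-1 spreading code (spreading factor 8) and +/-1 data bits.
code = np.array([+1, -1, +1, +1, -1, +1, -1, -1])
data_bits = np.array([+1, -1, +1])

# Spreading: each data bit is multiplied by the whole code (chip rate = 8 x bit rate here).
chips = np.concatenate([bit * code for bit in data_bits])

# Despreading: correlate each code-length block of chips with the same code.
recovered = [int(np.sign(np.dot(chips[i:i + len(code)], code)))
             for i in range(0, len(chips), len(code))]
print("recovered bits:", recovered)                    # [1, -1, 1]

# A receiver using a different code sees very little correlation:
other_code = np.array([+1, +1, -1, +1, +1, -1, -1, +1])
print("wrong-code correlation:", int(np.dot(chips[:8], other_code)))   # near zero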


CDMA spreading


CODES

CDMA codes are not required to provide call security, but create a uniqueness to enable call identification. Codes should not correlate with other codes or with time shifted versions of themselves. Spreading codes are noise-like pseudo-random codes, channel codes are designed for maximum separation from each other, and cell identification codes are balanced so as not to correlate with other codes or with shifted versions of themselves.
See codes page for more details.

Example OVSF codes, used in channel coding


THE SPREADING PROCESS

WCDMA uses Direct Sequence spreading, where the spreading process is performed by directly combining the baseband information with a high chip rate binary code. The Spreading Factor is the ratio of the chip rate (3.84 Mchips/s in UMTS) to the baseband information rate. Spreading factors vary from 4 to 512 in FDD UMTS. The spreading process gain can be expressed in dB (spreading factor 128 = 21 dB gain).
See spreading page for more details.
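The quoted processing-gain figure follows from 10·log10(spreading factor); the small sketch below reproduces it for a few UMTS spreading factors.

import math

# Processing gain in dB = 10 * log10(spreading factor).
for sf in (4, 64, 128, 256, 512):
    print("SF", sf, "->", round(10 * math.log10(sf), 1), "dB")   # SF 128 -> ~21.1 dB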

CDMA spreading



POWER CONTROL

CDMA is an interference limited multiple access system. Because all users transmit on the same frequency, internal interference generated by the system is the most significant factor in determining system capacity and call quality. The transmit power for each user must be reduced to limit interference; however, the power should be enough to maintain the required Eb/No (signal to noise ratio) for a satisfactory call quality. Maximum capacity is achieved when the Eb/No of every user is at the minimum level needed for acceptable channel performance. As the MS moves around, the RF environment continuously changes due to fast and slow fading, external interference, shadowing, and other factors. The aim of dynamic power control is to limit the transmitted power on both links while maintaining link quality under all conditions. Additional advantages are longer mobile battery life and a longer life span for BTS power amplifiers.
See UMTS power control page for more details.


HANDOVER

Handover occurs when a call has to be passed from one cell to another as the user moves between cells. In a traditional "hard" handover, the connection to the current cell is broken, and then the connection to the new cell is made. This is known as a "break-before-make" handover. Since all cells in CDMA use the same frequency, it is possible to make the connection to the new cell before leaving the current cell. This is known as a "make-before-break" or "soft" handover. Soft handovers require less power, which reduces interference and increases capacity. A mobile can be connected to two or more BTSs during the handover. "Softer" handover is a special case of soft handover where the radio links that are added and removed belong to the same Node B.
See Handover page for more details.

CDMA soft handover

MULTIPATH AND RAKE RECEIVERS

One of the main advantages of CDMA systems is the capability of using signals that arrive in the receivers with different time delays. This phenomenon is called multipath. FDMA and TDMA, which are narrow band systems, cannot discriminate between the multipath arrivals, and resort to equalization to mitigate the negative effects of multipath. Due to its wide bandwidth and rake receivers, CDMA uses the multipath signals and combines them to make an even stronger signal at the receivers. CDMA subscriber units use rake receivers. This is essentially a set of several receivers. One of the receivers (fingers) constantly searches for different multipaths and feeds the information to the other three fingers. Each finger then demodulates the signal corresponding to a strong multipath. The results are then combined together to make the signal stronger.
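A very rough sketch of the combining idea only (not a full rake receiver): delayed copies of the same symbol arrive with different strengths, and weighting each finger by its estimated path gain before summing (maximal-ratio combining) gives a better decision variable than any single path. The path gains and noise level here are invented.

import numpy as np

# Invented example: one transmitted symbol (+1) seen on three rake fingers
# with different path amplitudes, each corrupted by noise.
rng = np.random.default_rng(0)
symbol = 1.0
path_gains = np.array([1.0, 0.6, 0.3])                       # illustrative finger amplitudes
fingers = path_gains * symbol + rng.normal(0, 0.4, 3)        # what each finger demodulates

# Maximal-ratio combining: weight each finger by its gain, sum, then normalise.
combined = np.dot(path_gains, fingers) / np.sum(path_gains ** 2)

print("individual fingers:", np.round(fingers, 2))
print("combined estimate :", round(float(combined), 2))      # closer to +1 on average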

UMTS Overview


3G Systems

3G systems are intended to provide global mobility with a wide range of services including telephony, paging, messaging, Internet and broadband data. The International Telecommunication Union (ITU) started the process of defining the standard for third generation systems, referred to as International Mobile Telecommunications 2000 (IMT-2000). In Europe the European Telecommunications Standards Institute (ETSI) was responsible for the UMTS standardisation process. In 1998 the Third Generation Partnership Project (3GPP) was formed to continue the technical specification work. 3GPP has five main UMTS standardisation areas: Radio Access Network, Core Network, Terminals, Services and System Aspects, and GERAN.

3GPP Radio Access group is responsible for:




  • Radio Layer 1, 2 and 3 RR specification
  • Iub, Iur and Iu Interfaces
  • UTRAN Operation and Maintenance requirements
  • BTS radio performance specification
  • Conformance test specification for testing of radio aspects of base stations
  • Specifications for radio performance aspects from the system point of view

    3GPP Core Network group is responsible for:

  • Mobility management, call connection control signalling between the user equipment and the core network.
  • Core network signalling between the core network nodes.
  • Definition of interworking functions between the core network and external networks.
  • Packet related issues.
  • Core network aspects of the Iu interface and Operation and Maintenance requirements

    The 3GPP Terminal group is responsible for:

  • Service capability protocols
  • Messaging
  • Services end-to-end interworking
  • USIM to Mobile Terminal interface
  • Model/framework for terminal interfaces and services (application) execution
  • Conformance test specifications of terminals, including radio aspects

    The 3GPP Services and System Aspects group is responsible for:

  • Definition of services and feature requirements.
  • Development of service capabilities and service architecture for cellular, fixed and cordless applications.
  • Charging and Accounting
  • Network Management and Security Aspects
  • Definition, evolution, and maintenance of overall architecture.


    The Third Generation Partnership Project 2 (3GPP2) was formed for the technical development of cdma2000 technology, which is a member of the IMT-2000 family.

    In February 1992 the World Radio Conference allocated frequencies for UMTS use: the bands 1885 - 2025 MHz and 2110 - 2200 MHz were identified for IMT-2000 use. See the UMTS Frequency page for more details. All 3G standards are still under constant development. In 1999 the ETSI standardisation of UMTS Phase 1 (Release '99, version 3) was finished, and the next release was due in December 2001. The UMTS History page has a list of all major 3G and UMTS milestones. Most European countries and some countries around the world have already issued UMTS licenses, either by beauty contest or by auction.


    UMTS Services

    UMTS offers teleservices (like speech or SMS) and bearer services, which provide the capability for information transfer between access points. It is possible to negotiate and renegotiate the characteristics of a bearer service at session or connection establishment and during an ongoing session or connection. Both connection-oriented and connectionless services are offered for point-to-point and point-to-multipoint communication.

    Bearer services have different QoS parameters for maximum transfer delay, delay variation and bit error rate. Offered data rate targets are:

  • 144 kbits/s satellite and rural outdoor
  • 384 kbits/s urban outdoor
  • 2048 kbits/s indoor and low range outdoor

    UMTS network services have different QoS classes for four types of traffic:

  • Conversational class (voice, video telephony, video gaming)
  • Streaming class (multimedia, video on demand, webcast)
  • Interactive class (web browsing, network gaming, database access)
  • Background class (email, SMS, downloading)

    UMTS will also have a Virtual Home Environment (VHE). It is a concept for personal service environment portability across network boundaries and between terminals. Personal service environment means that users are consistently presented with the same personalised features, user interface customisation and services, whatever the network or terminal and wherever the user may be located. UMTS also has improved network security and location-based services.


    UMTS Architecture

    A UMTS network consists of three interacting domains: Core Network (CN), UMTS Terrestrial Radio Access Network (UTRAN) and User Equipment (UE). The main function of the core network is to provide switching, routing and transit for user traffic. The core network also contains the databases and network management functions.

    The basic Core Network architecture for UMTS is based on the GSM network with GPRS. All equipment has to be modified for UMTS operation and services. The UTRAN provides the air interface access method for the User Equipment. The base station is referred to as Node-B, and the control equipment for Node-Bs is called the Radio Network Controller (RNC). The UMTS system page has an example of how a UMTS network could be built.

    The network needs to know the approximate location of the user equipment in order to be able to page it. The system areas, from largest to smallest, are:

  • UMTS systems (including satellite)
  • Public Land Mobile Network (PLMN)
  • MSC/VLR or SGSN
  • Location Area
  • Routing Area (PS domain)
  • UTRAN Registration Area (PS domain)
  • Cell
  • Sub cell


    Core Network

    The Core Network is divided into circuit switched and packet switched domains. Some of the circuit switched elements are the Mobile services Switching Centre (MSC), Visitor Location Register (VLR) and Gateway MSC. Packet switched elements are the Serving GPRS Support Node (SGSN) and Gateway GPRS Support Node (GGSN). Some network elements, like the EIR, HLR, VLR and AuC, are shared by both domains.

    Asynchronous Transfer Mode (ATM) is defined for UMTS core transmission. ATM Adaptation Layer type 2 (AAL2) handles circuit switched connections, and the packet connection protocol AAL5 is designed for data delivery.

    The architecture of the Core Network may change when new services and features are introduced. A Number Portability Database (NPDB) will be used to enable users to change networks while keeping their old phone number. A Gateway Location Register (GLR) may be used to optimise subscriber handling between network boundaries. The MSC, VLR and SGSN can merge to become a UMTS MSC.


    Radio Access

    Wideband CDMA technology was selected for the UTRAN air interface. UMTS WCDMA is a Direct Sequence CDMA system where user data is multiplied by quasi-random bits derived from WCDMA spreading codes. In UMTS, in addition to channelisation, codes are used for synchronisation and scrambling. WCDMA has two basic modes of operation: Frequency Division Duplex (FDD) and Time Division Duplex (TDD). UTRAN interfaces are shown on the UMTS Network page.
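A toy example of the direct-sequence principle described above: each user symbol is multiplied by a chip sequence (spreading) and recovered by correlating with the same sequence (despreading). The 8-chip code here is purely illustrative; real WCDMA channelisation uses OVSF codes together with scrambling codes.

```python
# Toy direct-sequence spreading/despreading sketch (spreading factor 8).
import numpy as np

spreading_code = np.array([1, 1, -1, -1, 1, -1, 1, -1])   # illustrative code
data_bits = np.array([1, -1, 1])                           # user symbols (+1/-1)

# Spreading: each data symbol is multiplied by the whole chip sequence.
tx_chips = np.concatenate([b * spreading_code for b in data_bits])

# Despreading: correlate each chip block with the same code and take the sign.
sf = len(spreading_code)
rx_symbols = [int(np.sign(np.dot(tx_chips[i * sf:(i + 1) * sf], spreading_code)))
              for i in range(len(data_bits))]
print(rx_symbols)   # [1, -1, 1] -> original data recovered
```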

    The functions of Node-B are:

  • Air interface Transmission / Reception
  • Modulation / Demodulation
  • CDMA Physical Channel coding
  • Micro Diversity
  • Error Handling
  • Closed loop power control

    The functions of RNC are:

  • Radio Resource Control
  • Admission Control
  • Channel Allocation
  • Power Control Settings
  • Handover Control
  • Macro Diversity
  • Ciphering
  • Segmentation / Reassembly
  • Broadcast Signalling
  • Open Loop Power Control


    User Equipment

    The UMTS standard does not restrict the functionality of the User Equipment in any way. Terminals work as the air interface counterpart of the Node-B and have many different types of identities. Most of these UMTS identity types are taken directly from the GSM specifications.

  • International Mobile Subscriber Identity (IMSI)
  • Temporary Mobile Subscriber Identity (TMSI)
  • Packet Temporary Mobile Subscriber Identity (P-TMSI)
  • Temporary Logical Link Identity (TLLI)
  • Mobile station ISDN (MSISDN)
  • International Mobile Station Equipment Identity (IMEI)
  • International Mobile Station Equipment Identity and Software Number (IMEISV)

    A UMTS mobile station can operate in one of three modes:

  • PS/CS mode of operation: The MS is attached to both the PS domain and CS domain, and the MS is capable of simultaneously operating PS services and CS services.
  • PS mode of operation: The MS is attached to the PS domain only and may only operate services of the PS domain. However, this does not prevent CS-like services (such as VoIP) from being offered over the PS domain.
  • CS mode of operation: The MS is attached to the CS domain only and may only operate services of the CS domain.

    The UMTS IC card has the same physical characteristics as a GSM SIM card. It has several functions:

  • Support of one User Service Identity Module (USIM) application (optionally more than one)
  • Support of one or more user profiles on the USIM
  • Update USIM specific information over the air
  • Security functions
  • User authentication
  • Optional inclusion of payment methods
  • Optional secure downloading of new applications

    Wednesday, March 3, 2010

    Cable Modems

    Definition
     
    Cable modems are devices that allow high-speed access to the Internet via a cable television network. While similar in some respects to a traditional analog modem, a cable modem is significantly more powerful, capable of delivering data approximately 500 times faster.

    Overview 

    This tutorial explores the high-speed access capability of cable modem technology in detail, with emphasis on mode of operation, network architecture, alternative technologies, and security issues.
    How Cable Modems Work
    Current Internet access via a 28.8-, 33.6-, or 56-kbps modem is referred to as voiceband modem technology. Like voiceband modems, cable modems modulate and demodulate data signals. However, cable modems incorporate more functionality suitable for today's high-speed Internet services. In a cable network, data from the network to the user is referred to as downstream, whereas data from the user to the network is referred to as upstream. From a user perspective, a cable modem is a 64/256 QAM RF receiver capable of delivering up to 30 to 40 Mbps of data in one 6-MHz cable channel, approximately 500 times faster than a 56-kbps modem.
    Data from a user to the network is sent in a flexible and programmable system under control of the headend. The data is modulated using a QPSK/16 QAM transmitter with data rates from 320 kbps up to 10 Mbps. The upstream and downstream data rates may be flexibly configured using cable modems to match subscriber needs. For instance, a business service can be programmed to both receive and transmit higher bandwidth. A residential user, however, may be configured to receive higher bandwidth access to the Internet while being limited to low-bandwidth transmission to the network.
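The downstream figures quoted above follow from simple arithmetic: the raw bit rate is the symbol rate times the number of bits per QAM symbol. The ~5 Msym/s symbol rates assumed below for a 6-MHz channel are approximations used only to reproduce the order of magnitude.

```python
# Back-of-the-envelope check of the downstream rates quoted above. The symbol
# rates assumed for a 6-MHz channel are approximate, illustrative values.
import math

def raw_rate_mbps(symbol_rate_msps, qam_order):
    """Raw (pre-overhead) bit rate = symbol rate x bits per symbol."""
    return symbol_rate_msps * math.log2(qam_order)

rate_64qam = raw_rate_mbps(5.0, 64)      # ~30 Mbps
rate_256qam = raw_rate_mbps(5.4, 256)    # ~43 Mbps before FEC overhead
print(f"64-QAM : ~{rate_64qam:.0f} Mbps")
print(f"256-QAM: ~{rate_256qam:.0f} Mbps")
print(f"vs. a 56-kbps modem: ~{rate_64qam * 1000 / 56:.0f}x faster")
```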
    A subscriber can continue to receive cable television service while simultaneously receiving data on a cable modem for delivery to a personal computer (PC), with the help of a simple one-to-two splitter (see Figure 1). The data service offered by a cable modem may be shared by up to sixteen users in a local-area network (LAN) configuration.


    Figure 1. Cable Modem at the Subscriber Location
    Because some cable networks are one-way plants suited only for broadcast television services, cable modems may use either a standard telephone line or a QPSK/16 QAM modem over a two-way cable system to transmit data upstream from a user location to the network. When a telephone line is used in conjunction with a one-way broadcast network, the cable data system is referred to as a telephony return interface (TRI) system. In this mode, a satellite or wireless cable television network can also function as a data network.
    At the cable headend, data from individual users is filtered by upstream demodulators (or telephone-return systems, as appropriate) for further processing by a cable modem termination system (CMTS). A CMTS is a data switching system specifically designed to route data from many cable modem users over a multiplexed network interface. Likewise, a CMTS receives data from the Internet and provides data switching necessary to route data to the cable modem users. Data from the network to a user group is sent to a 64/256 QAM modulator. The result is user data modulated into one 6-MHz channel, which is the spectrum allocated for a cable television channel such as ABC, NBC, or TBS for broadcast to all users (see Figure 2).


    Figure 2. Cable Modem Termination System and Cable Headend Transmission
    A cable headend combines the downstream data channels with the video, pay-per-view, audio, and local advertiser programs that are received by television subscribers. The combined signal is then transmitted throughout the cable distribution network. At the user location, the television signal is received by a set-top box, while user data is separately received by a cable modem box and sent to a PC.
    A CMTS is an important new element for support of data services that integrates upstream and downstream communication over a cable data network. The number of upstream and downstream channels in a given CMTS can be engineered based on serving area, number of users, data rates offered to each user, and available spectrum.
    Another important element in the operations and day-to-day management of a cable data system is an element management system (EMS). An EMS is an operations system designed specifically to configure and manage a CMTS and associated cable modem subscribers. The operations tasks include provisioning, day-to-day administration, monitoring, alarms, and testing of various components of a CMTS. From a central network operations center (NOC), a single EMS can support many CMTS systems in the geographic region.




    Figure 3. Operations and Management of Cable Data Systems

    Cable Data System Features
    Beyond modulation and demodulation, a cable modem incorporates many features necessary to extend broadband communications to wide-area networks (WANs). The network layer is chosen as Internet protocol (IP) to support the Internet and World Wide Web services. The data link layer is comprised of three sublayers: logical link control sublayer, link security sublayer conforming to the security requirements, and media access control (MAC) sublayer suitable for cable system operations. Current cable modem systems use Ethernet frame format for data transmission over upstream and downstream data channels. Each of the downstream data channels and the associated upstream data channels on a cable network form an extended Ethernet WAN. As the number of subscribers increases, a cable operator can add more upstream and downstream data channels to support demand for additional bandwidth in the cable data network. From this perspective, growth of new cable data networks can be managed in much the same fashion as the growth of Ethernet LANs within a corporate environment.
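Since the downstream and upstream channels carry standard Ethernet frames, the payload format is the familiar 14-byte Ethernet header followed by the network-layer packet. The short sketch below builds such a frame by hand; the MAC addresses, EtherType and payload are placeholders, and the additional DOCSIS MAC framing that wraps this frame on the cable plant is not shown.

```python
# Hedged sketch: building a minimal Ethernet II frame of the kind carried over
# the cable data channels. Addresses and payload are placeholders.
import struct

def ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Return dst MAC + src MAC + EtherType + payload as raw bytes."""
    def mac(addr):
        return bytes(int(octet, 16) for octet in addr.split(":"))
    return mac(dst_mac) + mac(src_mac) + struct.pack("!H", ethertype) + payload

frame = ethernet_frame("66:77:88:99:aa:bb", "00:11:22:33:44:55",
                       0x0800,               # EtherType 0x0800 = IPv4
                       b"...IP packet...")
print(len(frame), "bytes on the wire (14-byte header + payload)")
```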
    The link security sublayer requirements are further defined in three sets of requirements: baseline privacy interface (BPI), security system interface (SSI), and removable security module interface (RSMI). BPI provides cable modem users with data privacy across the cable network by encrypting data traffic between the user's cable modem and CMTS. The operational support provided by the EMS allows a CMTS to map a cable modem identity to paying subscribers and thereby authorize subscriber access to data network services. Thus, the privacy and security requirements protect user data as well as prevent theft of cable data services.
    Early discussions in the Institute of Electrical and Electronics Engineers (IEEE) 802.14 Committee referred to the use of asynchronous transfer mode (ATM) over cable data networks to facilitate multiple services including telephone, data, and video, all of which are supported over cable modems. Although current cable modem standards incorporate Ethernet over cable modem, extensions are provided in the standards for future support of ATM or other protocol data units. IP-telephony support over cable data networks is expected to be a new value-added service in the near term.
    Cable Data Network Architecture
    Cable data network architecture is similar to that of an office LAN. A CMTS provides an extended Ethernet network over a WAN with a geographic reach up to 100 miles. The cable data network may be fully managed by the local cable operations unit. Alternatively, all operations may be aggregated at a regional data center to realize economies of scale. A given geographic or metropolitan region may have a few cable television headend locations that are connected together by fiber links. The day-to-day operations and management of a cable data network may be consolidated at a single location, such as a super hub, while other headend locations may be economically managed as basic hubs (see Figure 4).


    Figure 4. Basic Distribution Hub
    A basic distribution hub is a minimal data network configuration that exists within a cable television headend. A typical headend is equipped with satellite receivers, fiber connections to other regional headend locations, and upstream RF receivers for pay-per-view and data services. The minimal data network configuration includes a CMTS system capable of upstream and downstream data transport and an IP router to connect to the super hub location (see Figure 5).


    Figure 5. Super Hub
    A super hub is a cable headend location with additional temperature-controlled facilities to house a variety of computer servers, which are necessary to run cable data networks. The servers include file transfer, user authorization and accounting, log control (syslog), IP address assignment and administration (DHCP servers), DNS servers, and data over cable service interface specifications (DOCSIS) control servers. In addition, a super hub may deploy operations support and network management systems necessary for the television as well as data network operations.
    User data from basic and super hub locations is received at a regional data center for further aggregation and distribution throughout the network (see Figure 6). A super hub supports dynamic host configuration protocol (DHCP), DNS (domain name server), and log control servers necessary for the cable data network administration. A regional data center provides connectivity to the Internet and the World Wide Web and contains the server farms necessary to support Internet services. These servers include e-mail, Web hosting, news, chat, proxy, caching, and streaming media servers.


    Figure 6. Regional Data Center
    In addition to cable data networks, a regional data center may also support dial-up modem services (e.g., 56–kbps service) and business-to-business Internet services. A network of switching, routers, and servers is employed at the regional data center to aggregate dial-up, high-speed, and business Internet services.
    A super hub and a regional data center may be co-located and managed as a single business entity. Alternatively, the super hub may be managed by a cable television service provider (e.g., TCI), while the regional data center is managed as a separate and independent business (e.g., @Home). In some regions, an existing Internet service provider (ISP) may provide regional data center support for many basic and super hub locations managed by independent cable data network providers.
    A regional data center is connected to other regional data centers by a national backbone network. In addition, each regional data center is also connected to the Internet and World Wide Web services. Traffic between a regional network, the Internet, and all other regional networks is aggregated through its regional data center.




    Cable Data Network Standards
    A cable data system comprises many different technologies and standards. For a mass market for cable modems to develop, products from different vendors must be interoperable.
    To accomplish interoperability, the North American cable television operators formed a limited partnership, Multimedia Cable Network System (MCNS), and developed an initial set of cable modem requirements, the Data Over Cable Service Interface Specifications (DOCSIS). MCNS was initially formed by Comcast, Cox, TCI, Time Warner, Continental (now MediaOne), Rogers Cable, and CableLabs. The DOCSIS requirements are now managed by CableLabs. Vendor equipment compliance with the DOCSIS requirements and interoperability tests are administered by a CableLabs certification program.
    For further details see http://www.cablemodem.com.
    Some of the details of cable modem requirements are listed below.

    Physical Layer

    Downstream Data Channel
    At the cable modem physical layer, the downstream data channel is based on North American digital video specifications (i.e., International Telecommunication Union [ITU]-T Recommendation J.83 Annex B) and includes the following features:
    • 64 and 256 QAM
    • 6 MHz–occupied spectrum that coexists with other signals in cable plant
    • concatenation of Reed-Solomon block code and Trellis code supports operation in a higher percentage of the North American cable plants
    • variable-length interleaving supports both latency-sensitive and latency-insensitive data services
    • contiguous serial bit-stream with no implied framing provides complete physical (PHY) and MAC layer decoupling
    Upstream Data Channel
    The upstream data channel is a shared channel featuring the following:
    • QPSK and 16 QAM formats
    • multiple symbol rates
    • data rates from 320 kbps to 10 Mbps
    • flexible and programmable cable modem under control of CMTS
    • frequency agility
    • time-division multiple access
    • support of both fixed-frame and variable-length protocol data units
    • programmable Reed-Solomon block coding
    • programmable preambles

    MAC Layer

    The MAC layer provides the general requirements for many cable modem subscribers to share a single upstream data channel for transmission to the network. These requirements include collision detection and retransmission. The large geographic reach of a cable data network poses special problems because of the difference in transmission delay between users close to the headend and users at a distance from it. To compensate for cable losses and delay as a result of distance, the MAC layer performs ranging, by which each cable modem can assess its time delay in transmitting to the headend. The MAC layer also supports timing and synchronization, bandwidth allocation to cable modems under the control of the CMTS, error detection and recovery, and procedures for registering new cable modems.
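The ranging step can be pictured as each modem measuring its round-trip delay to the headend and advancing its transmissions by that amount, so that bursts from near and far modems arrive aligned at the CMTS. The propagation speed and distances in the sketch are order-of-magnitude assumptions only.

```python
# Illustrative ranging calculation: each modem advances its transmit time by
# its round-trip delay so bursts from near and far modems line up at the CMTS.
# The propagation speed and distances are assumptions for the example.

PROPAGATION_KM_PER_US = 0.2   # roughly 2/3 of the speed of light in coax/fibre

def timing_offset_us(distance_km):
    """Round-trip delay the modem must compensate for (microseconds)."""
    one_way_us = distance_km / PROPAGATION_KM_PER_US
    return 2 * one_way_us

for d in (1, 40, 160):   # a cable data network can reach ~100 miles (~160 km)
    print(f"modem at {d:>3} km -> advance transmissions by {timing_offset_us(d):7.1f} us")
```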

    Privacy

    Privacy of user data is achieved by encrypting link-layer data between cable modems and the CMTS. Cable modems and the CMTS headend controller encrypt the payload data of link-layer frames transmitted on the cable network. A set of security parameters, including keying data, is assigned to a cable modem as a security association (SA). All of the upstream transmissions from a cable modem travel across a single upstream data channel and are received by the CMTS. In the downstream data channel, the CMTS must select the appropriate SA based on the destination address of the target cable modem. Baseline privacy employs the data encryption standard (DES) block cipher for encryption of user data. The encryption can be integrated directly within the MAC hardware and software interface.
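As a rough illustration of what the BPI encryption step does to a frame payload, the sketch below encrypts and decrypts a toy payload with DES in CBC mode using the pycryptodome package (assumed to be installed). The hard-coded key and IV stand in for the keying material that BPI key management would actually negotiate.

```python
# Toy illustration of DES-CBC payload encryption, conceptually similar to BPI.
# Assumes pycryptodome is installed; key/IV handling here is a placeholder and
# is not how BPI key management actually works.
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad, unpad

key = b"8bytekey"          # DES uses a 64-bit key (56 effective bits)
iv = b"initvect"           # 8-byte initialisation vector

payload = b"link-layer frame payload between cable modem and CMTS"
ciphertext = DES.new(key, DES.MODE_CBC, iv).encrypt(pad(payload, DES.block_size))
recovered = unpad(DES.new(key, DES.MODE_CBC, iv).decrypt(ciphertext), DES.block_size)

assert recovered == payload
print(f"{len(payload)} plaintext bytes -> {len(ciphertext)} encrypted bytes")
```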

    Network Layer

    Cable data networks use IP for communication from the cable modem to the network. The Internet Engineering Task Force (IETF) DHCP forms the basis for all IP address assignment and administration in the cable network. A network address translation (NAT) system may be used to allow multiple computers to share a single high-speed cable modem connection.
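A minimal sketch of the NAT idea mentioned above: several machines behind the cable modem share one public address, with the translator keeping a table that maps each private (address, port) pair to a distinct public port. The addresses and port numbers are placeholders.

```python
# Simplified sketch of port-based NAT: several home machines share the single
# public address assigned to the cable modem subscriber.

PUBLIC_IP = "203.0.113.7"
nat_table = {}       # (private ip, private port) -> public port
next_port = 40000

def translate_outbound(private_ip, private_port):
    """Map an outbound connection to the shared public address and a port."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.0.10", 51000))   # ('203.0.113.7', 40000)
print(translate_outbound("192.168.0.11", 51000))   # ('203.0.113.7', 40001)
```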

    Transport Layer

    Cable data networks support both transmission control protocol (TCP) and user datagram protocol (UDP) at the transport layer.

    Application Layer

    All of the Internet-related applications are supported here. These applications include e-mail, ftp, tftp, http, news, chat, and simple network management protocol (SNMP). The use of SNMP provides for management of the CMTS and cable data networks.

    Operations System

    The operations support system interface (OSSI) requirements of DOCSIS specify how a cable data network is managed. To date, the requirements specify an RF MIB. This enables system vendors to develop an EMS to support spectrum management, subscriber management, billing, and other operations.