
TCP/Process Communication

In order to send a message, a process sets up its text in a buffer region in its own address space, inserts the requisite control information (described in the following list) in a transmit control block (TCB), and passes control to the TCP. The exact form of a TCB is not specified here, but it might take the form of a passed pointer, a pseudo interrupt, or various other forms. To receive a message in its address space, a process sets up a receive buffer, inserts the requisite control information in a receive control block (RCB), and again passes control to the TCP.

Fig. 1. Conceptual TCB format.

In some simple systems, the buffer space may in fact be provided by the TCP. For simplicity we assume that a ring buffer is used by each process, but other structures (e.g., buffer chaining) are not ruled out. A possible format for the TCB is shown in Fig. 1. The TCB contains information necessary to allow the TCP to extract and send the process data. Some of the information might be implicitly known, but we are not concerned with that level of detail. The various fields in the TCB are described as follows; a sketch of one possible structure appears after the list.

  1. Source Address: This is the full net/HOST/TCP/port address of the transmitter.
  2. Destination Address: This is the full net/HOST/TCP/port of the receiver.
  3. Next Packet Sequence Number: This is the sequence number to be used for the next packet the TCP will transmit from this port.
  4. Current Buffer Size: This is the present size of the process transmit buffer.
  5. Next Write Position: This is the address of the next position in the buffer at which the process can place new data for transmission.
  6. Next Read Position: This is the address at which the TCP should begin reading to build the next segment for output.
  7. End Read Position: This is the address at which the TCP should halt transmission. Initially 6 and 7 bound the message which the process wishes to transmit.
  8. Number of Re-transmissions/Maximum Re-transmissions: These fields enable the TCP to keep track of the number of times it has re-transmitted the data and could be omitted if the TCP is not to give up.
  9. Timeout/Flags: The timeout field specifies the delay after which unacknowledged data should be re-transmitted. The flag field is used for semaphores and other TCP/process synchronization status reporting, etc.
  10. Current Acknowledgment/Window: The current acknowledgment field identifies the first byte of data still unacknowledged by the destination TCP. The window field indicates how much data beyond that point the destination TCP is currently willing to accept.

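As an illustration only (the text deliberately leaves the exact form of the TCB unspecified), the fields above might be collected into a structure along the following lines; the field widths and names are assumptions, not part of the original design.

#include <stdint.h>

/* Hypothetical TCB layout; field names follow the numbered list above. */
struct tcb {
    uint8_t  src_addr[8];          /* 1. full net/HOST/TCP/port of the transmitter */
    uint8_t  dst_addr[8];          /* 2. full net/HOST/TCP/port of the receiver    */
    uint32_t next_seq;             /* 3. sequence number for the next packet       */
    uint32_t buf_size;             /* 4. current size of the transmit buffer       */
    uint32_t next_write;           /* 5. where the process writes new data         */
    uint32_t next_read;            /* 6. where the TCP begins reading              */
    uint32_t end_read;             /* 7. where the TCP halts transmission          */
    uint16_t retransmissions;      /* 8. re-transmissions performed so far         */
    uint16_t max_retransmissions;  /* 8. give-up threshold (may be omitted)        */
    uint32_t timeout;              /* 9. delay before re-transmitting              */
    uint32_t flags;                /* 9. semaphores / synchronization status       */
    uint32_t current_ack;          /* 10. first byte still unacknowledged          */
    uint32_t window;               /* 10. data the destination will accept         */
};
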
The read and write positions move circularly around the transmit buffer, with the write position always to the left (modulo the buffer size) of the read position. The next packet sequence number should be constrained to be less than or equal to the sum of the current acknowledgment and the window fields. In any event, the next sequence number should not exceed the sum of the current acknowledgment and half of the maximum possible sequence number (to avoid confusing the receiver’s duplicate detection algorithm). A possible buffer layout is shown in Fig. 2.


Fig. 2. Transmit buffer layout.
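
A minimal sketch of the sequence-number constraint described above, assuming 32-bit sequence arithmetic (the width of the sequence space is not specified in the text):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper: may the TCP assign sequence number next_seq to a new
 * packet?  next_seq must not run past current_ack + window, nor past
 * current_ack + half the sequence space, so that the receiver's duplicate
 * detection is not confused.  All arithmetic wraps modulo 2^32. */
static bool may_send(uint32_t next_seq, uint32_t current_ack, uint32_t window)
{
    uint32_t in_flight  = next_seq - current_ack;   /* distance, modulo 2^32 */
    uint32_t half_space = UINT32_MAX / 2u;
    return in_flight <= window && in_flight <= half_space;
}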

The RCB is substantially the same, except that the end read field is replaced by a partial segment check-sum register which permits the receiving TCP to compute and remember partial check sums in the event that a segment arrives in several packets. When the final packet of the segment arrives, the TCP can verify the check sum and if successful, acknowledge the segment.
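
A sketch of how such a partial check-sum register might be maintained as packets of a segment arrive; the specific algorithm (a 16-bit one's-complement sum) and the function names are assumptions for illustration, not taken from the text.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical partial check-sum register for the RCB: the accumulator is
 * updated for each arriving packet of a segment. */
static uint32_t csum_accumulate(uint32_t partial, const uint8_t *data, size_t len)
{
    size_t i;
    for (i = 0; i + 1 < len; i += 2)
        partial += (uint32_t)data[i] << 8 | data[i + 1];
    if (i < len)                          /* odd trailing byte */
        partial += (uint32_t)data[i] << 8;
    return partial;
}

/* Fold the carries and complement once the final packet of the segment
 * has arrived, yielding the value to verify against the transmitted sum. */
static uint16_t csum_finish(uint32_t partial)
{
    while (partial >> 16)
        partial = (partial & 0xFFFF) + (partial >> 16);
    return (uint16_t)~partial;
}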


The Multiplexing Transport Protocol Suite

The two transport protocols most commonly used in the Internet are TCP, which offers a reliable stream, and UDP, which offers a connectionless datagram service. We do not offer a connectionless protocol, because the mechanisms of a rate-based protocol need a longer-lived connection to work, as they rely on feedback from the receiver. The interarrival time of packets is measured at the receiver and is crucial for estimating the available bandwidth and for discriminating between congestion and transmission losses. On the other hand, a multiplexing unreliable protocol that offers congestion control can serve as the basis for other protocols.

The regularity of a rate-based protocol lends itself naturally to multimedia applications. Sound and video need bounds on arrival time so that playback can proceed smoothly, so a multimedia protocol is the natural offshoot. Most multimedia applications need timely data: data received after its playback time is useless. Moreover, for a system with bandwidth constraints, late data also degrades the quality of playback, because it robs bandwidth from the flow. There are many strategies for dealing with losses, from forgiving applications to forward error correction (FEC) schemes. Retransmissions are rarely used, because they take the place of new data, and the time needed to send a request and receive the retransmission may exceed the timing constraints.
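
A minimal sketch of the receiver-side interarrival-time measurement mentioned above; the microsecond clock, the smoothing factor, and the names are assumptions for illustration, not the protocol's actual estimator.

#include <stdint.h>

/* Hypothetical receiver-side estimator: each arriving packet is timestamped,
 * and the instantaneous rate size/gap is smoothed with an exponential
 * moving average. */
struct rate_estimator {
    uint64_t last_arrival_us;   /* arrival time of the previous packet */
    double   rate_bps;          /* smoothed estimate, bits per second  */
};

static void on_packet(struct rate_estimator *e, uint64_t now_us, uint32_t bytes)
{
    if (e->last_arrival_us != 0 && now_us > e->last_arrival_us) {
        double gap_s  = (now_us - e->last_arrival_us) / 1e6;
        double sample = (bytes * 8.0) / gap_s;       /* instantaneous rate */
        e->rate_bps   = (e->rate_bps == 0.0)
                      ? sample
                      : 0.875 * e->rate_bps + 0.125 * sample;
    }
    e->last_arrival_us = now_us;
}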

When multiple channels are available, and the aggregated bandwidth is greater than the bandwidth necessary to transmit the multimedia stream, retransmissions can be done successfully without harming the quality of playback. The simultaneous use of multiple link layers generates extra bandwidth. The best-case scenario is the coupling of a low bandwidth, low delay interface with a high bandwidth, high delay interface. The high bandwidth interface allows for a good quality stream, while the low delay interface makes retransmissions possible by creating a good feedback channel to request (and transmit) lost frames.
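
One way such a split might look in code; the channel identifiers, the frame flag, and the dispatch function are hypothetical, chosen only to illustrate the idea of sending retransmissions over the low-delay interface.

#include <stdbool.h>

/* Hypothetical per-frame dispatch: stream data goes out on the
 * high-bandwidth interface, while retransmission requests and
 * retransmitted frames use the low-delay interface. */
enum channel { CH_HIGH_BW, CH_LOW_DELAY };

struct frame {
    bool is_retransmission;
    /* payload omitted for brevity */
};

static enum channel pick_channel(const struct frame *f)
{
    return f->is_retransmission ? CH_LOW_DELAY : CH_HIGH_BW;
}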

When the aggregated bandwidth is not enough to transmit packets at the rate required by the application, packets have to be dropped or the application has to change the characteristics of its stream. Adapting applications can change the quality of the stream on the fly to deal with bandwidth variations, but for non-adapting applications, the best policy is to drop packets at the sender. Sending packets that will arrive late will cause further problems by making other packets late, which can have a snowball effect.
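
A sketch of a sender-side drop check under stated assumptions (a one-way delay estimate, a drain-rate estimate, and millisecond timestamps, none of which come from the text):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical drop policy for a non-adapting stream: if a frame cannot
 * reach the receiver before its playback deadline, drop it at the sender
 * rather than let it delay the frames behind it. */
static bool should_drop(uint64_t now_ms, uint64_t deadline_ms,
                        uint64_t queued_bytes, double rate_bytes_per_ms,
                        uint64_t one_way_delay_ms)
{
    if (rate_bytes_per_ms <= 0.0)
        return false;                 /* no rate estimate yet: keep the frame */
    /* Earliest possible arrival: drain the queue, then cross the path. */
    uint64_t eta_ms = now_ms
                    + (uint64_t)(queued_bytes / rate_bytes_per_ms)
                    + one_way_delay_ms;
    return eta_ms > deadline_ms;
}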

In contrast to a multimedia protocol, a reliable protocol has to deliver intact every packet that the application sent. In this case, time is not the most important factor. Lost or damaged frames have to be retransmitted until they are successfully received. If the application expects the data to be received in the same order it was sent, the protocol has to buffer packets received after a loss until the retransmission of the lost packet arrives. Using the channel abstraction to multiplex the data increases the occurrence of out-of-order delivery, increasing the burden on the receiving end.
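
A minimal sketch of such a reordering buffer, assuming a fixed 64-packet window and 32-bit sequence numbers (both assumptions of this sketch):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical in-order delivery buffer: packets arriving after a loss are
 * held until the retransmitted packet fills the hole. */
#define WINDOW 64

struct reorder_buf {
    uint32_t next_expected;       /* next sequence number owed to the app */
    bool     present[WINDOW];     /* slot i holds next_expected + i?      */
    /* payload storage omitted for brevity */
};

/* Returns how many packets became deliverable after this arrival. */
static unsigned on_arrival(struct reorder_buf *b, uint32_t seq)
{
    uint32_t offset = seq - b->next_expected;        /* modulo 2^32 */
    if (offset >= WINDOW)
        return 0;                       /* duplicate, old, or too far ahead */
    b->present[offset] = true;

    unsigned delivered = 0;
    while (b->present[0]) {             /* deliver the contiguous head */
        memmove(&b->present[0], &b->present[1], (WINDOW - 1) * sizeof(bool));
        b->present[WINDOW - 1] = false;
        b->next_expected++;
        delivered++;
    }
    return delivered;
}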



Realization of TRIP for NGGK

To test system performance, an NGGK (Next Generation Gatekeeper) supporting TRIP (Telephony Routing over IP) was realized based on the VOVIDA TRIP protocol stack 1.0.0, an open-source stack. All four TRIP messages above are implemented, along with the NGGK functionality described below.

During the realization, we found that a pair of NGGKs may initiate transport connections to each other at the same time. In that case, the two NGGKs can struggle with connection collisions for a long time without success. To resolve this, the connection initiated by the NGGK with the lower IP address should be closed automatically after the two addresses are compared.
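
The comparison reduces to something like the following; treating addresses as host-order 32-bit IPv4 values and the function name are assumptions of this sketch, not part of the stack.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical collision rule: when two NGGKs open transport connections to
 * each other simultaneously, the side with the lower IP address closes the
 * connection it initiated and keeps the one opened by its peer. */
static bool should_close_local_connection(uint32_t local_ip, uint32_t peer_ip)
{
    return local_ip < peer_ip;    /* the lower-address side backs off */
}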

In the testing system shown in Fig. 1, six modules implement TRIP: TRIP Level Management, TRIB, FSM, Policy Center, Message Center, and System Service. TRIP Level Management couples to the TCP transport layer to receive network events and accepts initial system parameters, such as timers and policies, from real-time applications. TRIB comprises four types of databases that store the corresponding routing information. FSM is the core module; it provides the basic framework of TRIP operation and has six states: Idle, Connect, Active, OpenSent, OpenConfirm, and Established. Policy Center is responsible for policy-driven message handling within an NGGK; it drives the FSM and updates the TRIB in a designated way. Message Center accepts messages from TRIP Level Management, parses them to detect any errors, and generates responses. The System Service module invokes the APIs of the VOVIDA TRIP stack and provides basic API services to all other modules.
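
The six state names come from the text; the events and transitions in the fragment below are an illustrative sketch modeled on the usual open/keepalive exchange, not the stack's actual state machine.

/* The six FSM states named above, plus a hypothetical happy-path stepper. */
enum trip_state {
    TRIP_IDLE,
    TRIP_CONNECT,
    TRIP_ACTIVE,
    TRIP_OPENSENT,
    TRIP_OPENCONFIRM,
    TRIP_ESTABLISHED
};

enum trip_event {
    EV_START,
    EV_TRANSPORT_OPEN,
    EV_OPEN_RECEIVED,
    EV_KEEPALIVE_RECEIVED
};

/* Illustrative transitions only; error handling and the Active path are omitted. */
static enum trip_state trip_step(enum trip_state s, enum trip_event ev)
{
    switch (s) {
    case TRIP_IDLE:        return ev == EV_START              ? TRIP_CONNECT     : s;
    case TRIP_CONNECT:     return ev == EV_TRANSPORT_OPEN     ? TRIP_OPENSENT    : s;
    case TRIP_OPENSENT:    return ev == EV_OPEN_RECEIVED      ? TRIP_OPENCONFIRM : s;
    case TRIP_OPENCONFIRM: return ev == EV_KEEPALIVE_RECEIVED ? TRIP_ESTABLISHED : s;
    default:               return s;
    }
}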

Fig. 1. Module architecture of the testing system using TRIP for NGGK routing
