MLPPP Design Guide


20. Multi-Link PPP Specification
20.1  Revision History
20.2  Introduction
This document describes the multi-link PPP module.
20.2.1  References
IETF RFC 1570 - PPP LCP Extensions
IETF RFC 1661 - The Point-to-Point Protocol
IETF RFC 1990 - The PPP Multilink Protocol (MP)
IETF RFC 2686 - The Multi-Class Extension to Multi-Link PPP
HDLC/PPP Design Documentation, Wintegra, Inc.
20.2.2  Acronyms
Table 20-1: Acronyms

Bundle: A set of PPP links that makes up one "group" of multilink PPP.
Buffer Overflow: A situation where the receive buffer space is not empty and an attempt is made to write over the buffer space with a valid fragment packet.
MCMP: Multiclass MP.
ML-PPP: Same as MP.
MP: Multi-link PPP.
MRU: Maximum Received Unit, in number of octets. In MP, the MRU is the maximum fragmentation size for a PPP link within an MP bundle. It does not contain the PPP header, only the Information Field and Padding octets.
MRRU: Maximum Received Reconstructed Unit, in number of octets; the maximum size of the reconstructed received packet. Typically, the MRRU will be around 1500 bytes for IP packets.
SDP: Self-Describing Padding.
frag_unit: The fragmentation size used by ML-PPP. This can be less than or equal to the negotiated MRU for the individual links.
20.3  Key Features and Specifications
• General:
  • Supports up to 32 PPP T1/TDM links grouped into a total of 32 bundles over a maximum of four WinPaths, with a maximum bandwidth of 64 Mbps for the entire system.
  • Supports up to 64k active IP flows from the IWF module, with QoS enabled over four different traffic priorities.
  • Supports Interworking or host interfaces for packet flow.
  • Supports round-robin fragmentation and transmission across a bundle, maximizing bandwidth usage of the links and minimizing relative delay.
  • Supports sending of filler packets (zero-size packets with sequence number updates) to resynchronize the links during idle periods.
  • Supports different fragment sizes over different links in a bundle.
  • Supports packet reordering across multiple links.
  • Conforms to IETF RFC 1990.
  • Supports multiclass MP, including prefix elision, for a maximum of 16 classes; the number of priorities within the classes is limited to 4.
• Interfaces:
  • Code reuse with other interfaces (PPP/HDLC, IW, PSU) is done as efficiently as possible.
  • Allows future expansion of PPPmux and other PPP extensions to interface directly.
• Other Requirements:
  • ML-PPP must reside and be executed on the same WinPath as the IWF module/PSU in order to release buffers back into the free buffer pool used by IWF.
  • Each individual HDLC/PPP link is configured to have only two traffic flows when connected to an ML-PPP bundle: one for ML-PPP, the other for the host.
20.4  Multi-link PPP Architecture and Interface
The multi-link PPP is an aggregation of multiple PPP links. As each ML-PPP packet arrives, it is inserted into a buffer space, indexed by the sequence number. Each ML-PPP packet contains either a fragment or a complete PPP packet. The ML-PPP does fragmentation and reassembly of PPP packets. This concept is illustrated in the following figure:

Figure 20-1: Packet Flow Diagram of Multi-Link PPP
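For reference, RFC 1990 defines two fragment encapsulation formats, distinguished by the width of the sequence number. The following is a minimal illustrative sketch in C of the two fragment headers that follow the PPP protocol field; the structure and helper names are assumptions made for this example, not the module's actual data layout.

    #include <stdint.h>

    /* Sketch of the RFC 1990 multilink fragment headers (illustrative only).
     * B = beginning-of-packet bit, E = end-of-packet bit.
     */

    /* Long sequence number format: B/E bits, six zero bits, 24-bit sequence number. */
    typedef struct {
        uint8_t flags;        /* B (0x80), E (0x40); remaining bits must be zero */
        uint8_t seq[3];       /* 24-bit sequence number, network byte order      */
        /* fragment data follows */
    } mp_long_hdr_t;

    /* Short sequence number format: B/E bits, two zero bits, 12-bit sequence number. */
    typedef struct {
        uint8_t flags_seq_hi; /* B (0x80), E (0x40), bits 3:0 = seq[11:8] */
        uint8_t seq_lo;       /* seq[7:0]                                 */
        /* fragment data follows */
    } mp_short_hdr_t;

    static inline uint32_t mp_long_seq(const mp_long_hdr_t *h)
    {
        return ((uint32_t)h->seq[0] << 16) | ((uint32_t)h->seq[1] << 8) | h->seq[2];
    }

    static inline uint32_t mp_short_seq(const mp_short_hdr_t *h)
    {
        return ((uint32_t)(h->flags_seq_hi & 0x0f) << 8) | h->seq_lo;
    }

Which format a bundle uses is negotiated at LCP time, and it determines whether the sequence space discussed later in this section is 12 or 24 bits wide.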
The ML-PPP module is capable of running on up to 4 WinPaths supporting up to 32 TDM links. Essentially, only one WinPath runs the main ML-PPP thread that deals with segmentation and reassembly of PPP packets for a given bundle. Each WinPath will have the PPP module (and other required modules such as the PSU module) to service the links that are currently active. The interface module looks at the sequence number of each ML-PPP packet and copies the data into the appropriate slot in memory. Thus, there are no interactions between the links in the interface module, and thread contention is therefore eliminated.
The segmentation part is fairly straightforward. It takes packets from the host or the Interworking module via the PSU functions, fragments them, and enqueues the data to each of the PPP links in a "round robin" way. The fragment sizes per link are predetermined by the host, usually based on the relative link speeds. The fragments should be in groups of 8 bytes to conform with the standards. The discussion of SDP is given later in the document. (Currently SDP is not implemented.)
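As an illustration of this round-robin distribution, the sketch below fragments one outgoing packet and spreads the pieces over the links of a bundle. All names used here (mp_bundle, mp_link, enqueue, frag_unit) are assumptions made for the example, not the module's actual API.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_LINKS_PER_BUNDLE 32

    struct mp_link {
        size_t frag_unit;                     /* per-link fragment size, multiple of 8 */
        void (*enqueue)(const uint8_t *data, size_t len,
                        uint32_t seq, int begin, int end);
    };

    struct mp_bundle {
        struct mp_link links[MAX_LINKS_PER_BUNDLE];
        unsigned       num_links;
        unsigned       next_link;             /* round-robin cursor            */
        uint32_t       tx_seq;                /* next transmit sequence number */
    };

    /* Fragment one PPP packet and spread the fragments over the bundle's links
     * in round-robin order.  The first fragment carries the B bit, the last one
     * the E bit, and every fragment gets a consecutive sequence number, as
     * RFC 1990 requires.
     */
    static void mp_fragment_and_enqueue(struct mp_bundle *b,
                                        const uint8_t *pkt, size_t len)
    {
        size_t off = 0;
        int begin = 1;

        while (off < len) {
            struct mp_link *l = &b->links[b->next_link];
            size_t chunk = l->frag_unit;

            if (chunk > len - off)
                chunk = len - off;            /* last (possibly short) piece */

            l->enqueue(pkt + off, chunk, b->tx_seq++, begin, off + chunk == len);

            begin = 0;
            off += chunk;
            b->next_link = (b->next_link + 1) % b->num_links;
        }
    }

Because each link may have its own frag_unit, a faster link naturally takes a larger share of each packet, which is the behavior described in the next paragraph.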
Each link of the PPP can have a different fragment size. This is used to accommodate differences in the links' transmission rates. Thus, if link #0 is 1.5 Mbps and link #1 is 1 Mbps, then the fragment size for link #0 could be 66 bytes and the fragment size for link #1 should be 44 bytes (2/3 of link #0).

The choice of fragment sizes for the individual links can have a significant impact on the utilization of ML-PPP. For better efficiency, the fragmentation size of each link should be a good reflection of the latency of the link. This helps achieve a balanced data distribution over all the links in the bundle.
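One possible way for the host to derive such rate-proportional fragment sizes is sketched below; this is only an illustration under assumed names (pick_frag_sizes and its parameters), since the host may apply any sizing policy it chooses.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper: choose per-link fragment sizes proportional to the
     * link rates, rounded down to a multiple of 8 bytes and never exceeding
     * the negotiated MRU of that link.
     */
    static void pick_frag_sizes(const uint32_t rate_bps[], const size_t mru[],
                                size_t frag_unit[], unsigned n,
                                size_t base_frag, uint32_t base_rate_bps)
    {
        for (unsigned i = 0; i < n; i++) {
            size_t f = (size_t)((uint64_t)base_frag * rate_bps[i] / base_rate_bps);

            f &= ~(size_t)7;                  /* keep fragments in 8-byte groups */
            if (f > mru[i])
                f = mru[i] & ~(size_t)7;
            if (f < 8)
                f = 8;                        /* never go below one 8-byte unit  */
            frag_unit[i] = f;
        }
    }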
The host is responsible for setting up and tearing down links and for following the protocol procedures described by the RFCs. The DPS is only responsible for delivery and reception of packets. When the host receives an MRRU option as part of its LCP negotiation, the host will determine whether this link belongs to a specific set of bundles. That is, if other options, such as the Endpoint Discriminator or the Peer Name (when the PPP authentication protocol is used), are available, the host will put the link with other links that have the same options. If not, then the link is grouped with a default bundle. The host communicates directly with each HDLC/PPP link as described in the section "Host Interface".
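A hypothetical sketch of that grouping decision on the host side is shown below; the bundle table, key fields, and function name are assumptions for illustration only.

    #include <string.h>

    #define MAX_BUNDLES  32
    #define MAX_ED_LEN   20   /* RFC 1990 endpoint discriminator address is at most 20 octets */
    #define MAX_NAME_LEN 64

    struct bundle_key {
        unsigned char ed[MAX_ED_LEN];          /* endpoint discriminator address   */
        size_t        ed_len;                  /* 0 if the peer sent none          */
        char          peer_name[MAX_NAME_LEN]; /* "" if authentication is not used */
    };

    struct bundle_table {
        struct bundle_key keys[MAX_BUNDLES];
        int               in_use[MAX_BUNDLES];
    };

    /* Return the bundle a newly negotiated link should join: an existing bundle
     * whose endpoint discriminator and peer name both match, a fresh bundle if
     * none matches, or the default bundle (index 0) when the peer supplied no
     * distinguishing options at all.
     */
    static int bundle_for_link(struct bundle_table *t, const struct bundle_key *k)
    {
        if (k->ed_len == 0 && k->peer_name[0] == '\0')
            return 0;                                   /* default bundle           */

        for (int i = 0; i < MAX_BUNDLES; i++) {
            if (!t->in_use[i])
                continue;
            if (t->keys[i].ed_len == k->ed_len &&
                memcmp(t->keys[i].ed, k->ed, k->ed_len) == 0 &&
                strcmp(t->keys[i].peer_name, k->peer_name) == 0)
                return i;                               /* join the existing bundle */
        }

        for (int i = 0; i < MAX_BUNDLES; i++) {         /* allocate a new bundle    */
            if (!t->in_use[i]) {
                t->in_use[i] = 1;
                t->keys[i] = *k;
                return i;
            }
        }
        return 0;                                       /* table full: fall back    */
    }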
At this time, it is expected that the total aggregate bandwidth of all the individual links in the system shall not exceed 64 Mbits/sec.
The ML-PPP to HDLC/PPP data interface contains a single transmit queue per PPP link, holding the fragments destined for that particular link. On the receive side, a buffer space indexed using sequence numbers is allocated on a per-bundle basis. This buffer allows out-of-order packets to be reconstructed despite the relative delay between the links.
The PSU interacts directly with the ML-PPP module and the Interworking module, supporting up to 64k flows. The interface is very similar to that of the PPP/HDLC module; the details are described in the WinPath PSU documentation. This requires that the Interworking, ML-PPP, and PSU modules, as well as the other required modules (except the PPP links on other WinPaths), must execute on the same WinPath.
Whenever possible, the ML-PPP reuses code from the PPP/HDLC modules for interfacing with Interworking/PSU and/or the host.
20.4.1  ML-PPP Receive Buffer Architecture
The receive buffer architecture for multi-link PPP consists of multiple buffers, indexed by a subset of the sequence number. Each bundle requires a separate receive buffer. The figure below displays the architecture of the receive buffer.
Figure 20-2: Receive Buffer Memory Layout
The size of each buffer is fixed, and each must be at least the maximum fragment size of all the PPP links. Associated with each buffer is a field that indicates the size of the packet received.
Each descriptor points to a space allocated by the host, which allows flexible memory allocation. The interface module, together with the PPP module, examines the sequence number of the ML-PPP packet and inserts the packet into the correct buffer shown above.
The sequence number N in the buffer space is a subset of the actual sequence space, which is either 12 bits or 24 bits depending on whether the short sequence number fragment format is used. The sequence space must be a power of 2 to allow easy memory calculation. The implementation should allow a large enough buffer to account for the differential delay and jitter of the transmission, since the links are not synchronous. However, no maximum size would be able to buffer the links if the transmit side violates the generic rules that RFC 1990 describes, nor could any size be enough if a single physical link breaks during an active transmission. The buffer size should therefore be selected based on the transmission rate and the expected normal load of the PPP buffers, so that enough buffers and fragments can accumulate before the thread processes them.
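To make the indexing concrete, the sketch below maps a fragment onto a slot under the power-of-two assumption stated above. The slot count, field names, and fixed data size are illustrative values, not the actual WinPath data structures.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define MP_RX_SLOTS 256u            /* must be a power of 2 (illustrative value) */

    struct mp_rx_slot {
        uint32_t seq;                   /* full 12- or 24-bit sequence number        */
        uint16_t len;                   /* 0 means the slot is empty                 */
        uint8_t  begin, end;            /* B and E bits of the fragment              */
        uint8_t  data[1600];            /* >= maximum fragment size of all the links */
    };

    struct mp_rx_buffer {
        struct mp_rx_slot slots[MP_RX_SLOTS];   /* one buffer space per bundle */
    };

    /* Map a fragment's sequence number onto a slot: the low bits of the sequence
     * number form the slot index, which is what "indexed by a subset of the
     * sequence number" means when the slot count is a power of 2.  Returns 0 on
     * success, or -1 on the buffer-overflow condition from Table 20-1 (the slot
     * is still occupied by an older, undelivered fragment).
     */
    static int mp_rx_insert(struct mp_rx_buffer *rb, uint32_t seq,
                            const uint8_t *frag, size_t len, int begin, int end)
    {
        struct mp_rx_slot *s = &rb->slots[seq & (MP_RX_SLOTS - 1)];

        if (s->len != 0 && s->seq != seq)
            return -1;                  /* overflow: would overwrite a valid fragment */
        if (len > sizeof(s->data))
            return -1;                  /* larger than the negotiated maximum         */

        s->seq   = seq;
        s->len   = (uint16_t)len;
        s->begin = (uint8_t)begin;
        s->end   = (uint8_t)end;
        memcpy(s->data, frag, len);
        return 0;
    }

Reassembly then walks consecutive slots from a fragment with the B bit set to the matching fragment with the E bit set, delivers the reconstructed packet (bounded by the MRRU), and frees the slots.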
