LAN Switch Architecture
This chapter introduces many of the concepts behind LAN switching common to all switch vendors. The chapter begins by looking at how data are received by a switch, followed by the mechanisms used to switch data as efficiently as possible, and concludes with forwarding data toward their destinations. These concepts are not specific to Cisco and are valid when examining the capabilities of any LAN switch.
1. Receiving Data—Switching Modes
The first step in LAN switching is receiving the frame or packet, depending on the capabilities of the switch, from the transmitting device or host. Switches making forwarding decisions only at Layer 2 of the OSI model refer to data as frames, while switches making forwarding decisions at Layer 3 and above refer to data as packets. This chapter's examination of switching begins from a Layer 2 point of view. Depending on the model, varying amounts of each frame are stored and examined before being switched.
Three types of switching modes have been supported on Catalyst switches:
•Cut through
•Fragment free
•Store and forward
The three switching modes differ in how much of the frame is received and examined by the switch before a forwarding decision is made. The next sections describe each mode in detail.
1.1 Cut-Through Mode
Switches operating in cut-through mode receive and examine only the first 6 bytes of a frame. These first 6 bytes represent the destination MAC address of the frame, which is sufficient information to make a forwarding decision. Although cut-through switching offers the lowest latency when transmitting frames, it is susceptible to transmitting fragments created by Ethernet collisions, runts (frames less than 64 bytes), or damaged frames.
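As a minimal sketch of the point above, the following Python snippet shows why 6 bytes are enough for a cut-through decision: the destination MAC occupies exactly the first 6 octets of an Ethernet frame, so the switch can pick an egress port before the rest of the frame has even arrived. The sample address is invented for illustration.

```python
# Hypothetical sketch: a cut-through switch needs only the first 6 bytes
# of an Ethernet frame (the destination MAC) to make a forwarding decision.

def dest_mac(frame: bytes) -> str:
    """Extract the destination MAC address from the first 6 bytes of a frame."""
    if len(frame) < 6:
        raise ValueError("need at least 6 bytes to make a forwarding decision")
    return ":".join(f"{b:02x}" for b in frame[:6])

# Only the first 6 bytes have been received; the rest of the frame is still
# on the wire when a cut-through switch starts forwarding.
header = bytes([0x00, 0x1B, 0x2C, 0x3D, 0x4E, 0x5F])
print(dest_mac(header))  # → 00:1b:2c:3d:4e:5f
```

Because nothing after byte 6 is examined, a collision fragment or a frame with a bad checksum is forwarded just the same, which is the mode's weakness.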
1.2 Fragment-Free Mode
Switches operating in fragment-free mode receive and examine the first 64 bytes of a frame. (Some Cisco Catalyst documentation calls cut-through switching "fast forward" mode; fragment-free is sometimes called modified cut-through.) Why examine 64 bytes? In a properly designed Ethernet network, collision fragments must be detectable in the first 64 bytes.
1.3 Store-and-Forward Mode
Switches operating in store-and-forward mode receive and examine the entire frame, resulting in the most error-free type of switching.
As switches utilizing faster processors and application-specific integrated circuits (ASICs) were introduced, cut-through and fragment-free switching were no longer necessary. As a result, all new Cisco Catalyst switches utilize store-and-forward switching.
Figure 2-1 compares each of the switching modes.
Figure 2-1. Switching Modes
2. Switching Data
Regardless of how many bytes of each frame are examined by the switch, the frame must eventually be switched from the input, or ingress, port to one or more output, or egress, ports. Switch fabric is a general term for the communication channels used by the switch to transport frames, carry forwarding decision information, and relay management information throughout the switch. A comparison can be made between the switch fabric in a Catalyst switch and the transmission in an automobile. In an automobile, the transmission is responsible for relaying power from the engine to the wheels of the car. In a Catalyst switch, the switch fabric is responsible for relaying frames from an ingress port to one or more egress ports. Regardless of model, whenever a new switching platform is introduced, the documentation generally describes the switching fabric, the platform's "transmission."
Although a variety of techniques have been used to implement switching fabrics on Cisco Catalyst platforms, two major switch fabric architectures are common:
•Shared bus
•Crossbar
2.1 Shared Bus Switching
In a shared bus architecture, all line modules in the switch share one data path. A central arbiter determines how and when to grant requests for access to the bus from each line card. The arbiter can use various methods of achieving fairness, depending on the configuration of the switch. A shared bus architecture is much like multiple lines at an airport ticket counter with only one ticketing agent processing customers at any given time.
Figure 2-2 illustrates round-robin servicing of frames as they enter a switch. Round-robin is the simplest method of servicing frames in the order in which they are received. Current Catalyst switching platforms such as the Catalyst 6500 support a variety of quality of service (QoS) features to provide priority service to specified traffic flows.
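Round-robin arbitration can be sketched in a few lines of Python. This is a toy model, not any Catalyst implementation: the central arbiter visits the line cards in a fixed cycle and grants the shared bus to at most one pending frame per visit, so no card can monopolize the bus. The card names and frames are invented.

```python
from collections import deque
from itertools import cycle

# Toy round-robin arbiter for a shared bus: visit line cards in a fixed
# cycle and grant the bus to one pending frame per visit.

def arbitrate(queues, max_visits):
    """queues: dict of line-card name -> deque of pending frames.
    Returns the order in which frames are granted the shared bus."""
    grants = []
    for card in cycle(sorted(queues)):
        if max_visits == 0 or all(not q for q in queues.values()):
            break                       # out of visits, or nothing pending
        if queues[card]:
            grants.append((card, queues[card].popleft()))
        max_visits -= 1
    return grants

pending = {"card1": deque(["f1", "f2"]), "card2": deque(["f3"])}
print(arbitrate(pending, 10))
# → [('card1', 'f1'), ('card2', 'f3'), ('card1', 'f2')]
```

Even though card1 has two frames queued, card2's frame is serviced between them, which is exactly the fairness property the central arbiter provides.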
Figure 2-2. Round-Robin Service Order
The following list and Figure 2-3 illustrate the basic concept of moving frames from the receive port, or ingress, to the transmit port(s), or egress, using a shared bus architecture:
1. Frame received from Host1—The ingress port on the switch receives the entire frame from Host1 and stores it in a receive buffer. The port checks the frame's Frame Check Sequence (FCS) for errors. If the frame is defective (a runt, a fragment, a giant, or an invalid CRC), the port discards the frame and increments the appropriate counter.
2. Requesting access to the data bus—A header containing the information necessary to make a forwarding decision is added to the frame. The line card then requests permission to transmit the frame onto the data bus.
3. Frame transmitted onto the data bus—After the central arbiter grants access, the frame is transmitted onto the data bus.
4. Frame is received by all ports—In a shared bus architecture, every frame transmitted is received by all ports simultaneously. In addition, the frame is received by the hardware necessary to make a forwarding decision.
5. Switch determines which port(s) should transmit the frame—The information added to the frame in step 2 is used to determine which ports should transmit the frame. In some cases, such as a frame with an unknown destination MAC address or a broadcast frame, the switch transmits the frame out all ports except the one on which the frame was received.
6. Port(s) instructed to transmit, remaining ports discard the frame—Based on the decision in step 5, a certain port or ports are told to transmit the frame while the rest are told to discard, or flush, it.
7. Egress port transmits the frame to Host2—In this example, it is assumed that the location of Host2 is known to the switch and only the port connecting to Host2 transmits the frame.
One advantage of a shared bus architecture is that every port except the ingress port receives a copy of the frame automatically, easily enabling multicast and broadcast traffic without the need to replicate the frames for each port. This example is greatly simplified; Chapter 3, "Catalyst Switching Architecture," discusses in detail the Catalyst platforms that utilize a shared bus architecture.
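The forwarding decision in the steps above can be condensed into a short sketch: every port "sees" the frame on the bus, and the lookup decides which port(s) actually transmit it. The MAC table entries and port names below are invented for illustration.

```python
# Condensed sketch of the shared-bus forwarding decision: a known unicast
# address selects one egress port; an unknown or broadcast destination is
# flooded out every port except the ingress port.

mac_table = {"00:aa": "port1", "00:bb": "port2"}   # learned addresses (invented)
all_ports = ["port1", "port2", "port3", "port4"]

def egress_ports(dst_mac: str, ingress: str) -> list:
    """Return the port(s) that should transmit; the rest flush the frame."""
    if dst_mac in mac_table:
        out = mac_table[dst_mac]
        return [] if out == ingress else [out]     # never echo out the ingress
    return [p for p in all_ports if p != ingress]  # unknown/broadcast: flood

print(egress_ports("00:bb", "port1"))   # → ['port2']
print(egress_ports("ff:ff", "port1"))   # → ['port2', 'port3', 'port4']
```

The flooding branch is what makes multicast and broadcast nearly free on a shared bus: the copies already exist at every port, and the decision only tells ports whether to transmit or flush.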
Figure 2-3. Frame Flow in a Shared Bus
2.2 Crossbar Switching
In the shared bus architecture example, the speed of the shared data bus determines much of the overall traffic handling capacity of the switch. Becau the bus is shared, line cards must wait their turns to communicate, and this limits overall bandwidth.
A solution to the limitations imposed by the shared bus architecture is the implementation of a crossbar switch fabric, as shown in Figure 2-4. The term crossbar means different things on different switch platforms, but it essentially indicates multiple data channels or paths between line cards that can be used simultaneously.
In the case of the Cisco Catalyst 5500 series, one of the first crossbar architectures advertised by Cisco, three individual 1.2-Gbps data buses are implemented. Newer Catalyst 5500 series line cards have the necessary connector pins to connect to all three buses simultaneously, taking advantage of 3.6 Gbps of aggregate bandwidth. Legacy line cards from the Catalyst 5000 are still compatible with the Catalyst 5500 series by connecting to only one of the three data buses. Gigabit Ethernet cards on the Catalyst 5500 platform require access to all three buses.
A crossbar fabric on the Catalyst 6500 series is enabled with the Switch Fabric Module (SFM) and Switch Fabric Module 2 (SFM2). The SFM provides 128 Gbps of bandwidth (256 Gbps full duplex) to line cards via 16 individual 8-Gbps connections to the crossbar switch fabric. The SFM2 was introduced to support the Catalyst 6513 13-slot chassis and includes architectural optimizations over the SFM.
Figure 2-4. Crossbar Switch Fabric
3. Buffering Data
Frames must wait their turn for the central arbiter before being transmitted in shared bus architectures. Frames can also potentially be delayed when congestion occurs in a crossbar switch fabric. As a result, frames must be buffered until transmitted. Without an effective buffering scheme, frames are more likely to be dropped anytime traffic oversubscription or congestion occurs.
Buffers are used when more traffic is forwarded to a port than that port can transmit. Reasons for this include the following:
•Speed mismatch between ingress and egress ports
•Multiple input ports feeding a single output port
•Half-duplex collisions on an output port
•A combination of all the above
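The effect of these causes can be sketched with a toy egress queue: frames arriving faster than the port can transmit accumulate in the buffer, and once the buffer is exhausted, further frames are dropped. The buffer size and frame names are invented for illustration.

```python
from collections import deque

# Hedged sketch of why buffering matters: an egress port with a finite
# buffer queues frames it cannot yet transmit and drops the overflow.

class EgressPort:
    def __init__(self, buffer_frames: int):
        self.queue = deque()
        self.capacity = buffer_frames
        self.drops = 0

    def enqueue(self, frame):
        if len(self.queue) < self.capacity:
            self.queue.append(frame)
        else:
            self.drops += 1            # buffer exhausted: frame is discarded

    def transmit_one(self):
        return self.queue.popleft() if self.queue else None

# Two fast ingress ports feeding one egress port that can hold only 3 frames:
port = EgressPort(buffer_frames=3)
for f in ["a", "b", "c", "d", "e"]:
    port.enqueue(f)
print(len(port.queue), port.drops)  # → 3 2
```

The two memory-management schemes described next differ mainly in where this buffer lives: dedicated to each port, or drawn from a pool shared by all ports.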
To prevent frames from being dropped, two common types of memory management are used in Catalyst switches:
•Port buffered memory
•Shared memory
3.1 Port Buffered Memory
Switches utilizing port buffered memory, such as the Catalyst 5000, provide each Ethernet port with a certain amount of high-speed memory to buffer frames until they are transmitted. A disadvantage of port buffered memory is that frames are dropped when a port runs out of buffers. One method of maximizing the benefit of buffers is the use of flexible buffer sizes. Catalyst 5000 Ethernet line card port buffer memory is flexible and can create frame buffers for any frame size, making the most of the available buffer memory. Catalyst 5000 Ethernet cards that use the SAINT ASIC contain 192 KB of buffer memory per port: 24 KB for receive, or input, buffers and 168 KB for transmit, or output, buffers.
Using the 168 KB of transmit buffers, each port can create as many as 2500 64-byte buffers. With most of the buffers in use as an output queue, the Catalyst 5000 family has eliminated head-of-line blocking issues. (You learn more about head-of-line blocking later in this chapter in the section "Congestion and Head-of-Line Blocking.") In normal operation, the input queue is never used for more than one frame, because the switching bus runs at a high speed.
Figure 2-5 illustrates port buffered memory.
Figure 2-5. Port Buffered Memory
3.2 Shared Memory
Some of the earliest Cisco switches use a shared memory design for port buffering. Switches using a shared memory architecture provide all ports simultaneous access to that memory in the form of shared frame or packet buffers. All ingress frames are stored in a shared memory pool until the egress ports are ready to transmit. The switch dynamically allocates the shared memory in the form of buffers, accommodating ports with high amounts of ingress traffic without allocating unnecessary buffers for idle ports.
The Catalyst 1200 series switch is an early example of a shared memory switch. The Catalyst 1200 supports both Ethernet and FDDI and has 4 MB of shared packet dynamic random-access memory (DRAM). Packets are handled first in, first out (FIFO).
More recent examples of switches using shared memory architectures are the Catalyst 4000 and 4500 series switches. The Catalyst 4000 with a Supervisor I utilizes 8 MB of static RAM (SRAM) as dynamic frame buffers. All frames are switched using a central processor or ASIC and are stored in packet buffers until switched. The Catalyst 4000 Supervisor I can create approximately 4000 shared packet buffers. The Catalyst 4500 Supervisor IV, for example, utilizes 16 MB of SRAM for packet buffers. Shared memory buffer sizes vary depending on the platform, but are most often allocated in increments ranging from 64 to 256 bytes. Figure 2-6 illustrates how incoming frames are stored in 64-byte increments in shared memory until switched by the switching engine.
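Allocation in fixed increments can be sketched as a simple cell pool. This is an illustrative model only, loosely based on the 64-byte increment and the roughly 4000-buffer figure mentioned above; the class and its numbers are not any actual Catalyst data structure.

```python
import math

# Sketch of shared-memory buffering: one pool of fixed-size cells (64-byte
# increments, per the text) handed out on demand to whichever port needs them.

CELL_BYTES = 64

def cells_needed(frame_len: int) -> int:
    """Number of 64-byte cells a frame consumes in the shared pool."""
    return math.ceil(frame_len / CELL_BYTES)

class SharedPool:
    def __init__(self, cells: int):
        self.free = cells

    def store(self, frame_len: int) -> bool:
        """Allocate cells for a frame; False means the pool is exhausted."""
        n = cells_needed(frame_len)
        if n > self.free:
            return False               # no buffers left: frame is dropped
        self.free -= n
        return True

pool = SharedPool(cells=4000)          # illustrative pool size
print(cells_needed(1518))  # → 24 cells for a maximum-size Ethernet frame
```

Because idle ports consume no cells, a busy port can draw on nearly the whole pool, which is the key advantage over fixed per-port buffers.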
Figure 2-6. Shared Memory Architecture
4. Oversubscribing the Switch Fabric
Switch manufacturers use the term non-blocking to indicate that some or all of the switched ports have connections to the switch fabric equal to their line speed. For example, an 8-port Gigabit Ethernet module would require 8 Gbps of bandwidth into the switch fabric for the ports to be considered non-blocking. All but the highest-end switching platforms and configurations have the potential to oversubscribe access to the switching fabric.
Depending on the application, oversubscribing ports may or may not be an issue. For example, a 48-port 10/100/1000 Gigabit Ethernet module with all ports running at 1 Gbps would require 48 Gbps of bandwidth into the switch fabric. If many or all ports were connected to high-speed file servers capable of generating consistent streams of traffic, this one line module could outstrip the bandwidth of the entire switching fabric. If the module is connected entirely to end-user workstations with lower bandwidth requirements, a card that oversubscribes the switch fabric may not significantly impact performance. Cisco offers both non-blocking and blocking configurations on various platforms, depending on bandwidth requirements. Check the specifications of each platform and the available line cards to determine the aggregate bandwidth of the connection into the switch fabric.
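The check described above is simple arithmetic, sketched here for clarity. The 8-Gbps fabric-channel figure is an assumption for the example (it matches the per-slot SFM channel mentioned earlier), not a statement about any particular module.

```python
# Back-of-the-envelope oversubscription check: worst-case ingress demand
# versus the line card's connection into the switch fabric.

def oversubscription(ports: int, port_gbps: float, fabric_gbps: float) -> float:
    """Ratio of aggregate port bandwidth to fabric bandwidth (>1.0 = blocking)."""
    return (ports * port_gbps) / fabric_gbps

# A hypothetical 48-port gigabit module on a single 8-Gbps fabric channel:
print(oversubscription(48, 1.0, 8.0))  # → 6.0, i.e. 6:1 oversubscribed
```

A ratio of 1.0 or less is what vendors mean by non-blocking; anything above 1.0 only matters if the attached hosts can actually drive their ports at line rate simultaneously.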
5. Congestion and Head-of-Line Blocking
Head-of-line blocking occurs whenever traffic waiting to be transmitted prevents, or blocks, traffic destined elsewhere from being transmitted. It occurs most often when multiple high-speed data sources are sending to the same destination. In the earlier shared bus example, the central arbiter used the round-robin approach to service traffic moving from one line card to another. Ports on each line card request access to transmit via a local arbiter. In turn, each line card's local arbiter waits its turn for the central arbiter to grant access to the switching bus. Once access is granted to the transmitting line card, the central arbiter has to wait for the receiving line card to fully receive the frames before servicing the next request in line. The situation is not much different from needing to make a simple deposit at a bank with one teller and many lines, while the person being helped is conducting a complex transaction.
In Figure 2-7, a congestion scenario is created using a traffic generator. Port 1 on the traffic generator is connected to Port 1 on the switch, generating traffic at a 50 percent rate, destined for both Ports 3 and 4. Port 2 on the traffic generator is connected to Port 2 on the switch, generating traffic at a 100 percent rate, destined only for Port 4. This situation creates congestion for traffic destined to be forwarded by Port 4 on the switch, because traffic equal to 150 percent of the forwarding capability of that port is being sent. Without proper buffering and forwarding algorithms, traffic destined to be transmitted by Port 3 on the switch may have to wait until the congestion on Port 4 clears.
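The blocking mechanism in this scenario can be shown with a toy single-FIFO input queue: the frame at the head is bound for the congested Port 4, so the frame behind it, bound for the idle Port 3, cannot move even though Port 3 is free. The queue contents below are invented to mirror the figure's scenario.

```python
from collections import deque

# Toy head-of-line blocking demo: one FIFO input queue per ingress port.
# Only the head of the queue can be serviced, so a frame stuck behind a
# frame bound for a congested port is blocked too.

congested = {"port4"}                  # Port 4 is oversubscribed, per Figure 2-7

def service(queue):
    """Transmit frames from the head until one hits a congested egress port."""
    sent = []
    while queue:
        frame, egress = queue[0]
        if egress in congested:
            break                      # the head blocks everything behind it
        sent.append(queue.popleft())
    return sent

input_queue = deque([("f1", "port4"), ("f2", "port3")])
print(service(input_queue))  # → [] : f2 cannot reach idle port3 behind f1
```

Per-port output queues (or virtual output queues at the ingress) avoid this by letting f2 advance independently of f1, which is the design point behind the Catalyst 5000 transmit-buffer allocation described earlier.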
Figure 2-7. Head-of-Line Blocking
Head-of-line blocking can also be experienced with crossbar switch fabrics, because many, if not all, line cards can send into the fabric simultaneously, and frames bound for a congested egress port can still delay frames queued behind them at the ingress.