The switch is an indispensable network device in data center migration and plays an important role in the data center. When purchasing and operating switches, most people care about performance figures such as backplane bandwidth, port density, single-port speed, and protocol features, while little attention is paid to the buffer, an often overlooked indicator. In fact, the buffer is an important performance parameter of a data center switch and a key index for measuring a switch's capability.

The switch buffer should not be confused with a CPU cache, even though both are often just called "cache". A CPU cache is a block of memory address space inside the CPU: when the hardware reads data, it looks in the cache first and falls back to main memory only if the data is not found there, and a cache lookup is much faster than a memory lookup. On a switch, the cache is the data-exchange buffer, sometimes called the packet buffer: a queue structure the switch uses to match speeds between different network devices. Burst data can be held in the buffer until a slower device is able to process it.

A switch has three forwarding modes: cut-through, store-and-forward, and fragment-free, of which store-and-forward is the most widely used. In fact, whichever forwarding mode is used, the buffer comes into play. Cut-through forwarding parses only the first few bytes of a frame before forwarding it, so very little data is held in the buffer and forwarding is fast; but because the frame is never checked as a whole, corrupted frames are easily forwarded onward.

Most switch buffers are not large, usually several MB to a few dozen MB. Although single-port bandwidth has grown from 1G to 100G in less than ten years, buffers have not grown accordingly. If burst traffic arrives on a 100G port, a buffer of ten or so MB clearly becomes a limit in real applications and packet loss will occur, unless it is certain that the application generates no bursty traffic.
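As a back-of-the-envelope illustration of why a buffer of that size runs out so quickly at 100G, the sketch below computes how long a buffer can absorb a burst before overflowing. The port speed, drain rate, and buffer size are illustrative assumptions, not the specs of any particular switch:

```python
# Back-of-the-envelope: how long can a buffer absorb a burst?
# All numbers are illustrative assumptions, not specs of a real switch.

PORT_SPEED_GBPS = 100    # assumed ingress burst rate (Gbit/s)
DRAIN_SPEED_GBPS = 10    # assumed rate at which the congested egress drains (Gbit/s)
BUFFER_MB = 10           # assumed packet buffer available to this port (MB)

buffer_bits = BUFFER_MB * 8 * 1024 * 1024                 # buffer size in bits
fill_rate = (PORT_SPEED_GBPS - DRAIN_SPEED_GBPS) * 1e9    # net fill rate, bits/s

absorb_time_us = buffer_bits / fill_rate * 1e6
print(f"~{absorb_time_us:.0f} microseconds until the {BUFFER_MB} MB buffer overflows")
```

With these numbers the buffer lasts well under a millisecond, which is why bursty applications on fast ports see drops long before average utilisation looks high.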

Well, someone might ask: if the buffer is so important, why not simply make it bigger? With today's chip integration technology, that should not be difficult to implement. Indeed, the buffer can in theory be enlarged through the chip design process, but a large buffer affects packet-forwarding speed under normal traffic conditions: a larger buffer space takes more time to address, and it increases the cost of the equipment. In application scenarios with relatively strict latency requirements, a big buffer is counterproductive, so it is not simply a matter of expanding the buffer; it is a choice between buffer size and latency, and "you cannot have both the fish and the bear's paw". Of course, as technology advances, the buffering capability of switches can keep increasing without increasing latency; but limited by clock speed and bus bandwidth, buffer performance cannot improve dramatically, and once the balance of power consumption and cost is considered, buffer capacity will not grow by much either. Some switches also hang a DRAM chip off the switching chip to expand the switch's buffer; latency may then be higher, but the buffer can be made much larger, exceeding 1 GB.

Buffering is important, but how much do we actually need? There is no single right answer. A huge buffer means the network will not discard any traffic, but it also means latency in the network increases, so the choice depends on the data center's business. For example, in search, one query may look up results across a massive database, which easily produces bursts of network traffic and can even cause congestion; such a network business needs deep-buffer switching equipment. In the financial sector, on the other hand, especially stock-trading networks, a nanosecond of delay or a lost packet can mean huge gains or losses; such fields place extremely high demands on the network and do not allow congestion to appear, so there is no need for a large buffer. Some financial data centers even require low-latency switches whose forwarding delay is controlled at the nanosecond level.
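The latency side of that trade-off is easy to put numbers on. The sketch below uses illustrative figures, not any specific switch, to compute the worst-case queuing delay seen by a packet that lands at the back of a full buffer draining out of a single 10 Gbit/s port; the last row corresponds roughly to the external-DRAM case mentioned above:

```python
# Worst-case queuing delay: a packet arriving at the back of a full
# buffer waits for everything ahead of it to drain out of the port.
# Illustrative assumption: all queued bytes leave on one 10 Gbit/s link.

LINK_GBPS = 10

for buffer_mb in (1, 10, 100, 1024):   # 1024 MB ~ the external-DRAM case
    queued_bits = buffer_mb * 8 * 1024 * 1024
    delay_ms = queued_bits / (LINK_GBPS * 1e9) * 1e3
    print(f"{buffer_mb:>5} MB queued -> ~{delay_ms:8.2f} ms of added latency")
```

A gigabyte-class buffer can add close to a second of queuing delay in this scenario, which makes clear why deep buffers and nanosecond-level trading networks do not mix.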
The need for buffering usually comes from mismatched network interface rates, sudden bursts of traffic, or many-to-one traffic patterns, and the most common problem is many-to-one traffic. For example, suppose an application is built on a cluster of server nodes. If one of the nodes requests data from all the other nodes at the same time, all the replies arrive together, and the combined traffic floods the switch port facing the requester. If the switch does not have enough egress buffer, it may discard some traffic or increase the application's latency; sufficient buffering prevents the packet loss, and the retransmission delays of lower-level protocols, that would otherwise follow.

The buffer is a switch-wide resource: the switching chip has a shared buffer, and how much of it each port gets can be adjusted. The buffer is managed in one of two modes, QoS mode or FC (flow control) mode. Every packet is stored, processed, and then forwarded, but the storage space is limited, so packet loss occurs when the buffer is insufficient. In QoS mode, the switch does not send flow-control (pause) frames when congestion occurs; instead it schedules the traffic of different priorities on the port, and when something must be dropped, low-priority packets are dropped first, so selective packet loss can be achieved through configuration. In FC mode, by contrast, the switch sends pause frames upstream to throttle the sender rather than dropping traffic.
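To make the QoS-mode behaviour concrete, here is a toy model of selective packet loss. The class name, capacity, and priority values are all hypothetical; a real switch implements this in hardware with per-queue thresholds, but the idea of dropping the lowest priority first is the same:

```python
from collections import deque

class PriorityDropQueue:
    """Toy model of the 'QoS mode' described above: when the shared
    buffer is full, evict the lowest-priority queued packet instead of
    pausing the sender. Hypothetical sketch, not vendor code."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.queues = {}          # priority -> deque of packet sizes

    def enqueue(self, size, priority):
        # Make room by evicting from the lowest priority strictly below
        # the new packet; if none exists, the arriving packet is dropped.
        while self.used + size > self.capacity:
            victim = min((p for p in self.queues
                          if self.queues[p] and p < priority), default=None)
            if victim is None:
                return False      # congestion: drop the arriving packet
            self.used -= self.queues[victim].popleft()
        self.queues.setdefault(priority, deque()).append(size)
        self.used += size
        return True

q = PriorityDropQueue(capacity_bytes=9000)
q.enqueue(6000, priority=0)         # low-priority bulk traffic
print(q.enqueue(6000, priority=7))  # high priority evicts it -> True
print(q.enqueue(6000, priority=0))  # low priority, buffer full -> False
```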
