Many of our customers use a compute chassis such as a Cisco UCS or an HP c7000. These systems commonly have a number of bladed servers that connect to an embedded switch over a copper bus. All but UCS use a type of "dumb" switch (no zoning) which connects to a core fabric switch (this is true for both FC and iSCSI). UCS connects to an additional switch/bridge, a "Fabric Interconnect," and then to a core switch. Each one of these steps increases oversubscription.

For example, a bladed chassis might have 16 discrete servers. Each of these servers connects to an internal HBA, which connects to the embedded switch. This switch takes those 16 servers and performs a form of NAT, forwarding all of their traffic to a lesser number of ports, commonly 4 and as many as 16. These in turn log into a core switch, passing frames over to storage. The oversubscription rate can get quite high if you use a hypervisor on your discrete servers. Add virtual machines to each blade, let's say 4 VMs per blade, and what do we end up with?

64 initiators share sixteen 8Gb ports. Those sixteen 8Gb ports are funneled into an embedded switch with eight 8Gb external ports, so we now have 8 entrance points for 64 hosts to communicate with storage, backup, virtual devices, etc. On an 8Gb switch, that is eight hosts for each 8Gb port. For daily operations this is usually fine, but if you have several high-demand systems on the chassis (a database, development systems), this configuration can behave like a bottleneck. This is the driving force behind 16Gb Fibre Channel and the coming 32Gb standard. In one support case, each chassis had only two iSCSI connections to the core switch, providing, in real-world use, substantially less than 20Gb of bandwidth for all 64 hosts. This configuration is particularly devastating for iSCSI. From VMware's Best Practices (emphasis is mine):

"For iSCSI and NFS, make sure that your network topology does not contain Ethernet bottlenecks, where multiple links are routed through fewer links, potentially resulting in oversubscription and dropped network packets. Any time a number of links transmitting near capacity are switched to a smaller number of links, such oversubscription is a possibility. Recovering from these dropped network packets results in large performance degradation. In addition to time spent determining that data was dropped, the retransmission uses network bandwidth that could otherwise be used for new transactions.

"Be aware that with software-initiated iSCSI and NFS the network protocol processing takes place on the host system, and thus these might require more CPU resources than other storage options."
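To make that arithmetic concrete, here is a quick back-of-the-envelope sketch using the numbers from the example above. The figures (16 blades, 4 VMs per blade, eight 8Gb uplinks, and the two 10Gb links from the support case) are illustrative, so substitute your own port counts and speeds.

```
# Rough oversubscription math for the 16-blade example (POSIX shell)
blades=16          # discrete servers in the chassis
vms_per_blade=4    # virtual machines per blade
uplinks=8          # 8Gb external ports on the embedded switch
uplink_gb=8

initiators=$(( blades * vms_per_blade ))
echo "initiators             : $initiators"                          # 64
echo "hosts per 8Gb uplink   : $(( initiators / uplinks ))"          # 8
echo "total uplink bandwidth : $(( uplinks * uplink_gb ))Gb for $initiators initiators"

# The support case: only two 10Gb iSCSI links to the core switch
echo "support-case bandwidth : $(( 2 * 10 ))Gb for $initiators initiators"
```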
The cumulative impact of additional CPU overhead is another factor when laying out your iSCSI network: every dropped packet and retransmission costs host CPU cycles as well as bandwidth. In other words, err on the side of too much bandwidth instead of too little.

Assuming that you have plenty of network ports, please do avail yourself of all of Pure's ports. You will need to make sure that you balance maximizing connections to the Pure Storage FlashArray against any host limitation you may have on the number of connections; a quick way to see what a host currently has is sketched below.
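On a Linux host using open-iscsi and dm-multipath, for example, the following commands show how many iSCSI sessions and paths the initiator has actually established (other operating systems have equivalents, and the exact output layout varies by distribution):

```
# One line per established iSCSI session (target portal and target IQN)
iscsiadm -m session

# Paths behind each multipath device; with two host NICs bound to the
# software iSCSI initiator and four array ports you would typically
# expect eight active paths per volume
multipath -ll
```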
Why? Storage devices are often oversubscribed in today's SAN. By adding more physical paths you help offset that oversubscription: you provide more pathways, more resiliency, more performance, mitigation of physical problems, and, last but not least, you take better advantage of our CPU allocation.

Open our UI and click on SYSTEM -> Connections. Note that at the bottom of the connections view we list our own ports as "Target Ports" and show our connection speed; this is a nice way to easily verify whether you have connected to some rogue port fixed at a lower speed. Here's the command syntax and output: pureport list --initiator, which returns the columns Initiator WWN, Initiator Portal, Initiator IQN, Target, Target WWN, Target Portal, and Target IQN.
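For reference, here is a rough sketch of what that listing can look like for an iSCSI host. The IQNs, IP addresses, and port names below are invented placeholders, and on an iSCSI-only connection the WWN columns are simply blank:

```
pureuser@array> pureport list --initiator
Initiator WWN  Initiator Portal     Initiator IQN                            Target    Target WWN  Target Portal        Target IQN
-              192.168.10.21:51432  iqn.1998-01.com.vmware:esxi-01-1a2b3c4d  CT0.ETH4  -           192.168.10.101:3260  iqn.2010-06.com.purestorage:flasharray.0123456789abcdef
-              192.168.10.22:51433  iqn.1998-01.com.vmware:esxi-01-1a2b3c4d  CT1.ETH4  -           192.168.10.102:3260  iqn.2010-06.com.purestorage:flasharray.0123456789abcdef
```

If a host shows up against fewer target ports than you expect, or the GUI shows a target port negotiated at a lower speed than its peers, that is the rogue connection worth chasing down.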