Redpaper

IBM System Storage N series: An iSCSI Performance Overview

Alex Osuna
Gary R Nunn
Toby Creek
The iSCSI protocol has enabled information technology organizations to lower
the cost of their storage area network (SAN) deployments. Cost is of little
concern, however, if the solution does not perform to expectations. This IBM®
Redpaper examines some of the issues and options that are available with the
IBM N series storage product range, with the aim of improving performance in
an iSCSI environment.
Since the iSCSI protocol was ratified in February 2003, the ecosystem of
products and solutions has grown at a steady pace, and deployments of iSCSI
have grown even faster. Significant growth in a technology attracts the
attention of enterprise customers, who watch the market closely for emerging
technologies that offer them reduced cost, easier integration, and better
service levels.
Performance is a major concern for these customers. Enterprise customers
spend significant amounts of money on a variety of projects, so it is
imperative that they get the maximum value from any deployed solution.
Maximum value is usually achieved by increasing the utilization of their
investment. Performance enhancement of any solution means that they can
increase utilization, with the aim of deploying less hardware and, therefore,
minimizing the capital investment that is required.
This IBM Redpaper explores the options and issues that you must consider when
building a high-performance solution. It makes specific technical
recommendations for optimizing performance with IBM N series storage
systems.
General network design recommendations
The foundation of a performance-oriented iSCSI solution is the storage network.
An iSCSI storage network should consist of enterprise-class Gigabit Ethernet
switches that support advanced networking features such as link aggregation,
jumbo frames, and VLANs. The switches that you consider need to be able to
support full wire speed on their ports; consequently, the switch backplane
needs sufficient bandwidth to support the anticipated peak traffic. For
manageability and security, the storage network should be segmented from any
front-end application or user networks, either by employing a VLAN or by using a
separate switch chassis, as illustrated in Figure 1.
Figure 1 Example of basic VLANs (hosts segmented onto VLAN 1 and VLAN 2 through either (i) a single IP switch chassis or (ii) multiple switch chassis joined by a switch link)
To simplify the fabric and to avoid network congestion, you need to configure all
switch and host ports in the SAN for the highest-speed full-duplex operation and
to override any auto-negotiation functionality, provided that the installed
operating system supports full-duplex operation. Full-duplex operation allows the
switch and host to exchange data in both directions at the same time, whereas
half-duplex operation requires that transmission occur in only one direction at a
time. In half-duplex operation, a simultaneous transmission is termed a collision;
the colliding packets are discarded and must be retransmitted.
Attention: Half-duplex communication is required when the physical medium
lacks enough wires or paths to accommodate bidirectional signaling (such as
coaxial cable) or when non-intelligent network equipment is used. Neither of
these conditions should exist in a modern IP network that is designed to carry
storage traffic.
Initiator and target technologies
A storage network consists of two types of equipment, initiators and targets, as
illustrated in Figure 2. Initiators are data consumers, such as hosts. Targets are
data providers, such as disk arrays or tape libraries. Initiators and targets,
collectively referred to as endpoints, can be software, software with hardware
assistance, or hardware. This section examines the features and issues with
each of these technologies.
Figure 2 Initiators and targets (hosts acting as initiators, connected through a SAN/IP cloud to disk storage and tape storage acting as targets)
Software-only solutions
Software initiators and targets are virtual SCSI adapters that are implemented as
part of the operating environment. They use the host's CPU resources and network
adapters to transfer data. Software endpoints are easy to deploy and are either
low-cost or free with the host operating system. The IBM N series software iSCSI
target is provided free of charge with the purchase of any other protocol license.
Software implementations can drive higher throughput than other implementations
if sufficient host CPU resources are available; this is especially true when
smaller block sizes are used. Integration with the host operating system is
usually very good, using existing management tools and interfaces. Starting up a
host from an iSCSI device is not possible with software initiators unless a
pre-startup execution environment exists; at a minimum, a DHCP server and some
kind of file transfer protocol, such as TFTP, are required. Figure 3 provides a
comparison chart of the iSCSI endpoint technologies.
Figure 3 iSCSI endpoint technology comparison chart
Software with hardware assistance
Hardware assistance in the context of an iSCSI software endpoint generally
comes in the form of a TCP Offload Engine (TOE). With a TOE, the TCP stack
processing, including framing, reordering of packets, checksums, and similar
functions, is offloaded to a dedicated card with its own network interface port.
The TOE card can be a general-purpose card that is able to offload TCP traffic,
or it can be restricted to accelerating only iSCSI traffic.
TOE adapters enjoy most of the benefits of software initiators with additional host
CPU offload. TOE adapters can also support advanced networking features such
as link aggregation. Because the software initiator is still used on the host,
integration with layered management applications is unaffected by the addition of
the TOE hardware.
Hardware-only solutions
Hardware adapters offload the TCP stack processing as well as the iSCSI
command processing functions. The hardware adapter looks and functions as a
SCSI disk interface, just as a Fibre Channel HBA does. The operating system
has no knowledge of the underlying networking technology or interfaces. A
separate management interface is used to configure the card’s networking
parameters.
Hardware-only solutions offload the largest amount of processing from the host
CPU. Because they function as SCSI adapters, it is possible to start up from
them if they provide the appropriate host BIOS interfaces and are recognized as
a startup device. Advanced networking features might not be available because
the operating system has no visibility into the network functions in the card.
Jumbo frames
Jumbo frame is a term applied to an Ethernet frame that carries more than the
standard 1500-byte data payload. The most commonly quoted size for a jumbo
frame is 9000 bytes, which is large enough for 8 KB application data plus some
amount of upper-layer protocol overhead.
Jumbo frames can improve performance in two ways:
1. Packet assembly and disassembly in high-throughput environments can be an
intensive operation. A jumbo frame decreases the number of packet-processing
operations by up to a factor of six.
2. The overhead that is associated with the Ethernet packet when prepared for
transmission is a smaller percentage of a jumbo frame than of a standard-sized
frame, as the short calculation after this list illustrates.
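To make these effects concrete, the following short calculation (an illustration added for this discussion, not a measurement) compares moving an 8 KB application block with a standard 1500-byte MTU and with a 9000-byte jumbo MTU. The per-frame figures of 38 bytes of Ethernet framing and 40 bytes of IP and TCP headers are assumptions for the sketch.

import math

ETH_OVERHEAD = 38    # preamble + Ethernet header + FCS + interframe gap (assumed)
IP_TCP_HEADERS = 40  # IPv4 + TCP headers without options (assumed)

def frames_and_overhead(app_bytes, mtu):
    # Application data carried per frame once IP/TCP headers are subtracted
    payload_per_frame = mtu - IP_TCP_HEADERS
    # Number of packets that must be assembled and disassembled
    frames = math.ceil(app_bytes / payload_per_frame)
    wire_bytes = app_bytes + frames * (ETH_OVERHEAD + IP_TCP_HEADERS)
    return frames, 1 - app_bytes / wire_bytes

for mtu in (1500, 9000):
    frames, overhead = frames_and_overhead(8 * 1024, mtu)
    print(f"MTU {mtu}: {frames} frames, {overhead:.1%} overhead")

With these assumptions, the 8 KB block needs six standard frames but only one jumbo frame, and the protocol overhead drops from roughly 5% to under 1% of the bytes on the wire.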
Jumbo frames require the endpoints and all devices between them in the
network to be configured to accept the larger packet size if they are not
configured for them by default, including any network switching equipment.
To configure an IBM N series storage system for jumbo frames, you can use two
methods:
- You can make the changes from the command line using CLI access.
- You can make the changes using FilerView®.
Example 1 shows how to check and set the IBM N series MTU parameter using
CLI access.
Example 1 To check and set IBM N series MTU parameter using CLI access
to check current setting enter ‘ifconfig interface’
itsotuc3> ifconfig e0a
e0a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 1500
inet 192.168.3.237 netmask 0xffffff00 broadcast 192.168.3.255
partner inet 192.168.3.238 (not in use)
ether 00:a0:98:04:59:62 (auto-100tx-fd-up) flowcontrol full
enter ifconfig interface mtusize 9000
itsotuc3> ifconfig e0a mtusize 9000
itsotuc3> Thu Feb 15 12:16:10 MST [itsotuc3: 10/100/1000-VI/e0a:info]: Ethernet e0a:
Link being reconfigured
Thu Feb 15 12:16:14 MST [itsotuc3: 10/100/1000-VI/e0a:info]: Ethernet e0a: Link up
to confirm change enter ‘ifconfig interface’
itsotuc3*> ifconfig e0a
e0a: flags=948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM> mtu 9000
inet 9.11.218.237 netmask 0xffffff00 broadcast 9.11.218.255
inet 192.168.3.217 netmask 0xffffff00 broadcast 192.168.3.255
partner inet 192.168.3.238 (not in use)
ether 00:a0:98:04:59:62 (auto-100tx-fd-up) flowcontrol full
Note: If the telnet session is on the interface that is being altered, the
port is reset and the telnet session is disconnected. Restart the telnet session
and confirm that the change has occurred.
Attention: If you make the changes using the command line, you must add the
configured maximum transmission unit size to the /etc/rc file for the change
to persist across reboots. You can also configure jumbo frame support by
running the setup command again.
To perform the changes using FilerView, follow these steps:
1. Access FilerView by entering the filer host name or IP address into the
browser address bar:
http://<servername or IP address>/na_admin
2. Click the FilerView icon. A new browser window opens (Figure 4).
Figure 4 Available Network options
3. Click Network to expand and display the available options under Network
(Figure 5).
Figure 5 Modifying NIC parameters
4. Click Manage Interfaces and then modify the NIC parameters that you want
to alter (Figure 6).
Figure 6 MTU adjustment
5. Alter the MTU size to 9000 when using jumbo frames (as suggested
previously).
6. Click Apply to submit the change. After a short delay, the browser returns
to the Manage Network Interfaces view.
Note: If the telnet session is on the interface that is altered, the port is
reset and the telnet session is disconnected. Restart the telnet session and
confirm that the change has occurred.
Tip: When you make this configuration change from FilerView, the /etc/rc
file is updated automatically, so users do not have to edit the /etc/rc file.
If you make the change from the command line, you must add the configured
maximum transmission unit size to the /etc/rc file for it to persist across
reboots. You can also configure jumbo frame support by running the setup
command again. Example 2 shows the output of the /etc/rc file.
Example 2 Example output of the /etc/rc file
itsotuc3> rdfile /etc/rc
#Regenerated by registry Thu Feb 15 14:46:16 MST 2007
#Auto-generated by setup Thu Feb 15 10:21:17 MST 2007
hostname itsotuc3
ifconfig e0a `hostname`-e0a netmask 255.255.255.0 mediatype auto mtusize 9000
flowcontrol full partner 192.168.3.238
ifconfig e0b `hostname`-e0b mediatype auto flowcontrol full netmask 255.255.255.0
route add default 9.11.218.1 1
routed on
options dns.domainname itso.tucson
options dns.enable on
options nis.enable off
savecore
itsotuc3>
You can use larger frame sizes, but the 32-bit Ethernet CRC mechanism that is
used to generate the 4-byte Frame Check Sequence (FCS) at the end of the
frame begins to lose its effectiveness at around 12 000 bytes.
Link aggregation
Link aggregation is the technique of taking several distinct Ethernet links and
making them appear as a single link. Traffic is directed to one of the links in the
group using a distribution algorithm. Availability is also enhanced, because each
of the schemes that we present in this section can tolerate path failure with
minimal service disruption. This technology is referred to by many names,
including channel bonding, teaming, and trunking. The term trunking is
technically incorrect, because this term is used to describe VLAN packet tagging
as specified in IEEE 802.1Q.
The link aggregation technologies that we examine here are IEEE
802.3ad/EtherChannel, active-active multipathing, and iSCSI multiconnection
sessions.
IEEE 802.3ad link aggregation control protocol (LACP) and
EtherChannel
IEEE standard 802.3ad and EtherChannel both specify a method by which
multiple Ethernet links are aggregated to form a single network link. Packet traffic
is distributed among the links using a hash function based on a variety of
parameters, including the source and destination MAC addresses. The paths are
coalesced at the NIC driver layer or the operating system interface layer. A single
IP address is assigned to the set of links on both the initiator and target. Figure 7
illustrates bandwidth enhancement through link aggregation.
Figure 7 iSCSI architectures bandwidth enhancement - link aggregation (one stack of disk class driver, SCSI layer, iSCSI initiator, TCP/IP, and NIC driver feeding two aggregated NICs)
Because the hash function always returns the same value for a given
initiator/target pair, the traffic distribution across an aggregated link might not be
uniform. Link aggregation is best used to increase fan-in bandwidth to a target
rather than to create a larger pipe between a single initiator and target pair. A
round-robin algorithm would correct this problem, but it is not a valid algorithm
according to the 802.3ad standard. EtherChannel and other nonstandard
mechanisms do allow for the use of round-robin distribution.
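Returning to the hash-based distribution, the following sketch (illustrative only; the exact hash that a given switch or driver uses is implementation specific) shows why such a scheme pins each initiator/target pair to one physical link: a single pair never gets more than one link's worth of bandwidth, while many hosts fanning in to one target spread across the whole group. The host MAC addresses are hypothetical.

import hashlib

def select_link(src_mac, dst_mac, num_links):
    # Hash the address pair and map it to a link index (assumed algorithm)
    digest = hashlib.sha1(f"{src_mac}-{dst_mac}".encode()).digest()
    return digest[0] % num_links

target = "00:a0:98:04:59:62"                           # target MAC from Example 1
hosts = [f"00:0c:29:00:00:{i:02x}" for i in range(6)]  # hypothetical host MACs

# Every frame for a given host/target pair always lands on the same link,
# so aggregate bandwidth grows with the number of pairs, not per pair.
for host in hosts:
    print(host, "-> link", select_link(host, target, num_links=2))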
Active-Active multipathing
Active-Active multipathing requires a software multipathing layer to be inserted
in the software stack above the SCSI layer. This layer of software takes the
multiple device paths presented by the SCSI layer and coalesces them into
single device paths, even though there are multiple links to each device (Figure 8).
This is accomplished by using SCSI inquiry commands to check for commonality
in device paths, usually the LUN serial number. This abstraction prevents the
operating system from trying to access multiple device paths as though they were
individual devices, which would most likely result in data corruption.
Figure 8 SCSI architectures bandwidth enhancement - Active-Active multipathing (a disk class driver and multipathing layer above the SCSI layer, with two parallel stacks of iSCSI initiator, TCP/IP, NIC driver, and NIC)
Active-Active multipathing can enhance the available bandwidth between an
initiator and target, but this enhancement depends on the implementation of
the load-balancing algorithm. Because SCSI does not have a robust sequencing
mechanism, SCSI exchanges must always be delivered in sequence, which
generally means that a single path must be used for each exchange. After the
command frame is placed into an IP packet, ordering within the connection is
guaranteed by TCP, but the software must properly handle command frames before
encapsulation takes place.
Active-Active multipathing is most useful in situations where multiple HBAs are
used.
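As a rough model of the coalescing step described above (not the actual multipathing driver logic), the sketch below groups raw device paths by the LUN serial number that a SCSI inquiry would return, so that the operating system sees one logical device with several usable paths. The device paths and serial numbers are hypothetical sample data.

from collections import defaultdict

# (device path, LUN serial number returned by a SCSI inquiry) -- sample data
paths = [
    ("/dev/sdb", "P3xFg9a1"),
    ("/dev/sdc", "P3xFg9a1"),   # same LUN reached over a second iSCSI session
    ("/dev/sdd", "P3xFg9b7"),
]

def coalesce(raw_paths):
    # Key each path by its serial number; paths sharing a serial are one LUN
    devices = defaultdict(list)
    for dev, serial in raw_paths:
        devices[serial].append(dev)
    return dict(devices)

for serial, devs in coalesce(paths).items():
    print(f"LUN {serial}: {len(devs)} path(s): {', '.join(devs)}")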
Multiconnection Sessions
Multiconnection sessions, or MCS, work just as the name implies: for each
iSCSI session, multiple connections are created (Figure 9). The number of
allowed connections is negotiated during login and session creation. Although it
is possible to create multiple connections over a single physical interface,
bandwidth enhancement requires that multiple physical interfaces be employed.
Figure 9 SCSI architectures bandwidth enhancement - multiconnection session (one disk class driver, SCSI layer, and iSCSI initiator feeding two parallel TCP/IP, NIC driver, and NIC stacks)
MCS must be implemented in both the initiator and the target. Distribution of
iSCSI traffic over the connections is not defined by the standard and is therefore
only as effective as the algorithm that is implemented.
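Because the standard leaves the distribution policy open, a simple round-robin spread over the negotiated connections, as in the hedged sketch below, is one possible approach; real initiators and targets may use different algorithms.

from itertools import cycle

class MultiConnectionSession:
    # Toy model of an iSCSI session that spreads PDUs over N connections
    def __init__(self, num_connections):
        self.connections = [f"conn-{i}" for i in range(num_connections)]
        self._next = cycle(self.connections)   # round-robin policy (one option)

    def send(self, pdu):
        conn = next(self._next)
        return f"{pdu} -> {conn}"

session = MultiConnectionSession(num_connections=2)
for n in range(4):
    print(session.send(f"SCSI command {n}"))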
Performance impact of iSCSI data integrity features
The iSCSI standard specifies additional features to detect data corruption.
Digests supplement the TCP header checksums and Ethernet frame check
sequence, which were not considered robust enough for use with storage networks
when the iSCSI standard was being developed. Digests are designed to detect bit
errors and to take action based on the error recovery level (ERL) that is
negotiated between the initiator and target at login time. With randomly
accessible storage and the ERLs typically negotiated, the usefulness of digests
is minimal.
Header digests are relatively inexpensive operations, because the 32-bit CRC
algorithm needs to process only the 48-byte header of the iSCSI Protocol Data
Unit (PDU). The performance impact of enabling header digests is on the order of
a 10% reduction in throughput versus no digest.
When compared to header digests, computing the data digest can be a much
more expensive operation. The data segment of an iSCSI PDU can be
significantly longer than the header portion of the PDU, which results in more
computation time during packet construction and a larger loss in I/O throughput
than header digests cause. How much performance is lost depends on the size of
the underlying SCSI command frame.
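The relative cost of the two digests can be seen with a rough sketch such as the one below. It uses zlib's CRC-32 as a stand-in (iSCSI digests actually specify CRC-32C) and simply contrasts a 48-byte basic header segment with an 8 KB data segment; the sizes and the timing loop are assumptions for illustration.

import time
import zlib

HEADER = bytes(48)       # basic header segment of an iSCSI PDU
DATA = bytes(8 * 1024)   # a possible data segment (length varies per PDU)

def crc_seconds(buf, rounds=100_000):
    # Time `rounds` CRC computations over buf and return elapsed seconds
    start = time.perf_counter()
    for _ in range(rounds):
        zlib.crc32(buf)
    return time.perf_counter() - start

header_t = crc_seconds(HEADER)
data_t = crc_seconds(DATA)
print(f"header digest: {header_t:.3f}s  data digest: {data_t:.3f}s  "
      f"(~{data_t / header_t:.0f}x more work per PDU)")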
Performance impact of IPSec signing and encryption
For most deployments, if iSCSI storage networks are implemented on a non-routed
VLAN that is separate from front-end traffic, additional security measures beyond
LUN masking and Challenge-Handshake Authentication Protocol (CHAP)
authentication are not usually needed. Extremely security-conscious
environments might opt to encrypt all of their storage network traffic. Because
iSCSI is an IP-based protocol, all of the security features that are available to IP
networks in general are applicable to IP storage networks.
IPSec is currently the most widely deployed encryption mechanism for IP
networks. IPSec can utilize a number of different encryption algorithms, including
the Data Encryption Standard (DES) and the Advanced Encryption Standard
(AES). IPSec also has an Authentication Header (AH) mechanism that uses a
hash function to ensure that static portions of the packet header are not
manipulated in transit. IPSec can uniquely identify each party in an exchange with
a certificate issued by a Certificate Authority (CA) and the associated public and
private key pair managed through a Public Key Infrastructure (PKI).
The encryption process of IPSec applied to iSCSI is very computationally
expensive. When IPSec is implemented in software on the initiator and
target, the overhead of IPSec processing can reduce throughput by as much as
a factor of 10. Hardware assistance is strongly recommended for sites that want
to protect their traffic in transit, but a performance penalty still exists compared
with unencrypted traffic. Devices capable of wire-speed encryption for a Gigabit
Ethernet network are currently prohibitively expensive for most environments.
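To put the scale of the penalty in rough numbers, the back-of-the-envelope sketch below applies the factor-of-10 figure quoted above to a Gigabit Ethernet link; the hardware-assist factor is purely an assumption, and real results depend on the CPU, the cipher, and the offload device.

WIRE_SPEED_MBPS = 1000        # Gigabit Ethernet, ignoring protocol overhead
SOFTWARE_IPSEC_FACTOR = 10    # worst-case software slowdown quoted above
HW_ASSIST_FACTOR = 1.3        # hypothetical penalty with offload hardware

def effective_throughput(wire_mbps, slowdown):
    # Crude estimate of throughput once encryption becomes the bottleneck
    return wire_mbps / slowdown

print("software IPSec :", effective_throughput(WIRE_SPEED_MBPS, SOFTWARE_IPSEC_FACTOR), "Mb/s")
print("hardware assist:", effective_throughput(WIRE_SPEED_MBPS, HW_ASSIST_FACTOR), "Mb/s")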
Tuning IBM N series iSCSI target settings
The iSCSI software target has only a few tunable parameters. The most
important of these parameters is iscsi.iswt.max_ios_per_session, which is
shown in Example 3. This setting has a default value that ranges from 32 to 128,
based on the filer platform, and can be configured up to a maximum value of 256.
In certain instances, increasing the value can increase throughput if the storage
processor is not fully utilized.
Example 3 Setting and verifying the iscsi.iswt.max_ios_per_session option
to check current setting type 'options iscsi.iswt.max_ios_per_session' &
press Enter
itsotuc3> options iscsi.iswt.max_ios_per_session
iscsi.iswt.max_ios_per_session 64
to change value type
itsotuc3> options iscsi.iswt.max_ios_per_session 128
to verify change
itsotuc3> options iscsi.iswt.max_ios_per_session
iscsi.iswt.max_ios_per_session 128
Another performance setting to consider is iscsi.iswt.tcp_window_size, which
is shown in Example 4. This setting affects the performance of the iSCSI
software target (ISWT) driver: it defines the filer's receive TCP window size
for all iSCSI connections that involve this driver. For best performance, set the
value of this option according to your network configuration.
Adjusting this option beyond the default setting might improve performance when
using certain iSCSI initiators. You need to take the latency of the underlying
network into consideration.
Example 4 Setting and verifying the iscsi.tcp_window_size option
to check current setting type 'options iscsi.tcp_window_size' & press Enter
itsotuc3> options iscsi.tcp_window_size
iscsi.tcp_window_size         131400
to change value type
itsotuc3> options iscsi.tcp_window_size 132400
to verify change
itsotuc3> options iscsi.tcp_window_size
iscsi.tcp_window_size         132400
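One common rule of thumb for sizing a receive window to the network is the bandwidth-delay product (bandwidth multiplied by round-trip time). The sketch below computes it for a Gigabit Ethernet path at a few assumed round-trip times; the default of 131400 bytes shown in Example 4 is in the same range as the 1 ms figure.

def bandwidth_delay_product(bandwidth_bps, rtt_seconds):
    # Bytes in flight needed to keep the link full: bandwidth x round-trip time
    return int(bandwidth_bps * rtt_seconds / 8)

GIGABIT = 1_000_000_000   # bits per second

for rtt_ms in (0.5, 1.0, 2.0):   # assumed round-trip times on a storage LAN
    window = bandwidth_delay_product(GIGABIT, rtt_ms / 1000)
    print(f"RTT {rtt_ms} ms -> TCP window of about {window} bytes")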
Note: Changing iscsi.iswt.max_ios_per_session affects only new iSCSI
sessions. Initiators that are already connected are not affected unless they
start a new session.
Setting a lower value can also be useful in environments where iSCSI and other
protocols are used simultaneously (for example, in a situation where a company
uses an IBM N series storage system with Fibre Channel attached hosts for its
production environment and iSCSI-connected hosts for development). Lowering
the maximum I/Os per session throttles the iSCSI traffic, preserving low
response times for the production environment.
Attention: ONTAP Version 7.1 introduces two new options regarding the
parameters that we discuss in this paper:
- The iscsi.max_ios_per_session parameter sets a limit on the maximum
number of simultaneous I/O requests that can be outstanding on a single
iSCSI session.
- The iscsi.tcp_window_size parameter sets the TCP receive window size
that the filer advertises for iSCSI connections.
In previous versions of ONTAP, the options were different. The end result
should be the same. Current and previous options include:
ONTAP 7.1 option              Option prior to 7.1
iscsi.max_ios_per_session     iscsi.iswt.max_ios_per_session
iscsi.tcp_window_size         iscsi.iswt.tcp_window_size
The options prior to 7.1 were retained for backward compatibility: during an
upgrade to ONTAP 7.1, any value changed under the old name is preserved and is
accessible using the new option name. Unfortunately, this backward compatibility
introduces an issue. If the options are altered using their ONTAP 7.1 names, the
value does not persist across reboots, because the pre-7.1 options are retained
in the filer's registry, which is restored on reboot.
The workaround for this issue is to force the registry entries to be consistent
by setting both the old and the new options. For example, to set the maximum
I/Os per session to 48, use the following two commands:
itsotuc3> options iscsi.max_ios_per_session 48
itsotuc3> options iscsi.iswt.max_ios_per_session 48
To set the TCP receive window size to 91980, use these commands:
itsotuc3> options iscsi.tcp_window_size 91980
itsotuc3> options iscsi.iswt.tcp_window_size 91980
Conclusion
Since its standardization, the number of performance solutions that are available
for iSCSI has grown as rapidly as the deployment of iSCSI SANs. As the leader
in iSCSI storage, Data ONTAP® has been verified for interoperability and
performance with many vendors who offer iSCSI products and solutions. The
effectiveness of the solution depends on the individual products that you
choose, their features, and the level of testing that you do with IBM N series
storage systems. Specific performance expertise might be required during the
design and implementation of the storage solution.
Additional resources
You can find additional information from the following resources:
- Support for iSCSI Host Support Kits
  http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/supportresources?taskind=1&brandind=5000029&familyind=5329817
- Support for IBM System Storage™ and TotalStorage® products
  http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/storageselectproduct?brandind=5000029&familyind=0&continue.x=14&continue.y=11&oldfamily=0I
- IBM System Storage N series MPIO Support—Frequently Asked Questions, REDP-4213
  http://www.redbooks.ibm.com/abstracts/redp4213.html?Open
- IBM System Storage N series Data ONTAP 7.2 Block Access Management Guide for iSCSI and FCP
  http://www-1.ibm.com/support/docview.wss?uid=ssg1S7001526&aid=1
- Further information about IEEE standards
  http://www.ieee.org/portal/site/iportals
- IBM BladeCenter iSCSI SAN Solution, REDP-4153
  http://www.redbooks.ibm.com/redpapers/pdfs/redp4153.pdf
The team that wrote this paper
Alex Osuna is a Project Leader at the International Technical Support
Organization (ITSO), Tucson, Arizona Center. He writes extensively and teaches
IBM classes about all areas of storage. He has more than 28 years in the IT
industry, working for the U.S. Air Force, IBM, and Tivoli® in computer maintenance,
field engineering, service planning, the Washington Systems Center, business and
product planning, advanced technical support, and systems engineering,
focusing on storage hardware and software. He has more than 10 certifications
from IBM, Microsoft®, and Red Hat.
Gary R Nunn is a SAN support specialist based in Bristol, U.K. His role includes
support of the IBM DS4000™ products, IBM NAS, SVC, and SAN switches
across a broad range of customer environments. He has been in this role for
almost four years. His background is in electronics; he served 10 years with
the British Army specializing in radar support and repair. After leaving the British
Army, he worked as a support engineer for wireless networks for three years
before moving to hardware support for the Sequent® UNIX® based NUMA and
Symmetry systems and their associated SAN-attached storage. He performed
this role until 2003, when he moved to his current position.
Toby Creek is employed by Network Appliance™.
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrates programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to IBM for the purposes of
developing, using, marketing, or distributing application programs conforming to IBM's application
programming interfaces.
© Copyright International Business Machines Corporation 2007. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by
GSA ADP Schedule Contract with IBM Corp.
This document REDP-4271-00 was created or updated on April 12, 2007.
Send us your comments in one of the following ways:
- Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
- Send your comments in an email to:
redbook@us.ibm.com
- Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099, 2455 South Road
Poughkeepsie, NY 12601-5400 U.S.A.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Redbooks (logo)®
BladeCenter®
DS4000™
IBM®
Sequent®
System Storage™
Tivoli®
TotalStorage®
The following terms are trademarks of other companies:
Network Appliance, FilerView, Data ONTAP, and the Network Appliance logo are trademarks or registered
trademarks of Network Appliance, Inc. in the U.S. and other countries.
Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.