
Managing the information that drives the enterprise

STORAGE
SEARCHSTORAGE.CO.UK

essential guide to

SAN Implementation
Learn all about storage area networks, from the basics all the way to the intricacies of Fibre Channel for a virtual server environment.
INSIDE

Shared storage key for virtual servers
SAN basics for SMBs
SAN implementation
Avoiding storage bottlenecks
SAN switch configuration
FC SANs for virtual server environments
iSCSI for virtual server environments


The Web's best storage-specific information resource for IT professionals in the UK

editorial | antony adshead


Shared storage becomes key as virtual servers proliferate


If you haven't needed a SAN up until now, that's likely to change, with shared storage enabling the full range of capabilities in a server virtualisation environment.

SERVER VIRTUALISATION HAS changed a lot of things in the data centre, and it has certainly had quite an effect on storage. On the one hand, it has created greater demands on storage capacity. As development and application teams have gained the freedom to create new servers at will, new virtual machines have been created ever more frequently. On the other hand, the proliferation of virtual machines and their inherent flexibility have made providing storage in a virtual server world more complex than it used to be.

Once upon a time, servers were not born so frequently, and one server meant one app. So you provisioned the storage to a server and that was pretty much how it stayed. That's not the case anymore. Numerous virtual servers on one physical machine can mean throughput bottlenecks, which can affect application performance. And the movement of VMs from one physical machine to another means storage provisioning needs to keep track of such shifts.

At the same time, the demands of server virtualisation have made shared storage an almost essential requirement. OK, so it's not absolutely essential, but the flexibility inherent in a shared pool of storage makes life a lot easier in the expanding, fluid world of virtual servers. Consequently, as virtualised servers have become almost ubiquitous, so has shared storage, and the subject of SAN implementation in virtual server environments figures heavily in this Essential Guide.

In it you will find the basics of SAN array deployment, such as determining business drivers, sizing SAN requirements, planning for service levels and backup, and developing standard naming and operating procedures. There is also solid information on deploying storage networking, with advice on initial switch configuration, which topology to choose, zoning and masking, and the basic facts on fan-in and fan-out ratios.

Copyright 2011, TechTarget. No part of this publication may be transmitted or reproduced in any form, or by any means, without permission in writing from the publisher. For permissions or reprint information, please contact Mike Kelly, VP and Group Publisher (mkelly@storagemagazine.com).


Beyond these fundamentals there are also articles on deploying iSCSI and Fibre Channel SANs in virtual server environments. We cover the advantages of Fibre Channel, such as its performance and security, but also discuss drawbacks such as its cost and complexity. In the article on iSCSI for virtual server environments, you'll find not just a run-through of its benefits and drawbacks but also some good tips about whether to use software initiators inside virtual machines, being aware of potentially wiping iSCSI SAN provisioning data during server rebuilds, servers pointing to LUNs that they shouldn't, and why you should segregate iSCSI traffic on a separate VLAN.

Assuming you've gone through the implementation process, we also have a piece on SAN performance tuning that covers configuration of inter-switch links, tweaking bandwidth and traffic flow, monitoring specific VM demands, and setting NPIV and HBA queue depths.

There's no doubt server virtualisation has made storage more complex. Luckily, you'll find all the basics here to get you started in getting on top of it.
Antony Adshead is the bureau chief for SearchStorage.co.UK.


SearchStorage.co.UK Essential Guide to SAN Implementation

SAN basics: What you need to know before you buy a SAN

Before you buy a SAN, be sure to review these storage area network basics first. BY PIERRE DORION

THERE ARE MANY reasons why you may want to buy a SAN. Some benefits of a SAN include the following:

A SAN allows you to pool disk storage resources and allocate capacity to systems only as needed instead of allocating entire disks to systems.

With a SAN, storage is shared at the block level instead of at the file or directory level. This means that individual systems can format the disk space as needed or use raw volumes.

Some applications, such as Exchange 2010, are only supported on block-level storage.

Companies may want to use diskless systems (such as blades) to boot from the SAN.


STORAGE AREA NETWORK BASICS: WHAT IS A SAN?


The first storage area network basic concept you need to know is the general definition of a SAN. A SAN is shared block-level storage made available through a dedicated network. This is in contrast with network-attached storage (NAS), which is file-level storage accessed via a regular IP network. It is often confusing because both are accessed via a network; the important difference is that a SAN is exclusive to block-level storage. This means that disk storage is seen by a server as if it were local and not on a network. Also worth remembering is that a storage array by itself is not a SAN; a SAN is the combination of the shared storage array, the network and the other devices that connect it to the servers or hosts.

FIBRE CHANNEL VS. iSCSI SANs


There are two main types of SANs available: Fibre Channel (FC) and iSCSI, which are essentially protocols used to access block-level storage. Early SAN offerings focused on FC. When FC SANs were introduced, they mostly appealed to enterprise-class IT organisations because of their higher cost: an FC network requires special adapters called host bus adapters (HBAs), FC cables and FC switches.

iSCSI came after Fibre Channel and offers a much more affordable alternative to FC because it can leverage a regular IP network to connect the servers to the storage array. iSCSI SANs still require special adapters, or initiators, to convert the SCSI protocol to IP but do not require special network components or cabling. In fact, it's possible to use a regular network interface card (NIC) in conjunction with an iSCSI software initiator to implement an iSCSI SAN without special equipment. The only other thing you would need is a shared storage array that has iSCSI ports.

WHAT WORKS FOR SMBs?


Generally speaking, iSCSI SANs are more accessible to small- and medium-sized businesses (SMBs) than Fibre Channel SAN solutions. The main reason is the cost of equipment. There is also the question of acquiring new skills to manage the Fibre Channel network when an FC switch is used. Because of the difficulties FC SANs can present to small businesses, iSCSI SANs are more appealing to SMBs. Besides the requirement of a storage array and special adapters or initiator software, you can deploy an iSCSI SAN using a conventional high-bandwidth IP network while leveraging existing IP network skills.

BUYING YOUR FIRST SAN: A CHECKLIST


Follow this high-level list of components your SMB would need to deploy an iSCSI SAN:

A shared storage array with iSCSI ports.

Storage management software. This will help you to create and assign storage shares. Keep in mind that storage management software may or may not come with the storage array.

A high-bandwidth dedicated IP network segment. While iSCSI can run on a shared network with other IP traffic, a dedicated network segment is recommended to avoid performance degradation.

iSCSI hardware and software initiators (host adapters) for the servers that will share storage on the SAN.


OTHER SAN IMPLEMENTATION CONSIDERATIONS


When considering the implementation of a SAN, there are a few other storage area network basics that must be considered to ensure the solution will meet the objectives and requirements. Some of these elements include:

Number of hosts supported. The storage array must be able to support the number of servers that will share storage. This is dictated by the number of LUNs that can be created. For example, an entry-level device that only supports 14 LUNs will limit the number of servers that can share storage to 14.

Redundancy. Moving your data to centralised shared storage also brings the risk of losing access to all data in the event of a storage array failure. Selection criteria must therefore include internal component redundancy and the ability to replace parts without requiring an outage (hot-swappable).

Data protection. Very much aligned with the previous point, data backup and data protection must also be considered. While the solutions mentioned earlier come with data protection features such as snapshot or point-in-time copy capabilities, a proper offsite data protection scheme is still required. These arrays support local or remote replication, but from an SMB perspective, the cost of a second storage array can be prohibitive and hard to justify. In such a case, a traditional data backup might still be required for offsite data protection.

As with any other IT solution, developing an understanding of storage area network basics should come first, followed by research on products and features.
Pierre Dorion is the data centre practice director and a senior consultant with Long View Systems in Phoenix, Ariz. He specialises in business continuity and disaster recovery planning services and corporate data protection.

SAN implementation: Key steps in SAN deployment



A SAN implementation has a number of key steps. Learn about SAN deployment, SAN fabrics, how to size a SAN, how data is backed up on a SAN and questions surrounding SAN growth.


IF YOU'RE DEPLOYING storage area network technology for the first time, there are a number of key considerations and steps associated with a SAN implementation that you should follow. In this interview, SearchStorage.co.UK Bureau Chief Antony Adshead speaks with Steve Pinder, principal consultant at GlassHouse Technologies (UK), about completing the first SAN implementation in your data storage infrastructure. Pinder discusses SAN fabric topologies, the key planning stages of a SAN deployment and common pitfalls to watch out for to ensure a successful implementation.


SearchStorage.co.UK: What are the key stages when implementing a SAN for the first time?

Pinder: Firstly, it's critical to understand the business drivers behind the implementation of the SAN. Is the reason organic growth, or has there been a recent merger or acquisition? You must also take into account whether some or all of the current server estate is due to be refreshed, or whether it will be migrated to the new SAN. In most cases, the business drivers will cause the decision to implement a SAN, after which the following stages should take place.

Understand what the SAN will contain. Will it be for mission-critical hosts only, or will there be a variety of hosts aligned to performance and capacity? This decision will help to size the SAN correctly, minimising waste and future issues. If a large SAN is implemented and a small number of hosts are attached, this could be a waste of money. Whereas if a smaller SAN than required is implemented, it could run out of capacity quickly and cause a costly reconfiguration expense.

It's also necessary to decide if redundant fabrics are needed, how data on the SAN will be backed up and what classes of storage are needed. Each of these points needs to be dealt with so the correct hardware and software budgets can be signed off.

The next stage of the process is to decide whether you want to implement the solution yourself or invite third parties to carry out some or all of the work for you. A small-scale SAN with one or two switches is not immensely complex, but once you increase the number of ports beyond a certain level it's wise to get external advice. What also needs to be understood is the amount of time your IT department can devote to the project. If the implementation is complex, it may require large chunks of IT time, and this could be to the detriment of other key projects.

Once roles have been decided, a tender process would normally take place. This involves inviting vendors to propose a solution to fulfil the requirements provided and give appropriate costs for the implementation. Responses can then be analysed and the most appropriate taken forward. Allocation of work between the customer and vendor will then take place, and the installation will commence.


The first phase will probably be a pilot, followed by a full-scale rollout. During the SAN installation, new hosts and services will be built and commissioned. For hosts and services that need transferring to the new SAN, a migration schedule will need to be created and the necessary downtime negotiated for the services they provide. Further hosts will be implemented according to the project plan until its completion and sign-off.

Once the SAN is signed off to production, it's important that standard procedures are in place for SAN activities. These procedures can range from outage alerts to storage provision, and they will ensure the SAN runs as smoothly as envisioned during the design process.

SearchStorage.co.UK: What are the main pitfalls in a SAN installation to watch out for, and how can you ensure the process goes as smoothly as possible?

Pinder: Unless your implementation is very small or you are an experienced SAN professional, it's highly recommended to get some professional advice to help with your SAN rollout. This can range from a simple assessment to a full implementation service and will almost always prove to be good value for money and help prevent any pitfalls from becoming a reality. My advice would be to ensure you address the following five items:

Examine whether the SAN is sized properly. Director-class switches are much more expensive than edge switches but can enable a SAN implementation of 1,000 or more ports. They can also be purchased with a limited number of ports initially, which can be increased at a later date. Edge switches are cheaper but are not expandable, and an increased port requirement in the future could lead to contention and delays along inter-switch links and badly aligned storage.

Assign service levels to hosts. Not all hosts need the same service levels, and it's a waste of money to assign low-priority hosts to enterprise-class storage. All hosts connecting to a SAN should be allocated to a class of service and should be attached to the infrastructure that provides that service level. For example, it's pointless wasting an extra host bus adapter and SAN port dual-pathing a host when no one will notice if it's down for a week.

Examine whether the storage arrays have the required performance levels. All arrays have maximum capacity and performance limits, and it's important to aggregate hosts across these arrays so that one limit isn't reached far before the other. If an array has reached its performance limit yet has many terabytes of free storage, this is obviously a waste of money.

Ensure a backup plan is in place. When there's no SAN present, all backup traffic will traverse the LAN, and as the amount of backup traffic increases, this can lead to contention between backup traffic and network data. Implementing technologies that allow backup traffic to traverse the storage network instead of the LAN can remove this potential bottleneck and increase performance of the LAN.

Create standards and stick to them. Once a SAN grows beyond a certain point, its administration can become difficult. It is therefore imperative to agree on naming conventions during the design phase and to ensure they are adhered to. Zone names on the fabric, host aliases on the switches and host groups on arrays should all remain consistent. If standards are not implemented, administration becomes difficult, and it's only a matter of time before an outage is caused.
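A naming standard like the one described is easy to police automatically. As a minimal sketch, the check below assumes a hypothetical convention of the form z_<host>_<array> (lowercase alphanumerics and hyphens); the pattern and zone names are illustrative only, so substitute whatever convention your design phase agreed on.

```python
import re

# Hypothetical convention: "z_<host>_<array>", e.g. "z_websrv01_array1".
# Adjust the pattern to match your own agreed standard.
ZONE_NAME = re.compile(r"^z_[a-z0-9-]+_[a-z0-9-]+$")

def non_compliant_zones(zone_names):
    """Return the zone names that break the agreed convention."""
    return [name for name in zone_names if not ZONE_NAME.match(name)]

# Two ad hoc names fail the check:
zones = ["z_websrv01_array1", "TempZone", "z_dbsrv02_array1", "test123"]
print(non_compliant_zones(zones))  # ['TempZone', 'test123']
```

Running a check like this periodically against a fabric export catches drift before it becomes an administrative problem.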


SAN performance best practices: More ways to avoid bottlenecks

BY GEORGE CRUMP


There are several ways you can fine-tune and improve your storage area network (SAN). These tips cover topics such as using ISLs and understanding HBA queue depth to help you avoid storage bottlenecks.

WITH THESE FIVE TIPS, we take a look at how SAN performance and SAN efficiency improve with transparency, testing and a better understanding of the impact your data storage has on the rest of your system.

Tip 1. Understand how you're using ISLs


Inter-switch links (ISLs) are critical areas for tuning and, as a SAN grows, they become increasingly important to performance. The art of fine-tuning an ISL is often an area where different vendors will have conflicting opinions on what a good rule of thumb is for switch fan-in configurations and the number of hops between switches. The reality is that the latency between switch connections is dramatically lower than the latency of mechanical hard drives, even negligible; however, in high fan-in situations or where there are a lot of hops (servers crossing multiple switches to access data), ISLs play an important role.

The top concern is to ensure that ISLs are configured at the correct bandwidth between the switches; getting this wrong is a surprisingly common mistake. Beyond that, it's important to measure traffic flow between hosts and switches, and ISL traffic between switches. Switch reporting tools will provide much of this information, but you may prefer a visual tool that measures switch intercommunication. Based on traffic measurements, a determination can be made to rebalance traffic flow by adjusting which primary switch the server connects with, which will involve physical rewiring and potential server downtime. Another option is to add ISLs, which increases bandwidth but consumes ports and, to some extent, further adds to the complexity of the storage architecture.
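To make the measurement step concrete, the sketch below flags ISLs whose observed traffic runs close to their configured bandwidth. The link names, capacities and 80% threshold are illustrative assumptions, not figures from any particular vendor's reporting tool; feed it whatever your switch reports export.

```python
def overloaded_isls(isl_stats, threshold=0.8):
    """isl_stats: list of (name, configured_gbps, observed_gbps) tuples.
    Return the names of ISLs running above the utilisation threshold."""
    return [name for name, capacity, observed in isl_stats
            if observed / capacity > threshold]

# Illustrative figures: a pair of 8 Gbps ISLs between two switches.
stats = [("sw1-sw2_isl0", 8.0, 7.1),   # ~89% utilised: rebalancing candidate
         ("sw1-sw2_isl1", 8.0, 2.3)]   # ~29% utilised: fine
print(overloaded_isls(stats))          # ['sw1-sw2_isl0']
```

A result like this suggests either moving hosts to the less-loaded primary switch or adding an ISL, with the port-consumption trade-off noted above.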

Tip 2. Use NPIV for virtual machines

Server virtualisation has changed just about everything when configuring SANs, and one of the biggest challenges is to identify which virtual machines are demanding the most from the infrastructure. Before server virtualisation, a single server had a single application and communicated with the SAN through a single host bus adapter (HBA); now virtual hosts may have many servers trying to communicate with the storage infrastructure, all through the same HBA. It's critical to be able to identify the virtual machines that need storage I/O performance the most so that they can be balanced across the hosts, instead of consuming all the resources of a single host.

N_Port ID Virtualisation (NPIV) is a feature supported by some HBAs that lets you assign each virtual machine a virtual World Wide Name (WWN) that will stay associated with it, even through virtual machine migrations from host to host. With NPIV, you can use your switches' statistics to identify the most active virtual machines from the point of view of storage and allocate them appropriately across the hosts in the environment.
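Because NPIV gives each VM a stable virtual WWN, ranking VMs by storage traffic reduces to summing switch-port readings per WWN. This is a minimal sketch of that aggregation; the WWNs and Mbps readings are made-up illustrations, not output from any real switch.

```python
from collections import defaultdict

def busiest_vms(samples, top=3):
    """samples: iterable of (virtual_wwn, mbps) readings from switch stats.
    Sum traffic per NPIV WWN and return the heaviest talkers first."""
    totals = defaultdict(float)
    for wwn, mbps in samples:
        totals[wwn] += mbps
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Illustrative virtual WWNs and periodic readings:
readings = [("20:01:aa", 420.0), ("20:01:bb", 90.0),
            ("20:01:aa", 380.0), ("20:01:cc", 150.0)]
print(busiest_vms(readings, top=2))  # [('20:01:aa', 800.0), ('20:01:cc', 150.0)]
```

The top entries are the candidates to spread across physical hosts rather than leave contending for one HBA.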


Tip 3. Know thy HBA queue depth


HBA queue depth is the number of pending storage I/Os that can be outstanding against the data storage infrastructure. When installing an HBA, most storage administrators simply use the default settings for the card, but the default HBA queue depth setting is typically too high. This can cause storage ports to become congested, leading to application performance issues. If queue depth is set too low, the ports and the SAN infrastructure itself aren't used efficiently. When a storage system isn't loaded with enough pending I/Os, it doesn't get the opportunity to use its cache; if essentially everything expires out of cache before it can be accessed, the majority of accesses will then come from disk.

Most HBAs set the default queue depth between 32 and 256, but the optimal range is actually closer to 2 to 8. Most initiators can report on the number of pending requests in their queues at any given time, which allows you to strike a balance between too much and not enough queue depth.
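The congestion risk is simple arithmetic: the worst case is every attached host filling its queue at once against the storage port's own command-queue limit. The sketch below makes that check explicit; the 2,048-command port limit is an illustrative assumption, so check your array's documented figure.

```python
def port_oversubscription(hosts, hba_queue_depth, port_queue_limit=2048):
    """Rough sanity check: with `hosts` initiators each at the given HBA
    queue depth, can the storage port's command queue overflow?
    The default port limit is illustrative, not a vendor figure."""
    worst_case = hosts * hba_queue_depth
    return worst_case, worst_case > port_queue_limit

# 64 hosts at a default queue depth of 64 can flood a 2,048-command port...
print(port_oversubscription(64, 64))   # (4096, True)
# ...while a queue depth of 8 keeps the same fan-in comfortably inside it.
print(port_oversubscription(64, 8))    # (512, False)
```

This is why the low 2-to-8 range scales to realistic fan-in counts where the shipped defaults do not.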

Tip 4. Multipath verification


Multipath verification involves ensuring that I/O traffic is actually distributed across redundant paths. In many environments, multipathing isn't working at all, or the load isn't balanced across the available paths. For example, if one path is carrying 80% of its capacity and the other path only 3%, availability can suffer if an HBA or its connection fails, and application performance can be affected. The goal should be to ensure that traffic is balanced fairly evenly across all available HBA ports and ISLs.

You can use switch reports for multipath verification. To do this, run a report with the port WWNs, the port name and the Mbps, sorted by port name and filtered for an attached device type equal to server. This is a quick way to identify which links have balanced multipaths, which are currently acting as active/passive and which don't have an active redundant HBA.
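The classification step from such a report can be sketched in a few lines. This assumes you have already reduced the report to per-host traffic figures for each redundant path; the 25% spread tolerance is an arbitrary illustrative choice, not a standard.

```python
def multipath_status(path_mbps, tolerance=0.25):
    """path_mbps: traffic per redundant path for one host, e.g. [400, 380].
    Classify the host's multipathing from switch-report figures."""
    active = [m for m in path_mbps if m > 0]
    if len(active) <= 1:
        return "active/passive or failed path"
    spread = (max(active) - min(active)) / max(active)
    return "balanced" if spread <= tolerance else "imbalanced"

print(multipath_status([400.0, 380.0]))  # balanced
print(multipath_status([800.0, 30.0]))   # imbalanced
print(multipath_status([640.0, 0.0]))    # active/passive or failed path
```

Hosts in the last two categories are the ones to investigate before a path failure turns into an outage.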


Tip 5. Improve replication and backup performance


While some environments have critical concerns over the performance of a database application, almost all of them need to decrease the amount of time it takes to perform backups or replication. Both of these processes are challenged by rapidly growing data sets that need to be replicated across relatively narrow bandwidth connections within ever-shrinking backup windows. They're also the most likely processes to put a continuous load across multiple segments of the SAN infrastructure. The backup server is the most likely candidate to receive data that has to hop across switches or zones to get to it. All of the above tips apply doubly to backup performance. Also consider adding extra HBAs to the backup server and having ports routed to specific switches within the environment to minimise ISL traffic.
George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualisation segments.


SAN switching: How to configure a SAN switch

Learn how to configure a storage area network (SAN) switch, decide what fabric topology to implement, the importance of zoning and masking, and how to figure out fan-in and fan-out ratios.

DO YOU KNOW how to configure a storage area network (SAN) switch? In this Q&A from SearchStorage.co.UK, Bureau Chief Antony Adshead talks with Steve Pinder, principal consultant at GlassHouse Technologies (UK), about how to decide what fabric topology to implement, the importance of zoning and masking, and how to figure out fan-in and fan-out ratios.

SearchStorage.co.UK: What is involved in configuring a SAN switch?

Pinder: The answer to this question depends on whether you're starting a new SAN fabric or adding a switch to an existing one. If you're starting a new fabric, the configuration of the switch is much easier. All switches have a default setup and IP address. You'll need to connect to the IP address using a browser or command line and carry out some changes on the switch so it's configured correctly for the environment. For new switches, the only changes you must make are to configure the IP address, subnet mask and default gateway to allow you to connect to it via a browser or whatever transport you choose. All the default settings will work for the new fabric.

The first switch in the fabric is called the principal switch, and this switch holds the master database for fabric configuration. When other switches are added to the fabric, they download that information from the principal switch. All switches also have a domain ID, which can be statically configured or allocated from the principal switch. I tend to configure switches with static domain IDs so I can guarantee a particular domain ID will never be allocated to two different switches. If two switches have been allocated the same ID, this could cause fabric segmentation, outages in the fabric and denial of service to logical unit numbers (LUNs).

For best practice you should also ensure unused switch ports are disabled. This will prevent unauthorised devices logging into the fabric and causing disruption to traffic. This should be done following initial port testing on a switch you're about to add to the fabric, but before you add it to the fabric.


SearchStorage.co.UK: How do I decide what topology to implement?

Pinder: Before starting on SAN topology, it's important to say that when redundancy is required, the known best practice is to implement two SAN fabrics and have devices connected to both of them. This ensures that if a host is connected to both fabrics, it will still be able to operate effectively if there's a switch, HBA [host bus adapter] or even an entire fabric failure. In these answers I'm going to assume that if redundancy is required, then two identical SAN fabrics will be implemented.

There are a number of topologies that can be used when configuring a fabric, although there are three I'd recommend, depending on the size of the fabric and number of switches.

The single-switch fabric has one switch. Director-class switches can be purchased with hundreds of ports, although they're expensive compared with low-capacity switches such as those with 32 ports. The single-switch SAN offers the lowest possible latency between the host and its associated storage, as all devices are connected to the single switch.

As most SANs grow over time, it's likely that an organisation with a small SAN (possibly a single-switch SAN) will add more switches as the number of devices grows. This brings us to the second type of SAN, the mesh fabric. In this type of fabric, every switch is connected to every other switch via ISLs, or inter-switch links. In this configuration a host will have to go through a maximum of one ISL to get to the storage it uses. When using a mesh configuration it's favourable to group a host and its storage on the same switch so that the host will not have to traverse an ISL to get to its storage. As the mesh grows, the number of ISLs on a single switch grows at the rate of one for every additional switch. After a certain point there's little benefit to adding extra switches, as many of the additional ports are required for ISLs.

When you get to a large number of ports, the third type of fabric comes in: core-edge. This configuration uses a large switch at the core of the fabric, to which you would generally attach storage. Hosts are attached to smaller edge switches, which are also attached to the core via ISLs. This topology can grow to hundreds or thousands of ports while ensuring hosts only have to traverse a maximum of two switches to access storage. Hosts that require very low latency or very high throughput can be connected to the core.
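The point about ISLs consuming ports in a mesh can be put into numbers. In a full mesh of n switches, each switch gives up n-1 ports to ISLs (assuming a single ISL per switch pair, an illustrative simplification), so device-facing capacity stops growing proportionally:

```python
def mesh_usable_ports(switches, ports_per_switch=32):
    """Ports left for devices in a full-mesh fabric: every switch spends
    one port on an ISL to each other switch (single ISLs assumed)."""
    return switches * (ports_per_switch - (switches - 1))

# Doubling the switch count stops doubling the usable ports:
for n in (2, 4, 8, 16):
    print(n, mesh_usable_ports(n))
```

With 32-port switches, going from 8 to 16 switches adds only 72 usable ports despite doubling the hardware, which is exactly the point at which a core-edge design starts to pay off.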


SearchStorage.co.UK: What is zoning and masking, and why is it important? Pinder: Zoning is a procedure that takes place on the SAN fabric and ensures devices can communicate only with the devices they need to. Masking takes place on storage arrays and ensures that only particular World Wide Names [WWNs] can communicate with LUNs on that array. If the correct masking is applied to the storage array, there's no absolute necessity to configure zoning on the SAN, although using both zoning and masking is always to be recommended. There are two distinct methods of zoning that can be applied to a SAN: World Wide Name zoning and port zoning. WWN zoning groups a number of WWNs in a zone and allows them to communicate with each other. The switch port each device is connected to is irrelevant when WWN zoning
is configured. One advantage of this type of zoning is that when a port is suspected of being faulty, a device can be connected to another port without the need for fabric reconfiguration. A disadvantage is that if an HBA fails in a server, the fabric will need to be reconfigured for the host to reattach to its storage. WWN zoning is also sometimes called soft zoning. Port zoning groups particular ports on a switch or number of switches together, allowing any devices connected to those ports to communicate with each other. An advantage of port zoning is that you don't need to reconfigure a zone when an HBA is changed. A disadvantage is that any device can be attached to a port in the zone and communicate with any other device in the zone. My opinion is that neither is particularly superior to the other; in my experience the type of zoning used is generally determined by what a particular consultant or organisation has done in the past.
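The difference between the two zoning styles can be sketched as a toy lookup model (illustrative Python only; real zoning is configured in the switch fabric, and all WWNs and port numbers below are invented):

```python
# WWN ("soft") zoning: membership follows the device's World Wide Name,
# so a device keeps its access if it is recabled to a different port.
wwn_zone = {"10:00:00:05:1e:aa:bb:cc",   # hypothetical host HBA
            "50:06:01:60:41:e0:12:34"}   # hypothetical array port

def wwn_zone_allows(a: str, b: str) -> bool:
    return a in wwn_zone and b in wwn_zone

# Port zoning: membership follows the switch port, so swapping a failed
# HBA needs no rezoning, but anything plugged into a zoned port gets in.
port_zone = {("switch1", 4), ("switch1", 12)}

def port_zone_allows(a: tuple, b: tuple) -> bool:
    return a in port_zone and b in port_zone

# A replacement HBA (new WWN) is locked out of the WWN zone...
assert not wwn_zone_allows("10:00:00:05:1e:dd:ee:ff",
                           "50:06:01:60:41:e0:12:34")
# ...but whatever sits in port 4 is still admitted by the port zone.
assert port_zone_allows(("switch1", 4), ("switch1", 12))
```

The asymmetry in the two assertions is exactly the HBA-replacement trade-off Pinder describes.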

SearchStorage.co.UK: What do I need to know about fan-in and fan-out? Pinder: The fan-in ratio denotes the number of hosts connected to a port on a storage array. Many methods have been used to determine the optimum number of hosts connected to a storage port, but in my experience there are no hard-and-fast rules that yield an absolute number. My recommendation would always be to assess the throughput of each host you want to connect to a port, determine the maximum throughput of that port and add hosts such that the total throughput is slightly higher than the throughput of that port. It's very important, however, to ensure you have good utilisation statistics available to detect any period where the port is heavily utilised and could be causing a bottleneck in your SAN fabric. There are a number of reasons why it's difficult to give a host count as an optimum fan-out ratio. These include differing port speeds (a 4 Gbps port can obviously handle twice the throughput of a 2 Gbps port and will allow you to add roughly double the number of hosts) and multipathing: if a host has two HBAs, traffic will either be aggregated across the two HBAs in an active-active configuration, or all the traffic will go down one HBA and none down the other if the connection is active-passive. These scenarios have a big impact on how many hosts you can add to a particular port. In normal operating circumstances in a multipathing environment, you can connect double the [number of] HBAs to a particular port, as each will be doing half the work of the host. If, however, there's an issue with the SAN and a device has failed over from its active port to its passive port, the remaining ports may be required to carry twice the standard workload. This can cause poor performance if you oversubscribe hosts to storage ports.
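Pinder's throughput-based approach can be expressed as a small calculation. This is a hedged sketch: the 100 MB/s-per-Gbps payload figure is a common FC rule of thumb, and the 10% oversubscription is just one reading of "slightly higher".

```python
def max_hosts_per_port(avg_host_mb_s: float, port_speed_gbps: int) -> int:
    """Rough fan-in estimate: hosts per storage port, by throughput.

    Assumes roughly 100 MB/s of payload per 1 Gbps of FC line rate
    (8b/10b encoding) and loads the port slightly past its nominal
    throughput, as suggested in the answer above.
    """
    port_mb_s = port_speed_gbps * 100
    oversubscription = 1.10  # "slightly higher than the throughput"
    return int(port_mb_s * oversubscription // avg_host_mb_s)

# A 4 Gbps port accommodates roughly twice the hosts of a 2 Gbps port:
assert max_hosts_per_port(50, 4) == 2 * max_hosts_per_port(50, 2)
```

In an active-passive multipathing design you might double the HBAs per port as Pinder notes, while remembering that a failover can double the surviving port's load.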

Implementing Fibre Channel SANs in a virtual server environment

Eric Siebert discusses the pros and cons of implementing Fibre Channel SANs in a virtual server environment in this interview.

Fibre Channel (FC) SANs are a popular choice for virtual server environments. They offer good performance and security, and since many people already have them implemented in their environment, they often stick with the same technology for virtual server environments. However, Fibre Channel SANs are not right for everyone's virtual server platforms. They are expensive and require skilled staff to implement and administer. In this Q&A interview, Eric Siebert, a VMware expert and author of two books on virtualisation, discusses Fibre Channel SANs in virtual server environments. Learn about the advantages and disadvantages of implementing Fibre Channel SANs to support your virtual server platform, what steps to take to set up a Fibre Channel SAN correctly and what you should know before you choose a Fibre Channel SAN.

SearchStorage.co.UK: Fibre Channel is a popular choice when it comes to virtual server environments. What are the benefits of using Fibre Channel SANs to support a virtual server platform? Siebert: Typically, Fibre Channel is one of the best-performing and most secure of the storage technologies available today, so people go for it because they want its performance and the most possible I/O they can get. Fibre Channel
networks are also isolated in their own separate network environments, so they are potentially more secure than other types of storage, like NAS and iSCSI SANs, which use the LAN. Fibre Channel is commonly deployed in enterprise storage architectures, so a lot of people already have Fibre Channel SANs they can leverage. So, in a lot of cases, rather than having to implement something from scratch, they already have a Fibre Channel SAN available that they can expand on. Fibre Channel storage is block-level storage, but it can be used alongside other storage protocols like NFS for file-level storage. And if you want to do other things like boot from SAN, Fibre Channel SANs can do that. So you don't have to put disks in the server; it can just boot directly from the SAN and run the VM environment.

SearchStorage.co.UK: What disadvantages or complications can you run into with Fibre Channel SANs for a virtual server environment? Siebert: There are two big ones: cost and complexity. Typically, Fibre Channel SANs are the most expensive option available, so if you're looking to implement one from scratch it's really expensive, because you have to buy components made specifically for Fibre Channel: Fibre Channel cables, Fibre Channel host bus adapters (HBAs) for your servers, Fibre Channel switches and disk drives. The cost is high, but you are paying for performance. In terms of complexity, Fibre Channel SANs typically require specialised skill sets that your average server administrator wouldn't have.

SearchStorage.co.UK: How do you set up a Fibre Channel storage device to support virtual servers? What steps do you need to take to ensure everything runs smoothly? Siebert: It's pretty straightforward, but you have to be aware of things like speeds. Fibre Channel now runs at up to 8 Gbps, but there is still a lot of 4 Gbps Fibre Channel equipment around, and the fabric will only run at the speed of its lowest-spec component. So if you have a 2 Gbps or 1 Gbps card in the server but your switches and back-end storage are 4 Gbps or 8 Gbps, it's only going to operate at the lowest common speed of those components, so you want to make sure all your components run at the same speed to get the most value out of your environment. Also consider things like multipathing. You want to make sure you have redundancy, so typically you would implement multipathing all the way from the server, where you would have two HBAs, through two switches and then typically two controllers on the array, so that if any one component
breaks, you can always open a new path. If you implement multipathing and also use technologies like active-active, where both paths are active, you can get more performance that way. Overall, setting up a Fibre Channel SAN can be complicated, so make sure everything is set up and working properly; proper preparation is key to proper configuration.
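The "lowest common speed" behaviour Siebert describes, and the active-active gain from multipathing, can be sketched like this (a hypothetical Python model; speeds are in Gbps and the figures are illustrative):

```python
def path_speed(*hops_gbps: int) -> int:
    """An FC path is only as fast as its slowest hop: HBA, switch
    or array port."""
    return min(hops_gbps)

def multipath_speed(paths, active_active: bool = True) -> int:
    """Active-active aggregates all paths; active-passive uses one
    path at a time."""
    speeds = [path_speed(*p) for p in paths]
    return sum(speeds) if active_active else max(speeds)

# A 2 Gbps HBA drags an otherwise 8 Gbps fabric down to 2 Gbps:
assert path_speed(2, 8, 8) == 2
# Two active-active 4 Gbps paths (HBA -> switch -> controller) give 8:
assert multipath_speed([(4, 4, 4), (4, 4, 4)]) == 8
# The same pair run active-passive delivers only 4:
assert multipath_speed([(4, 4, 4), (4, 4, 4)], active_active=False) == 4
```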

SearchStorage.co.UK: Are there any requirements you need to know about before choosing a Fibre Channel SAN? Does this storage option require more experienced administrators?
Siebert: You need to know your requirements. If you have applications that need a specific amount of I/O, you need to know what that is to be able to size your SAN properly. You shouldn't go in blind and assume that just because a Fibre Channel SAN is fast it will work for you. You really need to do the homework. Do an assessment of your environment, figure out what your I/O requirements are and whether you have any redundancy requirements, and then figure out capacity as well. You need to know how much storage you need from that Fibre Channel SAN to see whether it has the capacity for all of your virtual machines. Other requirements to consider are features you need on the SAN, such as snapshot capabilities and replication. Also, if you have a disaster recovery strategy, make sure you include your needs for that, because a lot of the time you can leverage storage features to move data from your main site to an alternate disaster recovery site. So you really need to assess your requirements and what kind of features and functionality you're going to need before you go out and buy one. Also, there are some new features for VMware, like the vStorage APIs for Array Integration [VAAI], that offload to the storage layer some tasks the hypervisor would traditionally have done. This can take load off the hypervisor and put it on the storage infrastructure, and that improves the performance of your virtual host. So the main advice is to look around, evaluate, ask questions, get references, be sure of your requirements and go from there to make a decision, and hopefully you can find the right Fibre Channel solution that meets your needs.
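Siebert's "do the homework" assessment amounts to summing per-VM requirements against the proposed array. A hedged sketch (the VM figures and the 20% growth headroom are invented, not vendor guidance):

```python
def san_meets_requirements(vms, san_iops: int, san_capacity_tb: float,
                           headroom: float = 0.2) -> bool:
    """Check a candidate SAN against aggregate VM I/O and capacity needs.

    `vms` is a list of (iops, capacity_tb) pairs gathered from an
    assessment of the environment; `headroom` leaves room for growth.
    """
    need_iops = sum(iops for iops, _ in vms) * (1 + headroom)
    need_tb = sum(tb for _, tb in vms) * (1 + headroom)
    return san_iops >= need_iops and san_capacity_tb >= need_tb

workload = [(500, 0.5), (1200, 2.0), (300, 0.25)]  # hypothetical VMs

# An array with plenty of I/O and capacity passes; one short on IOPS
# fails even though its capacity is fine.
assert san_meets_requirements(workload, san_iops=5000, san_capacity_tb=5)
assert not san_meets_requirements(workload, san_iops=2000, san_capacity_tb=5)
```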

Using iSCSI for virtual server environments


In this Q&A interview, Mike Laverick discusses the pros and cons of using iSCSI for virtual server environments. Learn how to implement and configure iSCSI and more.

USING ISCSI FOR your virtual server environment has several big advantages. The

implementation of iSCSI storage devices is straightforward, and users get the benefits of a file system without the hassle of setting up Fibre Channel (FC). But iSCSI users can run into IP-related problems and face other issues, especially if they choose to use software-based iSCSI stacks. And many people think of iSCSI as the underdog to Fibre Channel. But it can work well in many virtual server environments; users simply need to be aware of the ins and outs of iSCSI, how it works and what it can do for their environments. In this Q&A interview, Mike Laverick, a VMware expert, discusses the pros and cons of using iSCSI for virtual server environments. Find out the proper steps to take when implementing iSCSI in your virtual environment, how to configure iSCSI in vSphere and how to test the performance of an iSCSI storage device.

SearchStorage.co.UK: What are the advantages of using iSCSI for virtual server environments? Laverick: I think the big advantage is that theoretically it should be much simpler in comparison to, say, Fibre Channel, because there are no WWNs, zoning or special Fibre Channel switches that need to be acquired. There's no complicated masking that needs to be done to present the storage, and what it means for people using virtual server environments is that they get all the benefits of the vendors' file systems, such as VMware's VMFS, without the hassles associated with setting up Fibre Channel. I think the other big advantage is that because iSCSI works both at the hypervisor level and in the guest operating system, there is scope for using an iSCSI initiator actually inside a virtual machine (VM), where you might find you get better performance for raw I/O to the LUN. But, more importantly, you might be able to break through some of the limits that currently surround VMware's hypervisor, where the maximum-sized RDM [raw device mapping] or the maximum virtual disk is still just 2 TB. If you, for example, run the iSCSI initiator inside a Windows VM, then the rules that govern the size of the partition are Microsoft's GPT [GUID Partition Table] tables, which means you can go way beyond the 2 TB limit that a lot of people are restricted by. The last big advantage of iSCSI is that people tend to run their hypervisors in clusters, and they tend to keep one cluster separate from another for security reasons and for simplicity, but there are often files you want to share between clusters, such as templates. And with iSCSI just being IP, it's much easier to selectively present ancillary LUNs that maybe hold templates to your whole range of clusters without them necessarily being duplicated.

SearchStorage.co.UK: What are the disadvantages of using iSCSI for virtual server environments? Laverick: I think we have to remember that it's still IP; it's still Ethernet-based; it's still TCP [packets] going across the wire. So in my experience the problems you tend to have are IP-related. It's something that shouldn't happen but does, and it ranges from things like IP conflicts to bad addressing. People put the wrong subnet mask in, or they go across a router and have the wrong default gateway entered. I think another disadvantage is that most customers use the software-based iSCSI stacks that exist in VMware ESX, Citrix Xen or Hyper-V. The problem with that is that it's in software, so if you end up wiping that server and rebuilding it, you have to put all that configuration back again. In fairness, what customers could do is use an iSCSI HBA [host bus adapter] from the likes of Cisco, but they aren't cheap. And so, very often people use software-based
initiators, and that can cause some hassle when you come to rebuild an environment. Very often people look down on doing upgrades and say, "Let's just wipe the system and rebuild." But when you do that, you have to put all your storage configuration back into the hypervisor to make it work, whereas with something like Fibre Channel, the Fibre Channel environment exists externally to the hypervisor's stack, so you can yank the system out, put a new version in and the LUNs will still appear. I often get questions from customers asking whether they should use Fibre Channel, iSCSI or NFS, and I often sit there and list the advantages and disadvantages of all three, but I often feel there isn't one with a clear lead in every single case. The final disadvantage of iSCSI, I would say, is that you can get some weirdness going on when you do re-scans. For example, when you take a LUN away from a server and do a re-scan, I've often found the LUN still there despite the fact that the server shouldn't have any rights or privileges to it, and that's because of the way TCP sessions are kept open to improve performance and shared between multiple LUNs. It's actually quite difficult to cleanly de-present a LUN. So you can have oddness where you know the server doesn't have access to a LUN but it thinks it does, and it eventually gets cleared an hour or two later as the TCP session is brought down or re-established.

SearchStorage.co.UK: What steps do you have to take to implement iSCSI in a virtual environment? Laverick: I think the implementation of iSCSI should be relatively straightforward, because you should already have the pieces in place with the Ethernet network to actually allow the connectivity to take place. But to get into a bit more detail, I think most customers will want to VLAN off their iSCSI communications into a separate VLAN, or even a separate physical switch if they can afford that. People have to remember that with iSCSI [on] TCP port 3260, no iSCSI communication is encrypted. There's no security in iSCSI against lifting data off the wire. So you have to be pretty careful about making sure nobody can tap into that side of the network and literally just steal a packet capture of your data. The other nice-to-have is 10 Gbps Ethernet throughout, from the hypervisor up to the storage. Yes, you can get away with 1 Gbps Ethernet interfaces, and
of course you can do multipathed I/O with bundled 1 Gbps Ethernet interfaces. But there's a level of complexity there which is perhaps a little bit undesirable. For a customer coming to iSCSI for the first time, it's an ideal time to ask, "Is it time to move from 1 Gbps Ethernet to 10 Gbps Ethernet?" rather than doing that midway through the lifetime of that commitment. I guess the last thing about implementing iSCSI is that it has its own naming convention, something called the IQN: the iSCSI Qualified Name. So I think most companies would have to sit down and briefly think about what they're going to use as their IQN, what standard they're going to adopt for it. There are some conventions around the IQN you can follow, but there are some parts of it that you might want to make unique to your business. And the reason that's important is all about security: whether a host can see a LUN or not is down to the IQN in most cases. So if you establish a poor convention, it's quite difficult, a pain in the butt, to have to go back and re-establish and standardise the IQN. And going back to the disadvantages of iSCSI, whenever I've had a problem with iSCSI in my own lab environment, it's because I've messed up an entry in the IQN; I've typed it incorrectly. So it's a fat-finger syndrome that seems to apply more with IP storage (iSCSI storage and NFS) than perhaps it does with Fibre Channel.
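An IQN convention can be enforced up front to catch the fat-finger errors Laverick mentions. A hedged Python sketch using the iqn.yyyy-mm.reversed-domain:identifier layout from RFC 3720 (the domain and host names are invented, and the regex is a loose sanity check, not full RFC validation):

```python
import re

def make_iqn(year_month: str, reversed_domain: str, host_id: str) -> str:
    """Compose an IQN in the conventional RFC 3720 shape."""
    return f"iqn.{year_month}.{reversed_domain}:{host_id}"

# Date field, naming authority, then a colon-separated unique part.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+:[A-Za-z0-9.:-]+$")

def looks_like_iqn(name: str) -> bool:
    return IQN_RE.match(name) is not None

assert looks_like_iqn(make_iqn("2011-03", "uk.co.example", "esx01"))
assert not looks_like_iqn("iqn.2011-3.uk.co.example:esx01")  # mistyped month
```

Running a check like this before an IQN reaches the array's masking configuration is far cheaper than re-standardising the convention later.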

SearchStorage.co.UK: How do you configure iSCSI in vSphere? Can you explain those steps? Laverick: In a nutshell, the first thing you need to do is make sure you've got a virtual switch with at least two network cards backing that virtual switch, with what's called a VMkernel port group: literally, an IP address that will allow the ESX host to speak to the iSCSI system. And in ESX 4, a new configuration was introduced to allow improved multipathing to that storage, so that you get a load on both network cards, or more if you have them. That load balancing is less important if you're going down the 10 Gbps Ethernet route; your multiple NICs then are really just there for redundancy as opposed to any load balancing. The next stage would be to enable the iSCSI stack in ESX. When you click properties and enable it, it will create an IQN for you, but I think most customers change that to fit their standard. It creates a kind of alias for the device, with a name like vmhba34 or vmhba40, so it actually looks like a physical iSCSI HBA, but really what's backing it is a VMkernel port with Ethernet cards behind it. Once it's enabled, there's a little tab where you can type in the IP addresses of the iSCSI target. The last thing to check is to ask yourself: Does the iSCSI system support
CHAP authentication, and has it been enabled? CHAP is just an authentication protocol that means you don't only type in the IP address to connect to the iSCSI system; you also need to know the password to connect to that system. In fairness, if you're VLAN-ing stuff off into a private network, most customers don't use CHAP at all. But if you're doing a re-scan and you're not getting the LUNs back, there are basically three reasons why that could be: the wrong IP address, the wrong IQN or ... CHAP [hasn't] been enabled. It depends on what your environment is like. If you're in charge of both the VMware and the hardware, you're more likely to find that everything fits together. But if you have a storage team separate from the VMware team, there may be a lack of communication about the appropriate standards and what's needed to make the thing work. So a lot depends on your environment and what's being implemented. But I think, in a nutshell, that's what we would do in VMware to enable iSCSI. If you've been doing it for a while, it's less than a couple of minutes per server, and in my own environment I have it all scripted, so if I build a new server, the scripts enable the iSCSI stack and configure the network. So, going back to the problem of wiping a server, rebuilding it and having to put everything back: if you go down the route of scripted installs, you just rerun your scripts and everything is rosy in the garden again.
SearchStorage.co.UK: How do you test the performance of an iSCSI storage device? Are there any specific tools you would recommend for this? Laverick: I guess there are two sides to that in terms of tools. There are ones that are vendor-specific, whether that's Citrix, VMware or Microsoft, and then there are more generic ones. From the VMware perspective, there are tools like esxtop and vscsiStats. vscsiStats is probably the better of the two because it focuses on a particular VM and what its I/O is. So if you're trying to troubleshoot why a particular VM is having trouble accessing a particular type of storage, it's a good one to look at. On a more generic level, there are tools like Iometer, which allows you to generate a fake disk load in a virtual machine, and then you can use that fake load to get a baseline of what is to be expected. On a macro level, both VirtualCenter and Microsoft SCVMM (System Center Virtual Machine Manager) have charts that show you what I/O is like. But I think you're better off with command-line tools, because they will give you a second-by-second line speed (the number of megabytes per second being read or written by the system), which can be helpful in troubleshooting. I think what I would say about performance is that it's all about expectations. You should expect the same level of performance from iSCSI as you would
get out of Fibre Channel. So don't expect an iSCSI system to be slower than Fibre Channel. I think one of the problems the industry has generally is this very mistaken notion that Fibre Channel represents the racehorse of storage, iSCSI represents the next step down and NFS represents the donkey of storage. All of that is garbage. I've got customers who use NFS in their environments, and because it's specced appropriately it can outperform the equivalent Fibre Channel. In terms of protocols (FC, iSCSI, NFS), they're only really as good as their hardware: the number of spindles you have, and so on. What we're really getting out of these protocols are different features and different advantages. You need to look at those advantages and disadvantages and weigh them for your business. Which one is the best fit? You might find yourself using a combination: Fibre Channel with NFS, or iSCSI with NFS. It's rare that I see Fibre Channel and iSCSI together, because they offer very similar features, but people often want a combination of block-based storage (Fibre Channel or iSCSI) and something that's NFS- or CIFS-based, and that's what leads them down the route of NAS-based technologies. Try not to see it as an either/or choice between these protocols. Most of the array vendors (I've got NetApp, Dell EqualLogic and EMC in my environment) support all three protocols, so really it's up to you to look at the features and say, "Well, this is the more appropriate one for my particular usage case."
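The second-by-second line speed Laverick prefers can be derived from any pair of cumulative byte counters, whatever tool exposes them (a trivial sketch; the counter values are invented):

```python
def throughput_mb_s(bytes_before: int, bytes_after: int,
                    interval_s: float) -> float:
    """MB/s read or written over a sampling interval, computed from
    two readings of a cumulative byte counter."""
    return (bytes_after - bytes_before) / interval_s / 1_000_000

# 1.5 GB moved during a 10-second Iometer run is a 150 MB/s baseline:
assert throughput_mb_s(0, 1_500_000_000, 10) == 150.0
```

Comparing such a baseline taken under a synthetic Iometer load against the figure seen under real VM traffic is one simple way to tell a storage bottleneck from an application problem.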
