White Paper
EMC MULTI-TENANT FILE STORAGE SOLUTION
Multi-Tenant File Storage with EMC VNX and
Virtual Data Movers
 Provide file storage services to multiple tenants from a single array
 Monetize investments in existing VNX storage capacity
 Realize ROI sooner and reduce storage TCO
Global Solutions Sales
Abstract
This white paper explains how Virtual Data Movers (VDMs) on EMC®
VNX® systems can be configured and leveraged to provide multiple CIFS
and NFS endpoints. This allows service providers to offer multiple file
system containers to multiple tenants on a single or multiple physical
EMC VNX storage arrays.
June 2013
Copyright © 2013 EMC Corporation. All Rights Reserved.
EMC believes the information in this publication is accurate as of its
publication date. The information is subject to change without notice.
The information in this publication is provided “as is.” EMC Corporation makes
no representations or warranties of any kind with respect to the information in
this publication, and specifically disclaims implied warranties of
merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.
All trademarks used herein are the property of their respective owners.
Part Number H12051
Table of contents
Executive summary
    Business case
    Solution overview
    Key results and recommendations
Introduction
    Purpose
    Scope
    Audience
    Terminology
Technology overview
    EMC VNX series
        Virtual Data Movers
        Physical Data Movers
        EMC Unisphere
Solution architecture and design
    Architecture overview
    Hardware components
    Software components
    Network architecture
        EMC VNX57XX network elements
    Design considerations
Solution validation
    Objective
    Test scenario
    Server and storage configuration
    VDM configuration
        Storage pool configuration
        Create a VDM
        Create a user file system
        Create a mount point
        Check VDM status
        Create the VDM network interface
        Attach a VDM interface to the VDM
    File system configuration
        Mount the file system to a VDM
        Export the file system to server hosts
    VDM configuration summary
    Scripted deployment
    Cloud platform-attached file systems
        VMware ESXi 5.1 NFS data stores
    Test procedures
        Use IOzone to generate I/O
    Test results
        Physical Data Mover high availability
        VNX Data Mover load
Conclusion
    Summary
References
Executive summary
Business case
Multi-tenancy within private and public clouds includes any cloud architecture or
infrastructure element within the cloud that supports multiple tenants. Tenants can
be separate companies or business units within a company.
To provide secure multi-tenancy and address the concerns of cloud computing,
mechanisms are required to enforce isolation of user and business data at one or
more layers within the infrastructure. These layers include:
• Application layer: A specially written multi-tenant application, or multiple,
  separate instances of the same application, can provide multi-tenancy at this
  layer.
• Server layer: Server virtualization and operating systems provide a means of
  separating tenants and application instances on servers, and controlling
  utilization of and access to server resources.
• Network layer: Various mechanisms, including zoning and VLANs, can be used
  to enforce network separation.
• Storage layer: Mechanisms such as LUN masking and SAN zoning can be used
  to control storage access. Physical storage partitions segregate and assign
  resources into fixed containers.
Achieving secure multi-tenancy may require the use of one or more of these
mechanisms at each infrastructure layer.
Solution overview
This white paper focuses on how to enforce separation at the network and storage
layers to allow cloud providers and enterprises to deploy multi-tenant file storage on
EMC® VNX® storage arrays. The deployment of multi-tenant file storage within the
EMC VNX storage platform can act as an enabler for cloud providers and enterprise
businesses to offer File System-as-a-Service to their customers or business units.
The solution described in this white paper uses EMC VNX unified storage and Virtual
Data Mover (VDM) technology, which enables logical partitioning of the physical
resources of the VNX into many “containerized” logical instances to serve multiple
NAS tenants.
Key results and recommendations
This solution enables private and public cloud providers that sell or support
cloud storage services (ITaaS) to host multiple NAS file storage environments on
one or more physical EMC VNX storage platforms.
Cloud storage providers who want to offer a choice of multi-tenant NAS file storage
services from multiple storage vendors can now offer EMC VNX file storage to multiple
tenants.
Investments in existing VNX storage capacity can be monetized further by hosting
multiple tenants on a single storage platform, helping to accelerate the return on
investment (ROI) and reduce the total cost of ownership (TCO) of storage.
Introduction
Purpose
The purpose of this white paper is to provide the necessary level of detail for the
design and deployment of multiple secure file systems within the Data Mover and VDM
constructs of the EMC VNX storage platform, enabling public and private cloud
providers to standardize multi-tenant file storage.
Scope
Throughout this white paper, "we" refers to the EMC engineering team that validated
the solution. We assume that you have hands-on experience with the EMC VNX storage
platform, including the CLI, and familiarity with EMC Unisphere®. You should also
have a good understanding of networking fundamentals and an overall grasp of the
concepts related to virtualization technologies and their use in cloud and data
center infrastructures. Detailed configuration and operational procedures are
outlined, along with links to other white papers and documents.
Audience
This white paper is intended for EMC employees, partners, and customers, including
IT planners, system architects and administrators, and anyone else interested in
deploying file storage to multiple tenants on new or existing EMC VNX storage
platforms.
Terminology
Table 1 shows the terminology used in this white paper.
Table 1. Terminology
Term
Definition
802.1Q Trunk
A trunk port is a network switch port that passes traffic
tagged with 802.1Q VLAN IDs. Trunk ports are used
to maintain VLAN isolation between physical
switches or compatible network devices, such as the
network ports on a storage array. An LACP port group
can also be configured as a trunk port to pass tagged
VLAN traffic.
Common Internet File
System (CIFS)
File-sharing protocol based on the Microsoft Server
Message Block (SMB) protocol that enables users to
access shared file storage over a network.
Data Mover
Within the VNX platform offering file storage, the Data
Mover is a hardware component that provides the NAS
presence and protocol support to enable clients to
access data on the VNX using NAS protocols such as
NFS and CIFS. Data Movers are also referred to as X-Blades.
Domain
Logical grouping of Microsoft Windows servers and
other computers that share common security and user
account information. All resources such as computers
and users are domain members and have an account
in the domain that uniquely identifies them. The
domain administrator creates one user account for
each user in the domain, and the users log in to the
domain once. Users do not log in to each server.
LACP
High-availability feature based on the IEEE 802.3ad
Link Aggregation Control Protocol (LACP) standard,
which allows Ethernet ports with similar characteristics
on the same switch to be combined into a single logical
port, or link, with a single MAC address and potentially
multiple IP addresses. This feature is used to group
ports into a logically larger link with aggregated
bandwidth.
Lightweight Directory
Access Protocol (LDAP)
Industry-standard information access protocol. It is the
primary access protocol for Active Directory and for
LDAP-based directory servers. LDAP version 3 is defined
in Internet Engineering Task Force (IETF) RFC 2251.
Network file system (NFS)
A network file system protocol that allows a user on a
client computer to access shared file storage over a
network.
Network Information
Service (NIS)
Distributed data lookup service that shares user and
system information across a network, including
usernames, passwords, home directories, groups,
hostnames, IP addresses, and netgroup definitions.
Storage pool
Groups of available disk volumes organized by
Automatic Volume Management (AVM) that are used to
allocate available storage to file systems. They can be
created automatically by AVM or manually by the user.
Virtual Data Mover
An EMC VNX software feature that enables the
grouping of file systems, NFS endpoints, and CIFS
servers into virtual containers. These run as logical
components on top of a physical Data Mover.
VLAN
Logical networks that function independently of the
physical network configuration and are a means of
segregating traffic across a physical network or switch.
Technology overview
EMC VNX series
The VNX family of storage arrays is designed to deliver maximum performance and
scalability, enabling private and public cloud providers to grow, share, and
cost-effectively manage multiprotocol file and block systems. EMC VNX series storage
is powered by Intel processors for intelligent storage that automatically and
efficiently scales in performance, while ensuring data integrity and security.
Virtual Data Movers
A VDM is an EMC VNX software feature that enables the grouping of file systems, CIFS
servers and NFS endpoints into virtual containers. Each VDM contains all the data
necessary to support one or more CIFS servers and NFS endpoints associated with
their file systems. The servers in a VDM store their dynamic configuration information
(such as local users, local groups, shares, security credentials, audit logs, NS Domain
configuration files and so on) in a configuration file system. A VDM can then be
loaded (active state) and unloaded (mounted but inactive state), moved from Data
Mover to Data Mover, or replicated to a remote Data Mover as an autonomous unit.
The servers, their file systems, and configuration data are available in one virtual
container.
VDMs enable system administrators to group file systems and NFS server mount
points. Each VDM contains the necessary information to support one or more NFS
servers. Each VDM has access only to the file systems mounted to that VDM. This
provides a logical isolation between the VDM and NFS mount points.
Physical Data Movers
A physical Data Mover is a component within the VNX platform that retrieves data
from the associated disk storage and makes it available to a network client; the Data
Mover can use the CIFS and NFS protocols.
EMC Unisphere
EMC Unisphere is the central management platform for the EMC VNX series,
providing a single combined view of file and block systems, with all features and
functions available through a common interface. Figure 1 is an example of how the
properties of a Data Mover, named server_2, are presented through the Unisphere
interface on a VNX5700 system.
Figure 1. The server_2 Data Mover on the Unisphere interface on VNX5700
Solution architecture and design
Architecture overview
To validate the functionality and performance of VDMs on the EMC VNX series
storage, we implemented multiple VDMs to simulate a multi-tenant environment.
Each VDM was used as a container that included the file systems exported by the NFS
endpoint. The NFS exports of the VDM are visible through a subset of the Data Mover
network interfaces assigned to the VDM, as shown in Figure 2. The clients can then
access the Data Mover network via different VLANs for network isolation and secure
access to the data.
Figure 2. Architecture diagram
Within the EMC VNX57xx series used in this solution, the Data Movers and VDMs
have the following features:
• A single physical Data Mover supports the NFS services for different tenants,
  each with their own LDAP, NIS, and DNS configurations, by separating the
  services for each tenant into their own VDM.
• The file systems exported by each VDM are not accessible by users of other
  VDMs.
• Each tenant is served by a different VDM, addressed through a subset of the
  logical network interfaces configured on the Data Mover.
• The file systems exported by a VDM can be accessed by CIFS and by NFSv3 or
  NFSv4 over TCP. The VDM solution compartmentalizes the file system
  resources; consequently, only file systems mounted on a VDM can be exported
  by the VDM.
Hardware components
Table 2 lists the hardware components used in solution validation.
Table 2. Hardware components

Item                             Units   Description
EMC VNX5700                      1       File version: 7.1.56.5
                                         Block version: 05.32.000.5.15
Cisco MDS 9509                   2       Version 5.2.1
Cisco UCS B200 M2 Blade Server   4       Intel Xeon X5680, six-core processors,
                                         3.333 GHz, 96 GB RAM

Software components
Table 3 lists the software components used in solution validation.
Table 3. Software components

Item                    Version        Description
EMC Unisphere           1.2.2          Management tool for EMC VNX5700
VMware vCenter Server   5.1            2 vCPU, Intel Xeon X7450, 2.66 GHz, 4 GB RAM;
                                       Windows 2008 Enterprise Edition R2 (x64)
VMware vSphere          5.1            Build 799733
CentOS                  6.3            2 vCPU, Intel Xeon X5680, 2 GB RAM
Cisco UCS Manager       2.0(4b)        Cisco UCS server management tool
Plink                   Release 0.62   Scripting tool
IOzone                  3.414          I/O generation tool

Network architecture
A key component of the solution is the aggregation and mapping of network ports onto
VDMs. This makes use of industry-standard features of the EMC VNX Data Mover network
ports, which can tag and identify traffic belonging to a specific logical network or
VLAN. The tagged traffic is then effectively isolated between different tenants and
maintained across the network.
If multiple logical network connections are configured between the clients and the
VNX, the network traffic can be distributed and aggregated across the multiple
connections to provide increased network bandwidth and resilience. SMB3 clients,
such as Windows 8 and Windows Server 2012, detect and take advantage of multiple
network connections to the VNX natively. Similar benefit can be provided to NFS
clients by logically grouping interfaces with the LACP protocol.
When using LACP, traffic is distributed across the individual links based on a
distribution algorithm that is determined by the configuration on the EMC VNX and the
network switch. The most suitable algorithm should be selected based on how hosts
access and communicate with the storage. When configuring LACP, the choice of IP,
MAC, or TCP port-based traffic distribution should be made based on the relationship
of host to server; this involves examining how conversations occur in the specific
environment and whether any changes to the default policy are required. The default
policy is IP address-based traffic distribution.
Individual network interfaces and LACP port groups can also be configured as an
802.1Q trunk to pass 802.1Q tagged traffic. An 802.1Q tag identifies the logical
network, or VLAN, that a packet belongs to. By assigning multiple logical interfaces
to a trunk port, a different VLAN can be associated with each interface. When each
logical interface is configured for a different VLAN, a packet is accepted only if
its destination IP address is the same as the IP address of the interface and the
packet's VLAN tag matches the interface's VLAN ID.
The Layer 2 network switch ports for servers, including VNX, are configured to include
802.1Q VLAN tags on packets sent to the VNX. The server is responsible for
interpreting the VLAN tags and processing the packets appropriately. This enables
the server to connect to multiple VLANs and their corresponding subnets through a
single physical connection.
The example in Figure 3 shows how a physical Data Mover is configured to support a
tenant user domain in a VDM.
Figure 3.
VDM configuration within the physical Data Mover
In this example, we configured a VDM called VDM-Saturn, which represents a tenant
user. The logical VDM network interface for VDM-Saturn is named Saturn-if. On the
physical Data Mover we configured an LACP trunk interface, TRK-1, which uses two
10 Gb Ethernet ports, fxg-1-0 and fxg-2-0.
The trunk port TRK-1 was associated with VLAN A for access to its defined host
network, to enforce tenant and domain isolation. VLAN A was mapped to a VLAN ID on
the network switch to allow communication between the clients and the file system.
EMC VNX57XX network elements
Within the EMC VNX57XX series, file system access is provided through the network
ports on the physical Data Mover. The EMC VNX can support between two and eight Data
Movers, depending on the model, configured as either active or standby. A Data Mover
can be configured with a combination of quad-port 1 Gb and dual-port 10 Gb network
interface cards. Each network interface port supports the LACP and 802.1Q industry
standard features, allowing either VLAN trunk or host mode operation. Network
interfaces can also be combined using LACP to form logical links.
For more details on the networking aspects of the VNX platform, refer to Configuring
and Managing Networking on VNX.
Design considerations
The current VDM implementation has the following characteristics:
• The VDM supports the CIFS, NFSv3, and NFSv4 protocols over TCP. Other
  protocols, such as FTP, SFTP, FTPS, and iSCSI, are not supported.
• NFSv3 clients must support NFSv3 over TCP to connect to an NFS endpoint.
Several factors determine how many file systems can exist on a Data Mover: the
number of mount points, storage pools, and other internal file systems. The total
number of VDMs, file systems, and checkpoints cannot exceed 2,048 per Data Mover.
The maximum number of VDMs per VNX array is bounded by the maximum number of file
systems per Data Mover. Each VDM has a root file system, which consumes one object
from the total count, and any file systems created on the VDM reduce the total
further. Common practice is to create and populate each VDM with at least two file
systems (its root file system plus at least one user file system), which reduces the
theoretical maximum number of VDMs per Data Mover as follows:
2048 / 2 = 1024, minus 1 for the root file system = 1023
Although this theoretical limit is 1023, EMC currently supports a maximum of 128
VDMs configured on a physical Data Mover.
A physical Data Mover (including all the VDMs it hosts) does not support overlapping
IP address spaces. It is therefore not possible to host two different tenants that
use the same IP addresses or subnet ranges on the same Data Mover. Such tenants must
be placed on separate physical Data Movers.
In provisioning terms, when determining the physical Data Mover onto which to
provision a new tenant, and hence its VDM, the provisioning logic must determine
whether there is an IP address space conflict between the new tenant and the
existing tenants on that physical Data Mover. If there is no clash, the new tenant
can be provisioned to that Data Mover; if there is a clash, the new tenant must be
provisioned to a different Data Mover.
If a physical Data Mover crashes, all of its file systems, VDMs, IP interfaces, and other
configuration are loaded by the standby Data Mover and it takes over the failed Data
Mover’s identity. The result is that everything comes back online as if it were the
original Data Mover.
A manual planning exercise is required to balance workloads accurately across
physical Data Movers because, in the current implementation, there is no automated
load balancing of VDMs between physical Data Movers.
Solution validation
Objective
To validate this solution, the objective was to test the configuration of multiple
VDMs for NFS and to observe how they performed under I/O load. Specifically, NFS
data stores were configured on NFS file shares exported by the VDMs. We deployed
several open-source CentOS 6.3 virtual machines to generate I/O activity against
these data stores. The physical Data Mover was monitored to ensure that CPU and
memory utilization remained in line with the design specifications while multiple
VDMs were used for file access.
Test scenario
To simulate multi-tenant file systems, we configured multiple VDMs on a physical
Data Mover and exported the NFS file systems associated with a VDM to VMware ESXi
5.1 hosts. These hosts are assigned to different tenants who have access to file
storage from different networks and LDAP domains.
There were four VMware ESXi 5.1 hosts in the data center. Each host had data stores
backed by different NFS shares exported by its designated VDMs. Each tenant could
access only its designated file systems and NFS data stores; no tenant was permitted
any access to another tenant's file systems or NFS data stores in the same data
center.
Server and storage configuration
The server and storage configuration for this solution validation test consists of
two VDMs configured on a physical Data Mover for two different tenants, Tenant A and
Tenant B, as shown in Figure 4.
Figure 4. Server and storage topology for Tenant A and Tenant B
Each tenant had file access provided by its own VDM. These were named VDM-Saturn
and VDM-Mercury, and were attached to different network interfaces configured
within each VDM. By implementing LDAP and VLANs, each tenant can limit the file
access and maintain distributed directory information over the network.
You can configure one or more resolver domain names for a VDM; you must specify the
respective domain name and the resolver value. The VDM domain configuration includes
the NIS, LDAP, DNS, and NFSv4 domain specifications.
For more details on how to manage the domain configuration for a VDM, refer to
Configuring Virtual Data Movers on VNX 7.1.
In the following example, as shown in Figure 5, the VDM VDM-Saturn is configured to
provide file access to Tenant A and it is attached to Network A. The file system
Saturn_File_System is mounted in VDM-Saturn. In the same way, the NFS clients of
Tenant B have access to Mercury_File_System by mounting the NFS export to the IP
address associated with Network B.
Figure 5. Network interface to NFS endpoint mapping
To configure an NFS server to exclusively serve tenants for a particular naming
domain, the service provider and storage administrator must complete the following
tasks:
• Create a new VDM that houses the file system to export for the domain in
  question.
• Create the network interface(s) for the VDM.
• Assign the interface(s) to the VDM.
• Configure the domain for the VDM.
• Configure the lookup name service strategy for the VDM (optional; if it is not
  configured at the VDM, the services configured on the physical Data Mover are
  used).
• Mount the file system(s) on the VDM.
• Export the file system(s) over the NFS protocol.
The interfaces mapped between a Data Mover and a VDM are reserved for the CIFS
servers and the NFS server of the physical Data Mover.
The VDM feature allows separation of several file system resources on one physical
Data Mover. The solution described in this document implements an NFS server per
VDM, referred to as an NFS endpoint. The VDM is used as a container that includes the
file systems exported by the NFS endpoint and/or the CIFS server. The file systems of
the VDM are visible through a subset of the Data Mover network interfaces attached to
the VDM.
The same network interface can be shared by both CIFS and NFS protocols on that
VDM. The NFS endpoint and CIFS server are addressed through the network interfaces
attached to that VDM.
VDM configuration
The command line interface (CLI) must be used to create the VDMs, using nasadmin
or root privileges to access the VNX management console.
The following steps show how to create a VDM on a physical Data Mover for Tenant A
in a multi-tenant environment. To support multiple tenants, multiple VDMs are
required to provide file access. The procedure to create a VDM can be repeated for
additional tenant VDM creation as required.
Storage pool configuration
Before a VDM can be created on a Data Mover, a storage pool must be configured on
the VNX to store the user file systems. In this example, we configured a storage pool
named FSaaS-Storage-Pool. Its properties are shown in Figure 6.
Figure 6. Configuring a storage pool named FSaaS-Storage-Pool in Unisphere
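In this solution the pool was created in Unisphere. For scripted deployments, a
user-defined pool can also be created from the Control Station CLI; the following is
only a sketch, and the disk volumes d7 and d8 are placeholders rather than values
from the test environment:

nas_pool -create -name FSaaS-Storage-Pool -volumes d7,d8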
For more information on file systems, refer to Managing Volumes and File Systems
with VNX Automatic Volume Management.
Create a VDM
The VNX CLI command, in Figure 7, shows how to create VDM-Saturn which is used for
Tenant A file access on Data Mover server_2.
Figure 7. Creating the VDM named VDM-Saturn
When using default values, the VDM is created in a loaded state.
Note: The system assigns default names for the VDM and its root file system.
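For reference, the creation command takes roughly the following form (a sketch based
on standard VNX for File syntax; the state and pool options are shown explicitly
here but are optional):

nas_server -name VDM-Saturn -type vdm -create server_2 -setstate loaded pool=FSaaS-Storage-Pool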
You can use the same command to create VDM-Mercury for Tenant B, as shown in
Figure 8.
Figure 8. Creating the VDM named VDM-Mercury
Create a user file system
The CLI command in Figure 9 shows how to create a file system named
Saturn_File_System, with 200 GB of storage capacity, from the storage pool
FSaaS-Storage-Pool.
Figure 9. Creating the Saturn file system
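The corresponding command is roughly of the following form (a sketch; option names
follow standard VNX for File syntax):

nas_fs -name Saturn_File_System -create size=200G pool=FSaaS-Storage-Pool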
Create a mount point
The CLI command in Figure 10 shows how to create the mount point /SaturnFileSystem
for Saturn_File_System on VDM-Saturn.
Figure 10. Mount point setup for VDM-Saturn
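A mount point of this kind is typically created with server_mountpoint, along the
following lines (a sketch; the VDM name is used as the server argument):

server_mountpoint VDM-Saturn -create /SaturnFileSystem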
Check VDM status
To validate the VDM-Saturn properties that you configured, you can run the command
as shown in Figure 11.
Figure 11. Validating the VDM-Saturn setup
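The VDM properties can be listed with a command of roughly this form (a sketch based
on standard VNX for File syntax):

nas_server -info -vdm VDM-Saturn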
Create the VDM network interface
The network interface Saturn-if is created on device trunk1 with the following
parameters, as shown in Figure 12:
• IP address: 10.110.46.74
• Network mask: 255.255.255.0
• IP broadcast address: 10.110.46.255
Figure 12. VDM network interface setup
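A command of roughly the following form creates the interface on the physical Data
Mover. The optional second command assigns the 802.1Q VLAN ID, where <VLAN-A-ID> is
a placeholder rather than a value from the test environment (a sketch, not the exact
commands used in the validated setup):

server_ifconfig server_2 -create -Device trunk1 -name Saturn-if -protocol IP 10.110.46.74 255.255.255.0 10.110.46.255
server_ifconfig server_2 Saturn-if vlan=<VLAN-A-ID>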
To achieve maximum security and domain/tenant separation, each VDM must have its own
dedicated VDM network interface; a network interface cannot be shared between
different VDMs.
Attach a VDM interface to the VDM
The CLI command, in Figure 13, shows how to attach the network interface Saturn-if
to VDM-Saturn.
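The attachment is performed with a command roughly like the following (a sketch
based on standard VNX for File syntax):

nas_server -vdm VDM-Saturn -attach Saturn-if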
Figure 13. Attaching the VDM interface

File system configuration
You can use the CLI for file system configuration by mounting the file system to a VDM
and exporting it to server hosts.
Mount the file system to a VDM
You can mount the Saturn_File_System on /SaturnFileSystem on the VNX NFS server,
as shown in Figure 14.
Figure 14. Mounting the file system to the VDM
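The mount takes roughly the following form (a sketch):

server_mount VDM-Saturn Saturn_File_System /SaturnFileSystem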
Export the file system to server hosts
In the example in Figure 15, we exported the Saturn_File_System, using the NFS
protocol, to a VMware ESXi 5.1 host with the IP address 10.110.46.73.
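An export of this kind takes roughly the following form. The root= and access=
options shown are an assumption about how the ESXi host was granted access, not the
exact options used in the test:

server_export VDM-Saturn -Protocol nfs -option root=10.110.46.73,access=10.110.46.73 /SaturnFileSystem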
Figure 15. Exporting the Saturn file system to an ESXi host
Note: A state change of a VDM from loaded to mounted, temp-unloaded, or
perm-unloaded shuts down the NFS endpoints in the VDM, making the file systems
inaccessible to the clients through the VDM.
VDM configuration summary
Table 4 summarizes the process of VDM creation and exporting the file systems to
vSphere ESXi 5.1 hosts.
Table 4. VDM tenant configuration

Parameter              Tenant A                        Tenant B
VDM name               VDM-Saturn                      VDM-Mercury
Storage pool           FSaaS-Storage-Pool              FSaaS-Storage-Pool
User file system       Saturn_File_System              Mercury_File_System
Mount point on
VNX NFS server         /SaturnFileSystem               /MercuryFileSystem
VDM interface          Saturn-if, IP address           Mercury-if, IP address
                       10.110.46.74                    10.110.47.74
VDM network            VLAN-A                          VLAN-B
File export host       Host A, IP address              Host B, IP address
                       10.110.46.73                    10.110.47.73
By accessing different VLANs and networks, both Tenant A and Tenant B have their
own VDM interfaces and host networks. The user file systems for Tenant-A and
Tenant-B can be created from either the same storage pool or different storage pools,
depending on tenant service requirements.
Scripted deployment
For large-scale deployments, you should consider using scripting tools to speed up
the process of VDM creation and its associated file system mount and export
procedures.
You can use Plink to access the VNX Control Station via SSH. Plink can be downloaded
from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html.
Figure 16 shows an example of running Plink from a Windows 7 command console to
create VDM-Mars from a script.
Figure 16. Running Plink
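The invocation is broadly of the following form; the Control Station address,
password handling, and script file name here are placeholders rather than values
from the test environment:

plink -ssh nasadmin@<control-station-IP> -pw <password> -m create_vdm_mars.txt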
A sample script file to create VDM-Mars for Tenant M and export its associated user
file system is shown in Figure 17.
Tenant-M has the same profile attributes as Tenant-A and Tenant-B, as listed in Table
4.
Figure 17. Example Plink script
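A script of this kind simply chains the commands from the preceding sections. The
following sketch uses placeholder values for Tenant M; the file system size, device
name, IP addressing, and client host address are assumptions for illustration only:

# Create the VDM and its user file system
nas_server -name VDM-Mars -type vdm -create server_2 -setstate loaded pool=FSaaS-Storage-Pool
nas_fs -name Mars_File_System -create size=200G pool=FSaaS-Storage-Pool
# Create the tenant network interface and attach it to the VDM
server_ifconfig server_2 -create -Device trunk1 -name Mars-if -protocol IP <tenant-M-IP> <netmask> <broadcast>
nas_server -vdm VDM-Mars -attach Mars-if
# Mount the file system on the VDM and export it over NFS
server_mountpoint VDM-Mars -create /MarsFileSystem
server_mount VDM-Mars Mars_File_System /MarsFileSystem
server_export VDM-Mars -Protocol nfs -option access=<tenant-M-host-IP> /MarsFileSystem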
Cloud platform-attached file systems
VMware ESXi 5.1 NFS data stores
On VMware ESXi hosts, you can create data stores using an NFS file system exported
from the VNX, as shown in Figure 18.
Figure 18. NFS data store on ESXi hosts
You must specify the NFS server which is running on the specific VDM and the shared
folder, as shown in Figure 19.
Figure 19. Selecting the server, folder, and data store
As shown in Figure 20, NFS-Datastore-Saturn is created from the NFS server
10.110.46.74; the shared folder is /SaturnFileSystem.
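The same data store can also be added from the ESXi command line rather than the
vSphere Client, using the standard esxcli NFS namespace in ESXi 5.x; for example:

esxcli storage nfs add --host=10.110.46.74 --share=/SaturnFileSystem --volume-name=NFS-Datastore-Saturn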
Figure 20. NFS data store details

Test procedures
The tests documented in this white paper are as follows:
1. Creating the data stores on the NFS file systems exported by the VDMs.
2. Installing and configuring eight CentOS 6.3 virtual machines on the NFS data
   stores.
3. Running I/O workloads on all eight CentOS virtual machines with 128 threads,
   to simulate 128 VDMs, against the NFS data stores using IOzone.
4. Failing over the active physical Data Mover, with its VDMs configured, to the
   standby Data Mover.
5. Verifying that the benchmark tests ran with no disruption during the physical
   Data Mover failover.
6. Monitoring the physical Data Mover CPU and memory utilization during the I/O
   workload using VNX Performance Monitor.
Use IOzone to generate I/O
The CentOS 6.3 virtual machines generated I/O using open source IOzone. IOzone is
a file system workload generation tool. The workload generates and measures a
variety of file operations. IOzone is useful for performing broad file system analysis of
a vendor’s computer platform. The workload tests file I/O for the following
operations:
Read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random
read, pread, mmap, aio_read, aio_write
IOzone is designed to create temporary test files, from 64 KB to 512 MB in size, for
testing in automatic mode. However, the file size and I/O operation can be specified
depending on the test. In our test, we used the following I/O parameters:
• Read – Reading a file that already exists in the file system.
• Write – Writing a new file to the file system.
• Re-read – After reading a file, the file is read again.
• Re-write – Writing to an existing file.
We installed the latest IOzone build on the virtual machines and ran the following
commands from the server console:
# wget http://www.iozone.org/src/current/iozone3_414.tar
# tar xvf iozone3_414.tar
# cd iozone3_414/src/current
# make
# make linux
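The exact IOzone invocation used in the test is not listed here. A run of roughly the
following shape matches the parameters described (write, re-write, read, and re-read
against 1024 KB files); the record size and the 16 threads per virtual machine
(8 x 16 = 128 threads in total) are assumptions for illustration:

./iozone -i 0 -i 1 -s 1024k -r 64k -t 16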
Figure 21 shows a test run where all I/O reads and writes were set to a file size of
1024 KB.
Figure 21. Test run for 1024 KB file size
Note: In this test, IOzone was used to run read and write I/Os on the CentOS 6.3
virtual machines to validate the VDM functionality. We did not undertake a
full-scale performance test to evaluate how a physical Data Mover performs with
multiple file systems and 128 VDMs configured while running intensive I/O.

Test results
The VDM functionality validation test results are summarized in Table 5.
Table 5. VDM functionality validation results

Action                                                    Validation results
Create NFS data stores                                    Yes
Install virtual machines on NFS data stores               Yes
Power up/shut down virtual machines successfully          Yes
Run I/Os from virtual machines against NFS data stores    Yes
Physical Data Mover high availability
By running all eight CentOS 6.3 virtual machines with 128 threads of read, write,
re-read, and re-write I/O operations, we were able to produce approximately the
equivalent overhead of a physical Data Mover with 128 VDMs configured and I/O
running on each VDM.
We executed the following command on the VNX Control Station to fail over the active
Data Mover to the standby Data Mover:
# server_standby server_2 -a mover
The failover process completed in less than 30 seconds and all I/O operations were
restored without disruption. For most applications running on NFS-based storage and
data stores, the IOzone throughputs observed for write, re-write, read, and re-read
were well within acceptable performance levels.
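After the failed Data Mover is repaired, service can typically be returned to it
with the corresponding restore operation; this step was not part of the documented
test and is shown only as a sketch:

# server_standby server_2 -restore mover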
VNX Data Mover load
While running the 128 threads of I/O workload, the VNX Data Mover load was
monitored. As shown in Figure 22, the Data Mover CPU utilization was approximately
55 percent while free memory remained at approximately 50 percent, well within the
system design specification range.
Figure 22. VNX Data Mover Performance Monitor
Based on the test results, clients should not expect any significant performance
impact: VDMs perform in the same way as the physical Data Mover, and a user's
ability to access data from a VDM is no different from accessing data that resides
directly on the physical Data Mover, provided the maximum number of supported VDMs
per physical Data Mover is not exceeded.
Conclusion
Summary
EMC VNX VDMs provide a feasible way to support file system services for multiple
tenants on one or more physical EMC VNX storage systems for private and public
cloud environments.
VDMs can be configured through the VNX CLI and managed within the VNX Unisphere GUI.
By adopting best practices for security and network planning, an implementation of
VNX VDMs can enhance file system functionality and lay the foundation for
multi-tenant File System-as-a-Service offerings.
This solution enables service providers that are offering cloud storage services to
host up to 128 VDMs on one physical EMC VNX storage platform while maintaining
the required separation between tenants.
Cloud storage providers who want to offer a choice of multi-tenant NAS file system
services from multiple storage vendors can now offer EMC VNX file systems to
multiple tenants. This allows investments in existing VNX storage capacity to be
monetized further, helping to accelerate their return on investment and reduce their
storage TCO.
References
For specific information related to the features and functionality described in this
document, refer to the following:
• VNX Glossary
• EMC VNX Command Line Interface Reference for File
• Managing Volumes and File Systems on VNX Manually
• Managing Volumes and File Systems with VNX Automatic Volume Management
• Problem Resolution Roadmap for VNX
• VNX for File Man Pages
• EMC Unisphere online help
EMC VNX documentation can be found on EMC Online Support.