Data ONTAP 7-Mode Fundamentals

NetApp storage environment
NetApp offers different series of storage systems to suit a broad range of business needs,
including the FAS6200 series, FAS3200 series, and FAS2200 series. Bob says JAF also uses
NetApp V-Series storage solutions. Because you have not heard of the V-Series before, you
ask him to explain what it is. V-Series open-storage controllers let you manage disk arrays
from EMC®, IBM®, Hewlett-Packard® Company, Hitachi® Data Systems®, and other storage
vendors as easily as you manage NetApp storage.
FAS6200 series – high-end data centre storage
 Rely on the versatility, scalability, and reliability of the FAS6200 series for your largest enterprise applications and your most demanding technical workloads. Achieve lower acquisition and operation costs compared to traditional, large-scale storage.
FAS3200 series – midrange data centre storage
 Provides flexibility, performance, and availability, as well as the responsiveness to growth that a high-bandwidth 64-bit architecture provides.
FAS2200 series – low-end data centre storage
 Allows cost-effective management of growing, complex data in dispersed departments or remote locations, and adds functionality and ease of use.
V6200 series – high-end data centre storage
 Can handle large enterprise and technical applications in multiprotocol, multivendor storage environments.
V3200 series – midrange data centre storage
 Allows for advanced data-management and storage-efficiency capabilities in multiprotocol, multivendor environments.
NetApp storage systems support three different types of disks:
o Solid-state drives (SSD)
o Serial attached SCSI (SAS)
o Serial advanced technology attachment (SATA)
Flash Cache
- Flash Cache is used to optimize the performance of random-read-intensive workloads, such as file services, messaging, virtual infrastructure, and OLTP databases, without using additional high-performance disk drives. This intelligent read cache speeds access to data, reducing latency by a factor of 10 or more compared to disk drives. Faster response times can translate into higher throughput for random I/O workloads. Flash Cache can often be combined with SATA drives in place of faster but more expensive SAS drives.
LAN
o A LAN is a computer network covering a small physical area, like an office or a group of company buildings. LANs have higher data transfer rates and cover a smaller geographic area than WANs.
Incremental backup
o An incremental backup is a backup of just the changed data in a system. First a baseline backup of all data is created; then additional incremental backups are created over time. Incremental backups save space by storing each piece of data only once.
Primary storage system
o The primary storage system is the system whose data is backed up
Secondary storage system
o The secondary storage system is the system to which data is backed up
Data recovery site
o A data recovery site is a location that stores data from which data recovery can take
place.
Storage efficiency
o NetApp storage systems use a variety of technologies to provide efficient data
storage.
Snapshot technology
o Snapshot technology provides online backups and immediate access to previous
versions of data.
Thin provisioning
o The technical foundation for thin provisioning with NetApp technology is provided
by FlexVol® volumes.
FlexVol® volumes
o FlexVol flexible volumes are volumes that you can expand or shrink. By combining
FlexVol volumes with Data ONTAP thin-provisioning functionality, you can flexibly
oversubscribe disk space.
FlexClone® technology
o FlexClone technology relies heavily on the same principle as Snapshot technology.
FlexClone technology enables users to quickly create dataset clones that consume
almost no additional disk space, even if a user creates multiple FlexClone copies.
Deduplication
o Deduplication is the process of improving storage space efficiency by eliminating
redundant data objects and referencing just the original object.
RAID-DP® technology
o RAID-DP stands for redundant array of independent disks, double-parity. RAID-DP
technology protects against disk failure by computing parity information based on
the contents of all of the disks in an array. RAID-DP stores all parity information on
two disks.
MultiStore software
o MultiStore software increases the value of shared infrastructure by enabling secure partitioning of storage and network resources within the Data ONTAP 7-Mode operating system.
Compression
o With NetApp software, data on a disk is compressed and decompressed using standard compression algorithms.
NetApp storage encryption
o Storage encryption is an optional feature that you can enable for additional data protection. It is available on certain supported storage controllers and disk shelves that contain disks with built-in encryption functionality.
Non-disruptive upgrading to 64-bit aggregates
o Starting with the Data ONTAP 8.1 7-Mode operating system, you can upgrade a 32-bit aggregate to a 64-bit aggregate non-disruptively, so that the aggregate can increase its storage limit to 160 TB.
IPv6
o Data ONTAP 8.1 7-Mode supports IPv6. Administrators can now easily plug the Data ONTAP 8.1 7-Mode operating system into existing 128-bit addressing in a network.
Flash pools
o In the Data ONTAP 8.1.1 7-Mode operating system, flash pools introduce a high-speed solid-state-drive tier into a standard aggregate. The solid-state-drive tier acts as cache for data and metadata within the aggregate.
BranchCache
o The BranchCache feature is introduced with Server Message Block 2.1 in the Windows 7 and Windows Server 2008 R2 operating systems. It can increase the network responsiveness of centralized applications that are accessed from remote offices, which gives users in remote offices an experience like that of working on a LAN. The BranchCache feature also reduces WAN use.
Data ONTAP, the operating system that NetApp storage systems use, is comprised of modules. These
modules pass data to and from the disks for writes and reads. Although FreeBSD is familiar to Data
ONTAP GX users, it is a departure from the monolithic Data ONTAP 7G, which was an operating
system and tightly coupled file system. The use of FreeBSD as the operating system for Data ONTAP
7-Mode provides some significant benefits. In addition to benefits to the Data ONTAP operating
system from third-party work taking place within the FreeBSD community, the clean separation of
the operating system from the file system allows for focused innovation within the file system itself.
This diagram shows the software stack that comprises Data ONTAP 7-Mode. Although 7-Mode runs
in FreeBSD with a new data component called the D-blade (for data blade) and a new management
component called the M-host, it acts very much like Data ONTAP 7G. Notice specifically that the NAS
and SAN protocols are handled by the D-blade. In addition, there continues to be one logical interface
for both client and administrative access to the node. The D-blade manages the storage attached to
a node and provides the WAFL file system that is used to map data containers and their associated
metadata and attributes to disk blocks. In 7-Mode, the D-blade services NAS and SAN protocol
requests. It also provides a UI that is compatible with the Data ONTAP 7G operating system.
FreeBSD
o Data ONTAP sits on top of FreeBSD
SMF
o The simple management framework (SMF) layer interacts with the Data ONTAP operating system to collect system information that is sent to SNMP UIs such as NetApp System Manager, as well as to APIs
M-Host
o The management host is a network interface that is used for management functions
such as SNMP or for access to the console
Network
o The network interface module receives data from the client and delivers it to the
physical RAM
Protocols
o The protocol module determines the protocol that is used to transfer data, such as CIFS, NFS, or iSCSI, then strips the protocol information from the data and sends the raw data to the WAFL file system
WAFL
o The WAFL file system module receives the raw data and places a copy into nonvolatile RAM (NVRAM)
RAID
o The WAFL file system sends data to RAID, which calculates parity to protect data
Storage
o RAID sends the data and parity information to the storage module. The storage
module physically performs the write to disk.
Step 1: When a client sends a write request to Data ONTAP, the request goes to the network module. The network module sends the data to physical RAM.
Step 2: The protocol module separates the data and the protocol according to the protocol's rules. Because JAF has a Microsoft® Windows® environment, the protocol is CIFS.
Step 3: The WAFL file system receives the raw data and saves a copy of the data in NVRAM. Then the WAFL file system sends an acknowledgement back to the client that it received the data. The primary job of the WAFL file system is to determine how the data will be written when the write is performed. Meanwhile, copies of all write requests are stored in NVRAM as a backup that is used for emergencies. Because NVRAM is backed by a battery, the write request will survive even if power is lost.
Step 4: The WAFL file system continues to take incoming data and decides how to write data on disk until a consistency point, or CP, occurs. This typically happens either every 10 seconds or when NVRAM is half full. During a CP, the half of NVRAM that contains a backup of write requests that are in the process of being written is locked. The other half of NVRAM is used for incoming requests. When the CP is completed, the locked portion of NVRAM is flushed and made ready for use.
Step 5: When the CP occurs, the WAFL file system passes data to the RAID module. The RAID module calculates parity and adds parity information to the data that will be sent to disk. RAID then notifies the storage module that the data is ready.
Step 6: The storage module physically performs the write request.
The optional Performance Acceleration Module allows customers to extend memory for system reads. By extending system RAM, it allows data to be readily accessed in memory for read requests.
Features of OnCommand System Manager
OnCommand System Manager includes the following features:
Seamless Windows and Linux integration
o System Manager integrates seamlessly into your management environment by providing support for Windows and Linux operating systems
Discovery and setup
o System Manager lets you quickly discover a storage system or a high-availability configuration on a network subnet. You can easily set up a new system and configure it for storage
iSCSI and FC
o System Manager manages iSCSI and FC protocol services for exporting data to host systems
SAN provisioning
o System Manager provides workflows for provisioning LUNs for hosts that use SAN protocols such as iSCSI and FC
NAS provisioning
o System Manager provides workflows for provisioning with NAS protocols such as CIFS and NFS, as well as management of shares and exports
Monitoring and management of storage systems
o System Manager provides ongoing management of your storage system or high-availability configuration, with real-time monitoring and notification of key health-related events for a NetApp system
High-availability configuration
o System Manager provides setup for HA configurations of NetApp systems
Protection
o System Manager allows you to protect data by using NetApp SnapMirror technology.
After installing OnCommand System Manager, administrators can either discover or manually assign
storage systems to manage. When you know the host name or IP address of a storage system, you
can use the Add a System dialog box to add a storage system or an active/active pair to the list of
managed storage systems. The Discover Storage Systems dialog box lists all of the storage systems
that have been discovered by System Manager. You can use this dialog box to discover storage
systems or HA pairs on a network subnet and add the systems to the list of managed systems. When
you add one of the systems in an HA pair, the partner system is automatically added to the list of
managed systems.
From the Help table of contents and index, you can find information about features of OnCommand
System Manager and how to use them.
OnCommand Unified Manager includes three major products: Operations Manager, Protection
Manager, and Provisioning Manager. Operations Manager delivers comprehensive monitoring and
management for NetApp enterprise storage. From a central point of control, Operations Manager
provides alerts, reports, and configuration tools to help administrators keep their storage
infrastructure in alignment with business requirements. Protection Manager enables storage
administrators to simplify management and increase the success of backup and recovery operations
by providing easy-to-use policies and global monitoring of data protection operations. Finally,
Provisioning Manager automates policy-based provisioning for NetApp NAS and SAN environments.
This solution automates the manual and repetitive provisioning process, increasing the productivity
of administrators and improving the availability of data.
Module two: Hardware basics
Data ONTAP 7-Mode operating system
o Use Flash Cache 2 to optimize the performance of random-read-intensive workloads, such as file services, virtual infrastructure, and databases, without adding more high-performance disk drives. These intelligent read caches can reduce latency by a factor of 10 or more compared to disk drives. Faster response times can translate into higher throughput for random I/O workloads. Flash Cache 2 family modules give you performance that is comparable to that of solid-state disks (SSDs). It's all automatic, because every volume and LUN behind the storage controller is subject to caching. You can simulate the results of adding cache to your current storage system using Predictive Cache Statistics, a feature of the NetApp Data ONTAP 7-Mode operating system that generates information indicating whether caching modules will help and how much additional cache is optimal for your workload.
Newer models of NetApp storage systems include a new interface named e0M. This interface is
dedicated to Data ONTAP management activities. This interface enables you to separate
management traffic from data traffic on your storage system for security and throughput benefits.
On a storage system that includes the e0M interface, the Ethernet port that is indicated by a wrench
icon on the rear of the chassis connects to an internal Ethernet switch. The internal Ethernet switch
then provides connectivity to the e0M interface and the RLM. When you set up a system that
includes the e0M interface, the Data ONTAP setup script informs you that for environments that use
dedicated LANs to isolate management traffic from data traffic, e0M is the preferred interface for
the management LAN. The setup script then prompts you to configure e0M. The e0M configuration
is separate from the RLM configuration. Both configurations require unique IP and MAC addresses to
allow the Ethernet switch to direct traffic to either the e0M interface or the RLM. Although the e0M
interface and the RLM both connect to the internal Ethernet switch that connects to the Ethernet
port, which is indicated by a wrench icon on the rear of the chassis, the e0M interface and the RLM
serve different functions. The e0M interface serves as the dedicated interface for environments that
have dedicated LANs for management traffic. You use the e0M interface for Data ONTAP
administrative tasks. The RLM, on the other hand, can be used not only for managing Data ONTAP,
but also to provide remote management capabilities for the storage system, including remote access
to the console, monitoring, troubleshooting, logging, and alerting features. Also, the RLM stays
operational regardless of the operating state of the storage system and regardless of whether Data
ONTAP is running or not. Once e0M is configured, you can open a Telnet, RSH, or SSH session on a
client. Note: Secure shell (SSH) administration is preferred and is automatically enabled in Data
ONTAP 7-Mode. Telnet and Remote Shell (RSH) are disabled by default and are not recommended by
NetApp® as methods for administering storage systems.
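As a minimal sketch of how these defaults might be confirmed at the 7-Mode console (host name illustrative):
    system1> secureadmin setup ssh      # interactive SSH key and server setup
    system1> options telnet.enable off  # leave Telnet disabled
    system1> options rsh.enable off     # leave RSH disabled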
Data ONTAP connects with networks through physical interfaces (or links). The most common
interface is an Ethernet port, such as e0a, e0b, e0c, and e0d. Data ONTAP has supported IEEE
802.3ad link aggregation for some time now. This standard allows multiple network interfaces to be
combined into one interface group. In this example, e0a, e0b, e0c, and e0d have been aggregated or
combined into one interface group called grp1. After being created, this group is indistinguishable
from a physical network interface.
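A sketch of how such a group might be created at the console (port and group names illustrative; older 7G releases use the vif command instead of ifgrp):
    # Combine four Ethernet ports into one dynamic multimode (LACP) group,
    # load-balancing on IP addresses
    system1> ifgrp create lacp grp1 -b ip e0a e0b e0c e0d
    # The group is then configured like any physical interface
    system1> ifconfig grp1 192.168.1.10 netmask 255.255.255.0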
A disk is the basic unit of storage for storage systems running Data ONTAP. Understanding how Data
ONTAP uses and classifies disks will help you manage your storage more effectively. Data ONTAP
supports five disk types: Fibre Channel, or FC; Advanced Technology Attachment, or ATA; Serial Advanced Technology Attachment, or SATA; Serial-Attached SCSI, or SAS; and Solid-State Drive, or SSD. For a specific configuration, the disk types supported depend on the storage system
model, the disk shelf type, and the I/O modules installed in the system. Administrators can always
look up specific information when they need it on the NOW, or "NetApp on the Web," site. Data
ONTAP supports two disk connection architectures: Fibre Channel-Arbitrated Loop, or FC-AL, and
Serial-Attached SCSI, or SAS. FC and ATA disks use the FC-AL disk connection architecture. SAS, SATA,
and SSD use SAS disk connection architecture.
Data ONTAP classifies disks as one of four types for RAID: data, hot spare, parity, or double-parity.
The RAID disk type is determined by how RAID is using a disk. Later you will use this information to
make decisions about increasing the size of an aggregate or deciding how many disks to use when
creating an aggregate.
Data disk: A data disk is part of a RAID group and stores data on behalf of the client.
Hot spare disk: A hot spare disk does not hold usable data, but is available to be added to a RAID
group in an aggregate. Any functioning disk that is not assigned to an aggregate but is assigned to a
system, functions as a hot spare disk.
Parity disk: A parity disk stores data-reconstruction information within a RAID group.
Double-parity disk: A double-parity disk stores double-parity information within RAID groups if NetApp RAID double-parity (RAID-DP®) software is enabled.
Disk ownership determines which node owns a disk and which pool a disk is associated with.
Understanding disk ownership enables you to maximize storage redundancy and manage your hot
spares effectively. In a stand-alone storage system that does not use SyncMirror®, disk ownership is
simple: each disk is assigned to the single controller and is in Pool0. However, the following two configurations are more complicated than a stand-alone system:
 High-availability configurations, because two controllers are involved
 SyncMirror configurations, because two pools are involved
Disk ownership is software-based. Software-based disk ownership is stored on the disk rather than determined by the topology of the storage system's physical connections, as it might have been in previous versions of Data ONTAP. Software-based disk ownership gives you greater flexibility and control over disk use.
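A brief sketch of software-based ownership commands at the 7-Mode console (the disk name is illustrative):
    system1> disk show -n            # list disks that are not yet owned
    system1> disk assign 0a.16 -p 0  # assign a disk to this controller, in Pool0
    system1> disk show -o system1    # verify which disks this system owns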
NetApp System Manager provides ongoing management of your storage system, including
information about your hardware. The disk list displays the name and the container for each disk.
Module three: Creating aggregates and volumes
RAID groups
All Data ONTAP disks are organized into RAID groups. RAID groups provide parity protection
against data loss. Each RAID group consists of data disks and a parity disk (RAID 4), plus a double-parity disk (RAID-DP). A double-parity RAID group must have at least three disks: one or more data disks, a
parity disk, and a double-parity disk. You can add disks to your RAID groups to increase usable disk
space; however, you cannot remove disks to reduce disk space.
Double-parity disk
o A double-parity disk is used for RAID-DP technology. Together with the parity disk, a double-parity disk allows RAID-DP to prevent data loss even if two disks fail within a RAID group
Parity disk
o A parity disk is a disk that is used for RAID 4 and RAID-DP. The parity disk protects against a single disk failure
Data disks
o A data disk is a disk that stores data on behalf of clients
RAID groups: disk failure
If a data disk failure occurs in a RAID group, Data ONTAP will replace the failed disk with a spare disk.
Data ONTAP will automatically use parity data to reconstruct the failed disk's data on the
replacement disk. Meanwhile, Data ONTAP continues to serve data to clients by reconstructing data
from parity while the new data disk is being rebuilt. If a parity or double-parity disk
failure occurs in a RAID group, Data ONTAP will replace the failed disk with a spare disk and
reconstruct parity for the new disk.
Understanding aggregates
An aggregate is a logical container that encompasses the physical aspects of storage, such as
disks and RAID groups. Aggregates provide the storage for volumes, and volumes provide support
for the differing security, backup, performance, and data sharing needs of your users. Each
aggregate has its own RAID configuration, RAID groups, and set of assigned disks. When you create
an aggregate, Data ONTAP assigns the data disks and parity disks to RAID groups used to create the
aggregate. You can increase the size of your aggregate by either adding disks to an existing RAID
group or by adding new RAID groups; however, you cannot remove a disk to reduce storage space.
You can use the aggregate to hold one or more FlexVols. FlexVols are the logical file systems that
share the physical storage resources, RAID configuration, and plex structure of that containing
aggregate.
Aggregates
NetApp System Manager provides a wizard-like interface for creating, viewing, and editing
aggregates. Before you create a new aggregate for JAF Corporation's sales department, familiarize yourself with these specific parameters:
Name: Your aggregate name must begin with a letter or an underscore (_) and should not contain
more than 255 characters.
RAID Type: Using double parity provides disk protection against single-disk or double-disk failure
within a RAID group.
Double parity is the default parameter.
Disk Selection Method: This parameter allows you to select manual or automatic disk selection. It is recommended that you choose "Allow system to select disks automatically based on the required aggregate size".
Disk Type: The disk type will be Fibre Channel-Arbitrated loop (FC-AL) or Serial-Attached SCSI (SAS).
Disk Size: The number of disks will be calculated for you based on the usable size you indicate.
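The same aggregate can also be created from the command line; a minimal sketch, with an illustrative name and disk count:
    # Create a RAID-DP aggregate and let Data ONTAP pick 14 disks automatically
    system1> aggr create aggr_sales -t raid_dp 14
    # Verify the RAID group layout and remaining hot spares
    system1> aggr status -r aggr_sales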
64-bit aggregates
Prior to Data ONTAP 8.0, aggregates (and consequently the volumes within the aggregates) were based upon a 32-bit architecture. This limited the total size of the aggregate to 16 TB. The issue with the 16 TB limitation of a 32-bit aggregate is that aggregate management becomes more complex and less efficient. For example, a FAS6280 system with 500 TB of storage requires a minimum of 32 aggregates. This greatly increases the complexity of managing large storage arrays.
Starting with Data ONTAP 8.0, you can create aggregates that are either 32-bit or 64-bit. 32-bit and
64-bit aggregates can coexist on the same storage system. 64-bit aggregates support a maximum
size of up to 100 TB, depending on the storage system model. Bob explained that with the latest version of Data ONTAP 7-Mode (Data ONTAP 8.1 7-Mode), newly created aggregates are 64-bit by default. In 8.1 7-Mode, you have to explicitly specify the 32-bit option to create a 32-bit aggregate. Aggregate management is much easier with 64-bit aggregates, because you can create 500 TB of storage with just five 64-bit aggregates.
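A sketch of the corresponding commands (names and disk counts illustrative; in 8.1 7-Mode the 64-bit format is already the default):
    # Explicitly request a 64-bit aggregate with 24 disks
    system1> aggr create aggr1 -B 64 24
    # A 32-bit aggregate now has to be requested explicitly
    system1> aggr create aggr_legacy -B 32 16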
Flash pools
Traditional aggregates are built with matching disk types. Aggregates are constructed from RAID groups of SATA, FC, SAS, or solid-state drives. Hard-disk types cannot be mixed within an aggregate. Flash pools introduce a high-speed solid-state-drive tier into a standard aggregate. The solid-state-drive tier acts as cache for data and metadata within the aggregate. The benefits of flash pools include:
 Improved cost and performance with fewer spindles, less rack space, and lower power and cooling requirements
 Highly available storage with a simple administrative model
 Improved cost-to-performance and cost-to-capacity ratios for a combination of solid-state drives and SATA, compared with pure FC or SAS solutions
 Predictable and improved operation when running in degraded mode, during controller failures, and during high-availability takeover and giveback
 Automatic, dynamic, and policy-based placement of data on appropriate storage tiers at WAFL block granularity for data and metadata
Flash pools: what is cached?
Unlike the Flash Cache feature, which caches only read data, flash pools cache reads, writes, and
metadata. Read-cached copies of a block in the hard-disk tier of the aggregate can be stored in the
solid-state-drive tier to service read operations. Almost all the data from the active file system in a
read/write volume can be read-cached into the solid-state-drive region of the aggregate. Write-cached blocks are data blocks that are associated with FlexVol volumes and are written directly to the solid-state-drive region of the aggregate. There is only one copy of the block,
and that copy is in the solid-state-drive tier. The write-cached block in the solid-state drive has a
hard-disk block reserved for it. The write-cached block is eventually moved to the hard-disk block. In
addition to read and write blocks, all the metadata that is associated with the flash pool is stored in
the solid-state-drive tier of the aggregate.
How to create a flash pool
You can enable flash pools on new or existing aggregates by completing three steps. First, select a
new or existing aggregate. Next, turn on the hybrid_enabled option on the aggregate. Then add a
new solid-state-drive RAID group to the aggregate. Completing these steps converts the aggregate to
a flash pool and activates the storage tiers.
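A sketch of those three steps at the console (names and disk counts are illustrative, and option spelling can vary by release):
    # 1. Select a new or existing aggregate
    system1> aggr create aggr1 -t raid_dp 16
    # 2. Allow the aggregate to become a flash pool
    system1> aggr options aggr1 hybrid_enabled on
    # 3. Add a new solid-state-drive RAID group to the aggregate
    system1> aggr add aggr1 -T SSD 6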
Flash pools: compatibility
Flash pools are incompatible with 32-bit aggregates, traditional volumes, SnapLock software, and aggregates that were created using versions earlier than the Data ONTAP 7.2 operating system.
Flash pools: additional considerations
The capacity of the solid-state-drive tier is not reflected in the total aggregate size. For example, if
the original aggregate has a 10-terabyte capacity, and you add a solid-state-drive RAID group with a
1-terabyte capacity, the amount of capacity in the aggregate that can be provisioned is still 10
terabytes. Flash pools can coexist in the same cluster or on the same storage controller as the Flash
Cache feature, but blocks from flash pools are not included in Flash Cache caching. The Flash Cache
feature continues to serve all aggregates that are not flash pool aggregates on the controller. Not
only are flash pools compatible with takeover and giveback, but they also provide performance
acceleration during such events. Regardless of whether the source aggregate is a flash pool, a
volume can be moved to a flash pool aggregate. The volume will not be immediately cached after
the move, and performance might degrade slightly until the cache is repopulated. SnapMirror
destination volumes can reside in flash pools, but the SnapMirror destination will not be cached.
When the volume is promoted to read/write, the data from the new active file system will be
cached. Because solid-state-drive blocks can become trapped in Snapshot copies, the aggregate
Snapshot feature should be disabled or configured with automatic deletion so that solid-state-drive
blocks continue to be recycled. Flash pools are fully supported with SyncMirror and MetroCluster
software. Flash pools support read caching of uncompressed blocks in a compression-enabled
volume but do not yet support caching of blocks that are compressed in the hard-disk tier.
Compressed blocks are never write-cached. Flash pools are supported on V-Series systems with
NetApp storage but not with storage other than NetApp storage.
Volumes
Flexible volumes (FlexVol)
o A flexible volume is a volume that is loosely coupled to its containing aggregate. A FlexVol volume can share its containing aggregate with other volumes.
FlexClone volume
o FlexClone volumes are writable, point-in-time copies of a parent FlexVol volume. Often you can manage them as you would a regular FlexVol volume, but they also have some extra capabilities and restrictions. A clone and its parent volume share the same disk space for any common data. You can sever the connection between the parent and the clone; this is called splitting the FlexClone volume. When a FlexClone volume is created, any logical unit numbers, or LUNs, present in the parent volume are present in the FlexClone volume, but they are unmapped and offline. Splitting a FlexClone volume removes all restrictions on the parent volume and causes the FlexClone volume to use its own storage. (See the sketch after this list.)
Traditional volume
o Traditional volumes combine the physical layer of storage with the logical layer of the file system. This means they are tied to the aggregate. It also means you can only increase the size of a traditional volume by adding disks; it is not possible to shrink a traditional volume.
Cache volume
o A cache volume, also known as a FlexCache volume, allows you to keep copies of primarily read-only files on multiple storage systems to reduce the overhead caused by bandwidth usage and latency when relaying data over a long distance. To use this feature, a FlexCache license is required on both the origin system and the FlexCache storage system.
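As referenced in the FlexClone item above, a minimal command-line sketch of cloning and splitting (volume names illustrative; the FlexClone license is assumed):
    # Create a FlexClone volume backed by a parent volume; a backing
    # Snapshot copy is created automatically if one is not named
    system1> vol clone create clone_sales -b vol_sales
    # Later, split the clone so that it stops sharing blocks with its parent
    system1> vol clone split start clone_sales
    system1> vol clone split status clone_sales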
Advantages of flexible volumes.
A FlexVol volume, also called a flexible volume, is contained within an aggregate. An aggregate can contain multiple FlexVol volumes. Because a FlexVol volume is managed separately from the aggregate, you can create small FlexVol volumes of 20 MB or larger, and you can increase or decrease the size of a FlexVol volume in increments as small as 4 KB. Flexible volumes represent a significant administrative improvement over traditional volumes.
Root volume
Each storage system contains a root volume that contains special directories and configuration files
that help you administer your storage system. The root volume must have enough space to contain
system files, log files, and core files. You can edit the configuration, check logs, and check
permissions that reside in the root volume with NetApp System Manager or directly using the
command line. If a system problem occurs, these files are used to provide technical support. Do not
delete any directories from the /etc directory unless instructed to do so by technical support
personnel. Some of the configuration files in the /etc directory can be edited to change the behavior
of the storage system. Administrators can edit files from a client or using a graphical interface
application such as NetApp System Manager or Operations Manager
Root volume size
o The root volume must have enough space to contain system files, log files, and core
files.
Root volume default directories
o The root volume contains the /etc directory and the /home directory. The /etc directory contains configuration files that the storage system needs to operate. The /home directory is a default location you can use to store data.
FlexVol options
Each volume you create must have specific attributes defined. When creating a volume, you must
specify the following options.
The volume name should consist only of letters, digits, and underscores, and can be up to 255 characters. You must indicate the protocols used to access data stored in this volume. The containing aggregate is selected from a drop-down menu of all aggregates; an aggregate can contain multiple flexible volumes, so different flexible volumes can have the same containing aggregate. You will also enter the size of the volume; be sure to indicate whether the specified size is in K (kilobytes), M (megabytes), G (gigabytes), or T (terabytes). The Snapshot reserve specifies a set percentage of volume space for Snapshot copies. By default, the Snapshot reserve is 20%. Each flexible volume has a Space Guarantee attribute that controls how its storage is managed in relation to its containing aggregate:
Volume: Data ONTAP pre-allocates space in the aggregate for the volume. The pre-allocated space cannot be allocated to any other volume in that aggregate. Space management for a flexible volume with a space guarantee of "volume" is equivalent to a traditional volume, or to all volumes in versions of Data ONTAP earlier than 7.0.
File: Data ONTAP pre-allocates space in the volume so that any file in the volume with its space reservation enabled can be completely rewritten, even if its blocks are pinned for a Snapshot copy.
None: Data ONTAP reserves no extra space for the volume. Writes to LUNs or files contained by the volume may fail if the containing aggregate does not have enough available space to accommodate the write.
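A sketch of creating a thin-provisioned volume with these options from the command line (names and sizes illustrative):
    # Create a 100 GB flexible volume in aggr_sales with no space guarantee
    system1> vol create vol_sales -s none aggr_sales 100g
    # The Space Guarantee attribute can also be changed after creation
    system1> vol options vol_sales guarantee volume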
Qtrees
Qtrees enable you to partition your volumes into smaller segments that you can manage
individually. There are no restrictions on how much disk space can be used by a qtree or how many
files can exist in a qtree. In general, qtrees are similar to flexible volumes. However, they have the
following key differences: Snapshot copies can be enabled or disabled for individual volumes but not
for individual qtrees. Qtrees do not support space reservations or space guarantees.
Quotas
o You can limit the amount of data used by a particular project by placing all of that project's files into a qtree and applying a tree quota to the qtree.
Backups
o You can use qtrees to keep your backups more modular, to add flexibility to backup schedules, or to limit the size of each backup to one tape.
Security style
o If you have a project that needs to use NTFS-style security because the members of the project use Windows files and applications, you can group the data for that project in a qtree and set its security style to NTFS, without requiring that other projects also use the same security style.
CIFS oplocks settings
o If you have a project using a database that requires CIFS opportunistic locks (oplocks) to be off, you can set CIFS oplocks to off for that project's qtree, while allowing other projects to retain CIFS oplocks. (A command-line sketch follows this list.)
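The sketch promised above (paths illustrative): a qtree per project, with its own security style and oplocks setting:
    system1> qtree create /vol/vol_sales/proj1
    system1> qtree security /vol/vol_sales/proj1 ntfs   # NTFS security for this project only
    system1> qtree oplocks /vol/vol_sales/proj1 disable # oplocks off for this qtree only
    system1> qtree status vol_sales                     # confirm the settings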
Calculating usable disk space
https://learningcenter.netapp.com/content/public/production/wbt/STRSW-WBTD817FND_r1/resources/pdf/faq_DOT_disk_capcity_dm.pdf
When you are creating aggregates and volumes, it is important to know that the disk size you are working with is not the actual amount of space you have for data. The amount of disk space available for data can be calculated as follows: Data ONTAP first "right-sizes" every disk and then reserves 10% of disk space for its own use. Aggregates at most fill the remaining 90% of disk space. Each aggregate allocates five percent for the Snapshot copy reserve. From the remaining 95 percent of the usable aggregate space, flexible volumes can be created. When you create a flexible volume, you will have a maximum of 80 percent of the volume space available for data, with the remaining 20 percent of the volume space set aside for the Snapshot copy reserve. Note: In earlier versions of Data ONTAP, the default aggregate Snapshot reserve value was 5%. With the new Data ONTAP 8.1 7-Mode, the default aggregate Snapshot reserve value is set to 0%. This provides 100% usable aggregate space.
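As a rough worked example under the pre-8.1 defaults described above, assuming 1,000 GB of right-sized disk space:
    1,000 GB right-sized disk space
      - 10% reserved by Data ONTAP     =  900 GB maximum aggregate size
      - 5% aggregate Snapshot reserve  =  855 GB usable aggregate space
      - 20% volume Snapshot reserve    =  684 GB available for data
With the 0% aggregate Snapshot reserve default in Data ONTAP 8.1 7-Mode, the middle step is skipped and volumes are carved from the full 900 GB.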
Module four
Creating a snapshot copy
NetApp Snapshot technology is an integral part of Data ONTAP. Snapshot copies are "frozen," read-only views of the volume, and provide easy access to old versions of files, directories, and logical unit numbers, or LUNs. Snapshot copies are your first line of defense for backing up and restoring data. Snapshot technology provides several benefits, including:
 Up to 255 Snapshot copies per volume
 Read-only, static, and incorruptible backups
 Consistent backups without disrupting most applications
 Minimal storage space consumption
 Minimal effect on performance
 Configurable scheduling
A Snapshot copy is typically created instantly, regardless of volume size or level of activity occurring on the storage system. Creation of a Snapshot copy normally includes these steps:
1. My file is made up of disk blocks A, B, and C. Blocks A, B, and C are part of the active file system (AFS). The active file system is captured in a Snapshot copy.
2. The Snapshot copy points to blocks A, B, and C. Blocks A, B, and C are now marked read-only. At this time the Snapshot copy does not take any extra space on disk.
3. Later, the C block of the file is modified; the changes are written to a new disk block C'. My file is now made up of disk blocks A, B, and C'. Notice that the Snapshot copy still points to disk blocks A, B, and the original C. The Snapshot copy is now one block in size.
NetApp products based on Snapshot technology
o SnapRestore
 SnapRestore allows an enterprise to recover almost instantly from unplanned interruptions.
o SnapMirror
 SnapMirror is a disaster recovery solution that mirrors data to a remote location.
o SnapVault
 SnapVault provides extended and centralized disk-based backup for storage systems by backing up a Snapshot copy to another location.
Creating snapshot copies
There are two ways to create a Snapshot copy. Data ONTAP creates Snapshot copies automatically
using either a default schedule or a modified schedule to meet your needs. You can also manually
add a Snapshot copy using the command-line interface or a graphical user interface such as NetApp
System Manager or Protection Manager. When you click the camera icon, a drop-down menu
appears. Click Create to manually create a Snapshot copy. You will then see a pop-up to enter the
Snapshot copy name. If you select Configure, you can set up a schedule for automatic Snapshot
copies.
Configure snapshot schedule
When you select Configure from NetApp System Manager > Storage > Volumes and then the
Snapshot icon, a pop-up window appears.
Snapshot Reserve (%): This is the percentage of a volume that will be reserved for Snapshot copies.
Make Snapshot Directory (.snapshot) visible: This specifies whether the .snapshot directory is visible on this volume at the client mountpoints.
Enable Scheduled Snapshots: This enables scheduled Snapshot copies to occur automatically. Clearing the check box disables scheduled, automatic Snapshot copies.
Number of Scheduled Snapshots to Keep: This specifies how many automatic Snapshot copies are kept according to the Snapshot schedule.
Hourly Snapshot Schedule: This specifies the times at which Snapshot copies are made. Data ONTAP creates these Snapshot copies on the hour or at specified hours, except if a weekly or nightly Snapshot copy is scheduled to occur at the same time.
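The equivalent schedule can be set from the command line; a sketch with illustrative values:
    # Keep 0 weekly, 2 nightly, and 6 hourly copies taken at 8:00, 12:00, 16:00, and 20:00
    system1> snap sched vol_sales 0 2 6@8,12,16,20
    # Set the volume Snapshot reserve to 20%
    system1> snap reserve vol_sales 20
    # Create a manual Snapshot copy
    system1> snap create vol_sales before_upgrade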
Snapshot copy access
To view the Snapshot directory, you will need to enable this feature in three areas. For CIFS users,
the Snapshot directory appears only at the root of shares. From the CIFS share, the “Show hidden
files and folders” option must be enabled. To confirm that this option is enabled, open My
Computer. From the window, select Tools > Folder Options > View. Confirm that Show hidden files
and folders is selected, then click OK.
SnapRestore
SnapRestore is a licensable product that allows you to revert corrupted data to the state it was in
when a particular Snapshot copy was taken. SnapRestore can revert a volume or a file. There are a
few benefits to using SnapRestore in conjunction with Snapshot copies: SnapRestore copies volumes
faster than you can do so manually, restores faster than you can restore from tape backup, and uses
less disk space than if you restore from Snapshot copies.
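A minimal sketch of both kinds of revert (volume, Snapshot, and file names illustrative; the SnapRestore license is assumed):
    # Revert an entire volume to a previous Snapshot copy
    system1> snap restore -t vol -s nightly.0 vol_sales
    # Revert a single file instead of the whole volume
    system1> snap restore -t file -s nightly.0 /vol/vol_sales/proj1/report.doc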
Module five
Network protocols: NAS and SAN
Bob explains how Data ONTAP can simultaneously support both network-attached storage, or NAS,
and a storage area network, or SAN. This means that you only need to learn Data ONTAP rather than
the operating systems from multiple vendors. In a NAS environment, servers are connected to a
storage system by a standard Ethernet network and use standard file access protocols, such as CIFS
and NFS. There are two types of SAN environments: The first is Fibre Channel SAN, or FC SAN, and is
based on Fibre Channel technology. The second, iSCSI, is based on TCP/IP networking with SCSI data
transfer standards.
Ethernet
o Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). Ethernet is standardized as IEEE 802.3.
NAS (protocols)
o NAS protocols provide file-level computer data storage connected to a TCP/IP network, providing data access to heterogeneous network clients.
SAN (blocks)
o This is an architecture that attaches remote NetApp storage systems to servers in such a way that the devices appear to be locally attached to the operating system.
Corporate LAN
o This stands for corporate local area network.
Fibre Channel
o This is a network switch that is compatible with the Fibre Channel (FC) protocol. It allows for the creation of a Fibre Channel fabric, which is currently the core component of most SANs.
iSCSI
o iSCSI stands for Internet Small Computer System Interface and is an IP-based storage networking standard for linking data storage over TCP/IP networks.
Data access
Data ONTAP provides an infrastructure to manage files and user accounts. This infrastructure
includes the mapping of read and write permissions across the following protocols: NFS, CIFS, HTTP
and FTP. For a corporation like JAF, this means that employees can access their files from the
environments where they are most comfortable, and it is easy for engineers using UNIX workstations
to share files with marketing employees using Microsoft Windows.
NFS: The Network File System protocol allows UNIX and PC NFS clients to mount file systems to local mountpoints. The storage system supports NFS version 2, NFS version 3, NFS version 4, and NFS over
UDP and TCP. CIFS: Common Internet File System supports Windows 2000, Windows for
Workgroups, and Windows NT 4.0. HTTP: Hypertext Transfer Protocol enables Web browsers to
display files that are stored on the storage system. FTP: File Transfer Protocol enables UNIX clients to
remotely transfer files to and from the storage system.
Host files
A storage system may have multiple network interfaces. Each network interface is assigned a unique IP address. Each IP address is resolved to a recognizable host name in a process referred to as host name resolution. A host name must correspond to an IP address. This correspondence is mapped in a file named hosts, located in the /etc directory. With host names, users can access a storage system without needing to know specific network details, such as the IP address of the storage system.
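A sketch of what such a hosts file might contain (addresses and names illustrative):
    # /etc/hosts on the storage system
    127.0.0.1     localhost
    192.168.1.10  system1 system1-e0a
    192.168.1.11  system1-e0b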
Virtual local area network
A VLAN is a switched network created on an Ethernet adapter. VLANs can be logically segmented
without regard for the physical locations of the users. For example, you can group VLANs by
function, applications, or even by project team.
Some of the advantages of creating a VLAN include ease of administration across multiple networks,
the ability to confine broadcast domains, and a reduction of network traffic. VLANs are created to
provide the services traditionally provided by routers in LAN configurations. VLANs address issues
such as scalability, security, and network management by allowing network administrators to define
small virtual networks within a larger physical network. Routers in VLAN topologies provide
broadcast filtering, security, and traffic flow management.
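A sketch of creating VLANs at the 7-Mode console (interface and VLAN IDs illustrative):
    # Create VLANs 10 and 20 on interface e0a
    system1> vlan create e0a 10 20
    # Each VLAN appears as its own interface, such as e0a-10
    system1> ifconfig e0a-10 192.168.10.5 netmask 255.255.255.0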
Interface groups
Data ONTAP connects with networks through physical interfaces, or links. The most common
interface is an Ethernet port, such as e0a, e0b, e0c, or e0d. Data ONTAP has supported IEEE 802.3ad
link aggregation for some time now. This standard allows multiple network interfaces to be
combined into one interface group. In the example, e0a, e0b, e0c, and e0d have been aggregated or
combined into one interface group called grp1. After being created, this group is indistinguishable
from a physical network interface. This feature is referred to as an interface group.
Interface groups provide several advantages over individual network interfaces, such as:
 Higher throughput for clients. Clients can refer to multiple interfaces using one name while benefiting from the throughput of multiple interfaces.
 Fault tolerance. If one interface fails within the interface group, your storage system can still stay connected to the network without reconfiguring clients.
 No single point of failure. If the physical interfaces in the interface group are connected to different switches, as in this example, then if one switch goes down, your storage system will remain connected to the network through the other switch.
There are three types of interface groups you can create: single-mode, multimode (static), or
multimode (dynamic). In the single-mode interface group, only one of the interfaces in the interface
group is active. The other interfaces are on standby, ready to take over if the active interface fails. All
interfaces in a single-mode interface group share a common MAC address. Called simply "multi" in
the interface group command, the multimode static interface group implementation complies with
the IEEE 802.3ad static standard, while a multimode dynamic interface group is compliant with the
IEEE 802.3ad dynamic standard, also called Link Aggregation Control Protocol, or LACP. Dynamic
multimode interface groups can detect not only the loss of link status, but also a loss of data flow.
However, a compatible switch must be used to implement the dynamic multimode configuration. In
multimode interface groups, all interfaces in the interface group are active and share a single MAC
address. This logical aggregation of interfaces provides higher throughput than a single-mode
interface group. Several load-balancing options are available to distribute traffic among the
interfaces of a multimode interface group. Please see the Data ONTAP 7.3 Administration course and
Data ONTAP 8.2 7-Mode Network Management Guide for more information on load-balancing
techniques available for multimode interface groups. Do not mix interfaces of different speeds or
media in the same multimode interface group.
Second-level interface groups
It is possible to nest interface groups. A storage administrator can create two multimode static
interface groups such as interface group1 and interface group2 in this diagram. Then we will assume
that you wish to configure interface group2 as standby if interface group1 fails. The storage
administrator can configure interface group3 as a single-mode interface group that uses primarily
interface group1 or interface group2 (chosen at random) and allow the other interface group to be
the standby interface. NOTE: LACP or multimode dynamic interface groups cannot participate in a
second-level interface group.
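A sketch of such a nested configuration, assuming the 7-Mode ifgrp command accepts first-level groups as members (port and group names illustrative):
    # Two first-level multimode groups
    system1> ifgrp create multi grp1 e0a e0b
    system1> ifgrp create multi grp2 e0c e0d
    # A second-level single-mode group that fails over between them
    system1> ifgrp create single grp3 grp1 grp2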
Finally, interface groups can provide network reliability to an active-active controller configuration.
In an active-active controller configuration, a pair of storage systems are configured so that the
storage can be accessed even if one of the storage systems fails. With a second-level interface group
connected in a single-mode configuration, storage connectivity can be maintained even if one of the
switches fails, thereby preventing excessive failovers. Therefore, by using an active-active controller
configuration along with second-level interface groups, a fully redundant storage system
connectivity architecture can be created. In this example, first-level 1 in second-level 1 connects
storage system 1 to the network through switch 1. First-level 2 in second-level 1 connects storage
system 1 to the network through switch 2. First-level 3 in second-level 2 connects storage system 2
to the network through switch 1. First-level 4 in second-level 2 connects storage system 2 to the
network through switch 2. All first-level interface groups can be configured as either single-mode or
multimode. Because second-level 1 and second-level 2 are both single-mode interface groups,
first-level 2 and first-level 3 are in standby mode, and first-level 1 and first-level 4 are the primary connections for their respective storage systems. If one of the switches fails, the following happens:
 If switch 1 fails, first-level 2 and first-level 4 maintain the connection for their storage systems through switch 2.
 If switch 2 fails, first-level 1 and first-level 3 maintain the connection for their storage systems through switch 1.
This configuration, therefore, reduces the number of events that require a storage system takeover.
Storage area network
The purpose of a SAN is to provide a host access to storage in the same way it might access a local
hard drive. NetApp SANs support three protocols: Fibre Channel, or FC, Fibre Channel Over Ethernet,
or FCoE, and iSCSI. There are three parts of a SAN:
 The host, or initiator: The host initiates the read and write request.
 The fabric or network.
 The storage system, or target: The target receives the read and write request from the initiator.
Switch fabric 1
o An FC switch is a network switch compatible with FC. It is used to create an FC fabric, which is a network of FC devices. A second fabric can be created for redundancy.
Controller
o A controller is the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem.
FC HBA
o An FC HBA is the Fibre Channel host bus adapter that connects the node to the switch or to the disks.
LUN
o A LUN is a logical unit number.
In a Fibre Channel SAN, storage systems and hosts have host bus adapters, or HBAs, so they can be connected directly to each other or to FC switches. In an iSCSI SAN, storage systems and hosts have either HBAs, as in a Fibre Channel SAN, or standard Ethernet adapters with a software initiator driver. Each FC node is identified by a worldwide name, or WWN, and each iSCSI node by an iSCSI node name. These identifiers are used in an igroup to control access to specific LUNs.
LUNs and igroups
Now that you are more familiar with SAN, you want to begin working with it. The first step is to
create LUNs, the logical units that will contain the data on the storage system. Bob tells you that
working with SAN on NetApp is easy. He will begin by showing you how to create a LUN and then
how to access the LUN from the initiator.
Creating a LUN
Name
o The LUN name must start with a letter or an underscore if you choose to automatically create a volume. The designated volume name is truncated to 249 characters if the LUN name is longer than 249 characters. Any hyphen, left brace, right brace, or period in the LUN name will be replaced with an underscore in the volume name.
Description
o The LUN description is an optional attribute that you can use to specify additional information about the LUN.
Type
o The type specifies the contents of the LUN. You can select from AIX, HP-UX, Hyper-V, Linux, NetWare, OpenVMS, Solaris, Solaris EFI, VMware, Windows, Windows GPT, Windows 2008, and Xen.
Size
o The size of a LUN must be specified as an integer, with a unit of measure.
LUN container
o You can allow NetApp System Manager to create a new FlexVol volume container for this LUN, or you can specify a FlexVol volume or qtree container for this LUN.
Initiator mapping
o Specifies which initiators can have access to a LUN.
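A command-line sketch of the whole sequence above for an iSCSI Windows host (volume, group, and initiator names illustrative):
    # Create a 10 GB Windows-type LUN in an existing volume
    system1> lun create -s 10g -t windows /vol/vol_sales/lun0
    # Create an iSCSI initiator group and add the host's node name
    system1> igroup create -i -t windows ig_sales iqn.1991-05.com.microsoft:host1
    # Map the LUN to the initiator group as LUN ID 0
    system1> lun map /vol/vol_sales/lun0 ig_sales 0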
The purpose of SnapDrive
SnapDrive is an enterprise-class storage and data management solution available for Windows and
several UNIX platforms. SnapDrive simplifies the mapping and managing of NetApp storage to serve
data access to both IP and FC SAN infrastructure, as well as network-attached storage, or NAS,
protocols such as CIFS and NFS. Key SnapDrive functionality includes error-free application storage
provisioning, consistent data Snapshot copies, rapid application recovery, and the ability to easily
manage data with its server-centric approach. SnapDrive enables you to create and manage your
LUNs, making the storage available as local disks. SnapDrive software virtualizes and enhances
storage management by:
 Expanding storage dynamically, with no downtime
 Backing up and restoring data with integrated Snapshot technology
 Cloning and replicating production data online
SnapDrive is available for both Windows and UNIX® environments.
Creating, managing, and writing to LUNs on the storage system
Different types of network connections are used for creating and managing LUNs on the storage
system, compared to writing to the LUNs from the initiator host. SnapDrive uses Storage
Management for Data ONTAP over Ethernet to create and manage LUNs on the storage system from
the SnapDrive host. To enable the Windows host to write to a LUN created on the storage system,
you need to have either an iSCSI, FCoE or Fibre Channel SAN connection between the initiator host
and the target. Fibre Channel HBAs and iSCSI hardware initiators are managed in Windows using the
Windows Hardware Device Manager. Once the Fibre Channel or iSCSI HBA is installed, enabled, and
connected to the storage system through the Fibre Channel fabric or the Ethernet switch, SnapDrive
hosts will be able to write to LUNs on the storage system.
Review: NAS
Administrators face a variety of problems with direct-attached storage, or DAS, such as inefficient
data allocation, uncertain data availability, and an inability to guarantee that users are backing up
data. Using NetApp® network-attached storage, or NAS, can help by offering solutions for a variety
of workloads:
 Enterprise and database applications
 Windows® and UNIX® hosting of home directories
 Application and server virtualization
NetApp solutions support multiple file access protocols, such as NFS and CIFS, so you can consolidate file storage across your UNIX and Windows environments. There are three parts of a NAS:
 The client or host system, which sends read and write requests
 The TCP/IP network
 The server or storage system, which receives the read and write requests
NFS
o Network File System is a file system protocol for UNIX clients.
CIFS
o The Common Internet File System protocol is used to share files. CIFS is the method of transport for Windows shares.
NFS-UNIX
NFS is a UNIX-Based file system protocol that allows networked computers to share files. Both the
NFS and CIFS licenses are installed by NetApp. However, if you purchase your license after receiving
your storage system, use NetApp System Manager to activate the license. To access your storage
system resource, export the resource, such as a volume or qtree, from the storage system using the
wizard.
You can export and unexport file system paths on your storage system. Doing so makes them
available or unavailable, respectively, for mounting by NFS clients, including PC NFS and Web NFS
clients. The table shown includes information you will need to export your resource using NetApp
System Manager.
Export Name: This specifies the name of the NFS export.
Export Path: This specifies the location of the NFS export.
Anonymous User ID: This specifies the user ID (UID) for the root user on the client. The default value
is 65534.
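A sketch of the equivalent export from the command line (host and path illustrative):
    # Export a volume read-write to one host, with root access, and
    # make the rule persistent in /etc/exports
    system1> exportfs -p rw=adminhost,root=adminhost /vol/vol_sales
    # List the current exports
    system1> exportfs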
The BranchCache feature
The BranchCache feature is another important improvement in Server Message Block 2.1 that is
supported in Data ONTAP 8.1.1 operating in 7-Mode. The diagram on this slide illustrates the
problem that the BranchCache feature resolves. In this example, multiple clients in a remote office
location access data from a central office. A high-latency, low-bandwidth WAN connects the central
office to the remote office (also called a branch office). A low-latency, high-bandwidth LAN connects
all of the client machines in the branch office. Data is read by a client in the branch office from a
server in the central office across the WAN connection. A second client accesses the same central
office server and reads the same file across the WAN connection.
BranchCache operating modes
The BranchCache feature in the Windows 7 and Windows Server 2008 R2 operating systems can
increase the network responsiveness of centralized applications that are accessed from remote
offices, which gives users in remote offices an experience like that of working on a LAN. The
BranchCache feature also reduces WAN use. The BranchCache feature can operate in two modes:
distributed caching (which uses a peer-to-peer caching method) or hosted caching (which uses a
local centralized server caching method). When the BranchCache feature is enabled, a copy of data
that is accessed from the intranet web and file servers is cached locally within the branch office.
When another client on the same network requests the same file, the client downloads the file from
the local cache, rather than across the WAN.
Enabling the BranchCache Feature
The BranchCache feature is enabled and disabled at the vFiler unit level. After the feature is enabled
on a vFiler unit, the administrator must enable the feature at the share level by using a new share
property: -branchcache.
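As a sketch only (the share name is made up, and the vFiler-level option name is an assumption that should be verified against the Data ONTAP 8.1.1 documentation; the -branchcache share property comes from the text above):
options cifs.smb2_1.branch_cache.enable on   # assumed name of the vFiler-level enable option
cifs shares -change eng_share -branchcache   # turn the feature on for an existing share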
About qtrees
Bob reviews how qtrees enable you to partition your FlexVol volumes into smaller segments that you can manage individually. By default, there are no restrictions on how much disk space a qtree can use or how many files it can contain. In general, qtrees are similar to FlexVol volumes. However, they have the following key differences:
 Snapshot copies can be enabled or disabled for individual volumes, but not for individual qtrees.
 Qtrees do not support space reservations or space guarantees.
There are several reasons for using qtrees; a short command sketch follows the list.
Quotas: You can limit the amount of data used by a particular project by placing all of that project's
files into a qtree and applying a tree quota to the qtree.
Backups: You can use qtrees to keep your backups more modular, to add flexibility to backup
schedules, or to limit the size of each backup to one tape.
Security style: If you have a project where the members of the project use Windows files and
applications, it is best to create a qtree with the security style NTFS. This allows you to create other
qtrees with different security styles in the same volumes.
CIFS oplocks settings: If you have a project using a database that requires CIFS opportunistic locks,
known as oplocks, to be off, you can set CIFS oplocks to Off for that project's qtree, while allowing
other projects to retain CIFS oplocks.
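To make these reasons concrete, here is a sketch using the standard 7-Mode qtree commands (the volume and qtree names are illustrative):
qtree create /vol/vol1/proj_x            # create a qtree inside vol1
qtree security /vol/vol1/proj_x ntfs     # give the project qtree the NTFS security style
qtree oplocks /vol/vol1/proj_x disable   # turn CIFS oplocks off for this qtree only
qtree status vol1                        # confirm the qtree settings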
Every qtree and volume has a security style setting. This setting determines whether files in a qtree
can use Windows NT or UNIX security. Each file defaults to the security style most recently used to
set permissions.
NTFS (Windows): Files and directories have Windows NT file-level permission settings. To use NTFS
security, the storage system must be licensed for CIFS. If the change is from a mixed qtree, Windows NT permissions determine file access for a file that had Windows NT permissions. Otherwise, UNIX-style permission bits determine file access for files created before the change.
UNIX: Files and directories have UNIX permissions. The storage system disregards any Windows NT
permissions established previously.
Mixed: A file or directory can have either Windows NT or UNIX permissions. If NTFS permissions on a
file are changed, the storage system recomputes UNIX permissions on that file. If UNIX permissions
or ownership on a file are changed, the storage system deletes any NTFS permissions on that file.
Quotas
Quota rules
Bob explains that you can also specify quotas in the /etc/quotas file. It may look a little intimidating,
but editing it is simple. You simply map or mount the /vol/vol0 share and use the operating system
editor. There are a few things to keep in mind:
 New users or groups created after the default quota is in effect will have the default value.
 Users or groups that do not have a specific quota defined will have the default value.
Quota Targets and Types: Quota types include user, group, and tree (for qtree). If the type is user,
the first column, Target, will specify a user name or * to indicate all users. If the type is group, then
the target column will have the name or GID for the group. Quotas are based on a Windows account name, UNIX user ID (UID), or group ID (GID) in both NFS and CIFS environments. Tree quotas do not require UIDs or GIDs. If you implement only tree quotas, it is not necessary to maintain the /etc/passwd and /etc/group files (or NIS services).
Disk Column: The Disk field lists the maximum disk space allocated to the quota target. This hard
limit cannot be exceeded: if the limit is reached, messages are sent to the user and console and
SNMP traps are created. Use abbreviations (G, M, or K) for gigabytes, megabytes, and kilobytes. You
can enter either uppercase or lowercase letters. If you omit the letter, the system assumes K
(kilobytes). Do not leave this field blank: enter a hyphen (-) to track usage without imposing a limit.
Files Column: The Files field specifies the maximum number of files the quota target can use. A blank
or a hyphen (-) in this field indicates that the number of files is not part of the quota and is to be
tracked only. You can use abbreviations (uppercase or lowercase), or you can enter an absolute value, such as 15000.
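Putting the columns together, a few illustrative /etc/quotas entries (the targets and limits are made up) might look like this:
#Target          type             disk   files
*                user@/vol/home   50M    10K   # default quota for all users on the home volume
jsmith           user@/vol/home   100M   -     # larger disk limit; file count tracked only
/vol/vol1/proj   tree             10G    -     # tree quota applied to a qtree
After editing the file, run quota on vol_name (or quota resize vol_name if quotas are already active on the volume) so that the changes take effect.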
Module 8
Accessing the storage system console
Bob explains that each storage system has an ASCII terminal console that enables you to monitor the
boot process, helps you to configure the appliance, and allows you to perform hardware
maintenance, if necessary. Once you identify the console port on the storage system, you will need
to connect it to a console server or a system such as your laptop. If you are using a laptop, you will need to configure its terminal-emulation settings. Once the cable is connected from a Windows system, you can use
HyperTerminal to connect. Use the following communication parameters to communicate with the
storage system.
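The parameter table is not reproduced here; the settings typically documented for a NetApp serial console (verify against your model's documentation) are:
- Bits per second: 9600
- Data bits: 8
- Parity: None
- Stop bits: 1
- Flow control: None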
Command Syntax
The example command vol create marcom_hq aggr1 50g breaks down as follows:
- vol
o Use the vol command to manage volumes, display volume status, and copy a volume.
- create
o The create subcommand creates a new volume.
- marcom_hq
o The marcom_hq argument specifies the volume name.
- aggr1 50g
o These arguments specify that the volume is to be created in the aggregate called aggr1 with an initial size of 50 gigabytes.
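Assembled, the full command and two follow-up checks (the extra commands are standard 7-Mode volume commands) look like this:
vol create marcom_hq aggr1 50g   # create the 50 GB volume in aggr1
vol status marcom_hq             # verify that the volume is online
vol size marcom_hq +10g          # later, grow the FlexVol volume by 10 GB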
Users, Roles, and Groups
- Capabilities
o A capability is a privilege granted to a role; examples include login and security rights as well as CLI and API rights.
- Roles
o A role is a set of capabilities that can be assigned to a group. Roles for administrative accounts include:
 Root: Grants all possible capabilities
 Admin: Grants all CLI, application programming interface (API), login, and security capabilities
 Power: Grants the ability to make CIFS and NFS API calls
 None: Grants no administrative capabilities
- Groups
o A group is a collection of users that can be granted one or more roles. Groups can be predefined, created, or modified. Groups and users must have unique names.
- Users
o A user is an account that is authenticated on the storage system. User accounts can be placed in groups. Roles and capabilities cannot be assigned to users directly.
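As an illustrative sketch (the role, group, and user names are made up, and the capability list is only an example), the useradmin command ties these concepts together:
useradmin role add helpdesk -a login-http-admin,api-system-get-version   # a role is a set of capabilities
useradmin group add support -r helpdesk                                  # the group is granted the role
useradmin user add bsmith -g support                                     # the user is placed in the group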
Module 9
Storage system maintenance
AutoSupport (ASUP)
Data ONTAP has a built-in “call home” feature referred to as AutoSupport. AutoSupport is an alerting and data-collection mechanism that sends messages back to NetApp when potential storage system problems are detected. AutoSupport works in the background and is transparent to the end user. In addition, weekly messages are automatically sent from your storage system to the NetApp support team.
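A minimal AutoSupport configuration from the CLI (the addresses are placeholders) might be:
options autosupport.enable on                  # turn AutoSupport on
options autosupport.to admin@example.com       # local recipients for notifications
options autosupport.mailhost smtp.example.com  # SMTP relay used to send the messages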
Message logging
You can monitor the status and operation of managed storage systems by using the Event
Management System, or EMS, output in syslog. Events are generated automatically when a
predefined condition occurs or when an object crosses a threshold. When an event occurs, status
alert messages may be generated as a result of the event. The EMS collects event data from various
parts of the Data ONTAP kernel and provides a set of filtering and event forwarding mechanisms. By
default, all system messages are sent to the console and logged in a message file. Messages can be
sent to the console, a file, or a remote system. The message file can be accessed from an NFS or CIFS
client, as well as from NetApp System Manager.
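For example, a couple of illustrative /etc/syslog.conf lines (the remote host is a placeholder) that implement this routing:
*.info   /etc/messages            # log informational and higher messages to the message file
*.err    @loghost.example.com     # forward errors to a remote syslog host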
Degraded mode
When a disk is reconstructing, Data ONTAP will calculate any lost data using parity and serve it to the
client. The client will not experience any loss in service while a spare is being reconstructed.
Degraded mode occurs when there is either a single disk failure in a RAID 4 group, or a double disk
failure in a RAID-DP group, and there are no spares available. Your storage system will operate in this
mode for 24 hours by default. Data ONTAP continues to serve data using parity, but the system is running at risk: a further disk failure could mean data loss. To reduce that risk, Data ONTAP shuts the system down after 24 hours.
- Can users access data when a disk is reconstructing?
o When a disk is reconstructing, Data ONTAP calculates any lost data using parity and serves it to the client.
- What causes degraded mode to occur?
o Degraded mode occurs when there is either a single disk failure in a RAID 4 group or a double disk failure in a RAID-DP group, and there are no spares available.
- How long can a system operate in degraded mode?
o Your storage system operates in this mode for 24 hours by default. If the failed disk is not replaced within 24 hours, the storage system shuts down.
- Does Data ONTAP continue to serve data while in degraded mode?
o Data ONTAP continues to serve data using parity, but at a risk. To reduce the risk, Data ONTAP shuts down the system after 24 hours.
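Two standard 7-Mode commands are useful when checking for these conditions:
vol status -r          # show RAID status, including failed, reconstructing, or degraded disks
options raid.timeout   # display (or change) the degraded-mode shutdown timeout, 24 hours by default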
Responding to degraded mode
When a system goes into degraded mode, an e-mail is sent to the system administrator and NetApp
Global Support. At that time, it is necessary to physically replace the failed disk; a replacement disk typically arrives within 24 hours. It is highly recommended that you contact the NetApp Global Support team to help you with this process.
Disk scrubbing
Disk scrubbing checks the blocks of all disks for media errors and parity consistency. If an error is found, Data ONTAP corrects it using parity. Disk scrubbing is initiated in two ways: automatically or manually.
 By default, the automatic option is on, and disk scrubbing starts at 1:00 a.m. on Sunday morning and ends six hours later at 7:00 a.m.
 You can initiate a manual scrub at any time from the CLI.
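A hedged sketch of the relevant commands:
options raid.scrub.enable on   # leave the weekly automatic scrub turned on
disk scrub start               # kick off a manual scrub immediately
disk scrub stop                # stop a scrub that is in progress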
Module 10
Storage space management
Space guarantee settings for volumes
Each flexible volume has a space guarantee attribute that controls how its storage is managed in
relation to its containing aggregate. Space guarantees on a FlexVol volume ensure that writes to a
specified FlexVol volume, or writes to files with space reservations enabled, do not fail because of
lack of available space in the containing aggregate. Volumes can be full-provisioned volumes,
meaning that space guarantees are set, or thin-provisioned volumes. Full-provisioned volumes
require administrators to reserve space within the aggregate. This can be simpler to manage,
because it means that you are guaranteed to have the room in the volumes that is expected. This is
the default setting. Thin-provisioned volumes use the space guarantee none, which means that you can present a volume as bigger than the space actually available. Users may therefore believe the volume to be bigger than the amount of space that backs it. For administrators, it means that volumes must be monitored, so that physical space can be added to the containing aggregate before the users actually need it. This makes thin-provisioned volumes more complex to manage.
Volume (default): This guarantees that there will be enough data blocks available in the containing aggregate to meet the entire flexible volume’s needs.
File: This guarantees that there will be enough blocks in the containing aggregate to meet the needs of the specified files in the flexible volume.
None: This provides no guarantee that there will be enough blocks in the containing aggregate to meet the flexible volume’s needs.
Volume space
Available Snapshot™ Reserve: The Available Snapshot Reserve illustrates in graph format the volume
space available for Snapshot copies.
Volume: The Volume area provides the total capacity of the volume and the space reserved for
Snapshot copies.
Available: The Available area provides the amount of space that is currently available on the volume
for data and for Snapshot copies, as well as the total space currently available on the volume.
Used: The Used area provides the amount of space on the volume that has been used for data and
for Snapshot copies and the total volume space that has been used.
Monitoring volume space from the CLI
The df (disk free) command shows you information about an aggregate or a
volume. You can also use the aggr show_space command. This command displays the space usage in
an aggregate. Unlike df, this command shows the space usage for each flexible volume within an
aggregate. If the aggregate name is specified, aggr show_space only runs on the corresponding
aggregate; otherwise it reports space usage on all the aggregates. Bob reminds you that all sizes are
reported in 1024-byte blocks, unless otherwise requested by one of the -h, -k, -m, -g, or -t options.
The -k, -m, -g, and -t options scale each size-related field of the output to be expressed in kilobytes,
megabytes, gigabytes, or terabytes respectively.
You can also use the -h option with the aggr show_space command. Try the command again using
the -h option. Notice that the output shows you the used and available space and the volume space
reserved for Snapshot copies just as NetApp System Manager showed you. You can use these same
commands to look at volume information.
Try using the df command to see information about a volume.
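For example, building on the volume created earlier (the names are illustrative):
df -h /vol/marcom_hq       # usage for one volume, including its snapshot reserve, in human-readable units
aggr show_space -h aggr1   # space used by each flexible volume inside aggr1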
Improving storage utilization
NetApp deduplication technology is a core feature of Data ONTAP that helps administrators utilize
disk space more effectively. Deduplication works by eliminating duplicate blocks of data. It runs as a background process and is transparent to clients. The space-saving ratios vary based on the type of data, but administrators can expect to see savings of up to 20% for normal data and up to 40% for backup data. To estimate your own savings, please visit
http://www.dedupecalc.com.
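A hedged sketch of turning deduplication on for a volume (the path is illustrative):
sis on /vol/marcom_hq         # enable deduplication on the volume
sis start -s /vol/marcom_hq   # scan the data that already exists on the volume
sis status /vol/marcom_hq     # monitor progress
df -s /vol/marcom_hq          # report the space saved by deduplication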
Deduplication in action
NetApp deduplication technology helps administrators utilize disk space more effectively.
Deduplication works by removing the duplicate data blocks in the WAFL® (Write Anywhere File
Layout) file system. Essentially, deduplication stores only unique blocks in the flexible volume and
creates a small amount of additional metadata in the process. This diagram shows an example of a
file called presentation.ppt. At some point, the owner of the file makes a copy of the file and saves it
in a new folder. The files now take up twice the space on the disk. The copy is edited and 10 new
blocks of data are added to the file. NetApp deduplication scans the file system and finds the
identical blocks. It is able to eliminate the duplicate blocks by creating pointers to a single version of
the blocks. This allows the two versions of the file that would take up to 70 blocks to be stored with
only 40 blocks. That is a savings of 30 blocks of data. Because, in real life, files are much larger than
our example, and because deduplication can find and eliminate multiple versions of the same file,
deduplication provides savings of up to 20% on nonbackup data and up to 40% disk savings on
backup data.
How Netapp deduplication works
Data ONTAP creates checksums, or fingerprints, for each data block as it is being written. When the
deduplication process is initiated, it gathers all of the fingerprints into the fingerprint database. It
also creates a change log file to track all blocks changed or written in the volume. Next,
deduplication sorts fingerprint records and looks for data blocks with the same fingerprints. When
duplicate fingerprints are found, the deduplication process compares those data blocks byte for byte
to ensure the data is identical. When the duplicate blocks are verified, the index file pointers, also
known as inodes, of the duplicate data blocks are changed to point to one common data block and
the duplicate blocks are returned to the free block pool.