Terms of Service and End User License Agreement ("EULA")
The following terms of service and end user license agreement ("EULA") contain important terms
and conditions of the license that you, as a User (defined below), are being granted by Arvato Supply
Chain Solutions SE ("Company") to access Skillpipe (as defined below) for the purpose of accessing
training courses ("Content").
Please read this EULA carefully before accessing the Content.
1. Grant of Use. Subject to the terms of this EULA, Company grants to User the right to
access and use this course and the associated Content solely for their personal training
and education ("License"). "User" means a person who has complied with any
registration requirements reasonably required by Company and has been issued
personal and unique credentials. Company and the author of the Content retain sole and
exclusive ownership of, and all rights, title, and interest in, the Content, including,
without limitation (a) any copyright, trademark, patent, trade secret, or other intellectual
property embodied or associated therein, and (b) all derivative works and copies
thereof. Except as expressly provided, nothing in this EULA shall be construed to convey
any intellectual property rights to User.
2. Acceptable Use. User shall use the Content exclusively for authorized and legal
purposes, consistent with all applicable laws and regulations. Company may suspend or
terminate any User’s access to the Content in the event that Company reasonably
determines that such User has violated the terms and conditions of this License.
3. Restrictions. User shall not itself, or through any third party (i) sell, resell, distribute,
host, lease, rent, license or sublicense, in whole or in part, the Content or access
thereto; (ii) decipher, decompile, disassemble, reverse assemble, modify, translate,
reverse engineer or otherwise attempt to derive source code, algorithms, tags,
specifications, architecture, structure or other elements of the Content, in whole or in
part, for competitive purposes or otherwise; (iii) allow access to, provide, divulge or
make available the Content to any third party; (iv) write or develop any derivative works
based upon the Content; or modify, adapt, translate or otherwise make any changes to
the Content or any part thereof; or (v) remove from any portion of the Content any
identification, patent, copyright, trademark or other notices.
4. Term and Termination. This EULA shall remain in effect for as long as User maintains
an account. The License to any Content shall be limited to the time that such Content is
available to User. User may terminate this EULA at any time by sending written notice to
info@waypoint.ws. Company may terminate this EULA if in its sole discretion Company
determines that User has violated the terms of this EULA or the License. Upon
termination, User shall have no further rights to access or otherwise utilize the Content.
All terms of this EULA which naturally survive termination shall remain in full force and
effect after termination.
5. Limited Warranty. ALL CONTENT IS PROVIDED ON AN "AS IS" AND "AS AVAILABLE" BASIS.
COMPANY, ITS LICENSORS, DATA CENTER AND SUPPLIERS EXPRESSLY DISCLAIM TO
THE MAXIMUM EXTENT PERMITTED BY LAW, ALL WARRANTIES, EXPRESSED OR
IMPLIED, ORAL OR WRITTEN, INCLUDING, WITHOUT LIMITATION, (i) ANY WARRANTY
THAT ANY SOFTWARE, DATABASE, CONTENT, DELIVERABLES OR PROFESSIONAL
SERVICES ARE ERROR-FREE, ACCURATE OR RELIABLE OR WILL OPERATE WITHOUT
INTERRUPTION OR THAT ALL ERRORS WILL BE CORRECTED OR WILL COMPLY WITH
ANY LAW, RULE OR REGULATION, (ii) ANY AND ALL IMPLIED WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND
NONINFRINGEMENT, AND (iii) ANY AND ALL IMPLIED WARRANTIES ARISING FROM
STATUTE, COURSE OF DEALING, COURSE OF PERFORMANCE OR USAGE OF TRADE. NO
ADVICE, STATEMENT OR INFORMATION GIVEN BY COMPANY, ITS AFFILIATES,
CONTRACTORS OR EMPLOYEES SHALL CREATE OR CHANGE ANY WARRANTY
PROVIDED HEREIN. USER EXPRESSLY ACKNOWLEDGES AND AGREES THAT THE
CONTENT IS NOT DESIGNED OR INTENDED TO MEET ALL OF ITS OR ITS USERS’
TRAINING AND EDUCATIONAL NEEDS OR REQUIREMENTS, INCLUDING TRAINING AND
EDUCATION THAT IS REQUIRED UNDER APPLICABLE LAWS. USER ASSUMES ALL
RESPONSIBILITY FOR THE SELECTION OF THE SERVICES PROVIDED HEREUNDER TO
ACHIEVE ITS INTENDED RESULTS.
6. Limitation of Liability. TO THE FULLEST EXTENT PERMITTED BY LAW, COMPANY’S
TOTAL LIABILITY (INCLUDING ATTORNEYS’ FEES AWARDED UNDER THIS EULA) TO
USERS FOR ANY CLAIM BY USER OR ANY THIRD PARTIES UNDER THIS EULA, WILL BE
LIMITED TO THE FEES PAID FOR SUCH ITEMS THAT ARE THE SUBJECT MATTER OF
THE CLAIM. IN NO EVENT WILL ANY PARTY, ITS LICENSORS OR SUPPLIERS BE LIABLE
TO USER OR OTHER THIRD PARTIES FOR ANY INDIRECT, SPECIAL, INCIDENTAL,
EXEMPLARY, PUNITIVE, TREBLE OR CONSEQUENTIAL DAMAGES (INCLUDING,
WITHOUT LIMITATION, LOSS OF BUSINESS, REVENUE, PROFITS, STAFF TIME,
GOODWILL, USE, DATA, OR OTHER ECONOMIC ADVANTAGE), WHETHER BASED ON
BREACH OF CONTRACT, BREACH OF WARRANTY, TORT (INCLUDING NEGLIGENCE),
PRODUCT LIABILITY OR OTHERWISE, WHETHER OR NOT PREVIOUSLY ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES.
7. Indemnification. User shall indemnify and hold Company, its affiliates, suppliers, data
center, employees and officers (an “Indemnified Party”) harmless from and against all
liability, claims, damages, fines, losses, and expenses (including reasonable attorneys' fees
and court costs, and the cost of enforcing this indemnity) suffered or incurred by
Company or any Indemnified Party arising out of, or in connection with (a) any act or
omission of User, (b) any material breach by User of any of the terms of this EULA; and
(c) any use or reliance by User of any portion of the Content, including all third-party
claims, causes of action, suits, and legal proceedings asserted against Company or an
Indemnified Party arising out of, or relating to, the use of or reliance by User on any
Content; provided, however, that User shall have no obligations under this section
related to a third-party claim that Skillpipe or the Content infringes such third party’s
intellectual property rights.
8. Governing Law and Venue. This EULA shall be governed by and construed in
accordance with the laws of the United States, and United States courts shall have
exclusive jurisdiction.
9. Notices. Any notice required under this EULA shall be sent to User at the email
provided by User during registration, and to Company at info@waypoint.ws.
Copyright
Microsoft and the trademarks listed at https://www.microsoft.com/en-us/legal/intellectualproperty/Trademarks/Usage/General.aspx are trademarks of the Microsoft group of companies. All other
trademarks are property of their respective owners.
Product number: 20740
Product name: Installation, Storage, and Compute with Windows Server
Version: 1.0
Released: 11/2022
Installation, Storage, and Compute with Windows Server
Formerly 20740, available on Courseware Marketplace as 55382AC
About this course
This entry-level Windows Server course is designed for IT professionals who have minimal
experience with Windows Server and who want to advance their knowledge and skills. It is intended
for professionals who will be responsible for managing the installation of Windows Server, along
with its storage and compute services, and who need to understand the scenarios, requirements,
and storage and compute options that are available and applicable to Windows Server.
By completing this course, you'll achieve the knowledge and skills to:
• Prepare and install Windows Server with Desktop Experience and as a Server Core installation, and plan a server upgrade and migration strategy.
• Describe the various storage options, including partition table formats, basic and dynamic disks, file systems, virtual hard disks, and drive hardware, and explain how to manage disks and volumes.
• Describe enterprise storage solutions and select the appropriate solution for a given situation.
• Implement and manage Storage Spaces and Data Deduplication.
• Install and configure Microsoft Hyper-V.
• Deploy, configure, and manage Windows and Hyper-V containers.
• Describe the high availability and disaster recovery technologies in Windows Server.
• Plan, create, and manage a failover cluster.
• Implement failover clustering for Hyper-V virtual machines.
• Create and manage deployment images.
• Manage, monitor, and maintain virtual machine installations.
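Many of these tasks can be automated with Windows PowerShell, which the course uses throughout its demonstrations and labs. As a brief taste of what's ahead, here is a minimal, illustrative sketch (not taken from the course labs; the VM name and VHD path are placeholders) that installs the Hyper-V role and creates a virtual machine:

    # Illustrative sketch only; run in an elevated PowerShell session on a
    # Windows Server host that supports virtualization. The VM name and VHD
    # path are placeholders, not values from the course labs.

    # Install the Hyper-V role and its management tools (a restart is required).
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # After the restart, create a generation 2 VM with 2 GB of startup memory
    # and a new 60 GB virtual hard disk, then start it.
    New-VM -Name "SVR-TEST01" -Generation 2 -MemoryStartupBytes 2GB `
        -NewVHDPath "D:\VMs\SVR-TEST01.vhdx" -NewVHDSizeBytes 60GB
    Start-VM -Name "SVR-TEST01"

Modules 1 and 5 cover installation and Hyper-V management tasks like these in depth.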
Target audience
This course is intended for IT professionals who have some experience working with Windows
Server, and who are looking for a single five-day course that covers storage and compute
technologies in Windows Server. This course will help them update their knowledge and skills
related to storage and compute for Windows Server.
Candidates suitable for this course would be:
• Windows Server administrators who are relatively new to Windows Server administration and related technologies, and who want to learn more about the storage and compute features in Windows Server.
• IT professionals with general IT knowledge, who are looking to gain knowledge about Windows Server, especially around storage and compute technologies in Windows Server.
Recommended prerequisites
Before attending this course, students must have:
• A basic understanding of networking fundamentals.
• An awareness and understanding of security best practices.
• A basic understanding of virtualization.
• An understanding of basic AD DS concepts.
• Basic knowledge of server hardware.
• Experience supporting and configuring Windows client operating systems, such as Windows 10 or later.
Additionally, students would benefit from having some previous Windows Server operating system
experience, such as experience as a Windows Server systems administrator.
Contents
Copyright
Installation, Storage, and Compute with Windows Server
About this course
Target audience
Recommended prerequisites
Contents
Author biographies
Damir Dizdarevic, Subject Matter Expert and Content Developer
Andrew Warren, Content Developer and Subject Matter Expert
Course outline
Module 1 Install, upgrade, and migrate servers and workloads
Lesson 1 Introducing Windows Server
Lesson 2 Prepare and install Server Core
Lesson 3 Prepare for upgrades and migrations
Lesson 4 Windows Server activation models
Lab 1 Install and configure Windows Server
Module 2 Configure local storage
Lesson 1 Manage disks in Windows Server
Lesson 2 Manage volumes in Windows Server
Lab 2 Manage disks and volumes in Windows Server
Module 3 Implement enterprise storage solutions
Lesson 1 Overview of direct-attached storage, network-attached storage, and storage area networks
Lesson 2 Compare Fibre Channel, iSCSI, and Fibre Channel over Ethernet
Lesson 3 Understanding iSNS, data center bridging, and MPIO
Lesson 4 Configure sharing in Windows Server
Lab 3 Plan and configure storage technologies and components
Module 4 Implement Storage Spaces and Data Deduplication
Lesson 1 Implement Storage Spaces
Lesson 2 Manage Storage Spaces
Lesson 3 Implement Data Deduplication
Lab 4 Implement Storage Spaces
Lab 5 Implement Data Deduplication
Module 5 Install and configure Hyper-V and virtual machines
Lesson 1 Overview of Hyper-V
Lesson 2 Install Hyper-V
Lesson 3 Configure storage on Hyper-V host servers
Lesson 4 Configure networking on Hyper-V host servers
Lesson 5 Configure Hyper-V VMs
Lesson 6 Manage Hyper-V VMs
Lab 6 Install and configure Hyper-V
Module 6 Deploy and manage Windows Server and Hyper-V containers
Lesson 1 Overview of containers in Windows Server
Lesson 2 Prepare for containers deployment
Lesson 3 Install, configure, and manage containers
Lab 7 Install and configure containers
Module 7 Overview of high availability and disaster recovery
Lesson 1 Define levels of availability
Lesson 2 Plan high availability and disaster recovery solutions with Hyper-V VMs
Lesson 3 Network Load Balancing overview
Lesson 4 Back up and restore with Windows Server Backup
Lesson 5 High availability with failover clustering in Windows Server
Lab 8 Plan and implement a high-availability and disaster-recovery solution
Module 8 Implement and manage failover clustering
Lesson 1 Plan for a failover cluster
Lesson 2 Create and configure a new failover cluster
Lesson 3 Maintain a failover cluster
Lesson 4 Troubleshoot a failover cluster
Lab 9 Implement a failover cluster
Lab 10 Manage a failover cluster
Module 9 Implement failover clustering for Hyper-V virtual machines
Lesson 1 Overview of integrating Hyper-V in Windows Server with failover clustering
Lesson 2 Implement and maintain Hyper-V VMs on failover clusters
Lesson 3 Key features for VMs in a clustered environment
Lab 11 Implement failover clustering with Hyper-V
Module 10 Create and manage deployment images
Lesson 1 Introduction to deployment images
Lesson 2 Create and manage deployment images by using the MDT
Lesson 3 VM environments for different workloads
Lab 12 Use the MDT to deploy Windows Server
Module 11 Maintain and monitor Windows Server installations
Lesson 1 WSUS overview and deployment options
Lesson 2 Update management process with WSUS
Lesson 3 Overview of PowerShell Desired State Configuration
Lesson 4 Overview of Windows Server monitoring tools
Lesson 5 Use Performance Monitor
Lesson 6 Monitor Event Logs
Lab 13 Implementing WSUS and deploying updates
Module 1: Install, upgrade, and migrate servers and workloads
Lesson 1: Introducing Windows Server
Overview of the Windows Server OS
How to select a suitable Windows Server edition
Hardware requirements for Windows Server
Overview of the installation process and options for Windows Server
How can you manage Windows Server remotely?
Use Windows PowerShell to manage Windows Servers
Windows Server updates and servicing channels
Lesson 2: Prepare and install Server Core
Server Core overview
Plan for Server Core deployment
Demonstration: Install Server Core
Configure Server Core after installation
Manage and service Server Core
Lesson 3: Prepare for upgrades and migrations
In-place upgrades vs. server migration
When to perform an in-place upgrade
Migration benefits
Migrate server roles and data within a domain
Migrate server roles across domains or forests
Solution accelerators for migrating to the latest Windows Server edition
Considerations and recommendations for server consolidation
Lesson 4: Windows Server activation models
Licensing models overview
What is Windows Server activation?
Lab 1: Install and configure Windows Server
Knowledge check
Learn more
Module 2: Configure local storage
Lesson 1: Manage disks in Windows Server
Select a partition table format
Select a disk type
Select a file system
Implement ReFS
Demonstration: Configure ReFS
Use .vhd and .vhdx file types
Lesson 2: Manage volumes in Windows Server
What are disk volumes?
Options for managing volumes
Demonstration: Manage volumes
Extend and shrink a volume
What is RAID?
RAID levels overview
Lab 2: Manage disks and volumes in Windows Server
Knowledge check
Learn more
Module 3: Implement enterprise storage solutions
Lesson 1: Overview of direct-attached storage, network-attached storage, and storage area networks
What is DAS?
What is NAS?
What is a SAN?
Comparison and scenarios for usage
Block-level storage compared with file-level storage
Lesson 2: Compare Fibre Channel, iSCSI, and Fibre Channel over Ethernet
What is Fibre Channel?
Considerations for implementing Fibre Channel
What is iSCSI?
iSCSI components
Considerations for implementing iSCSI
iSCSI usage scenarios
Demonstration: Configure an iSCSI target
Lesson 3: Understanding iSNS, data center bridging, and MPIO
What is iSNS?
What is Data Center Bridging?
What is MPIO?
Demonstration: Configure MPIO
Lesson 4: Configure sharing in Windows Server
What is SMB?
How to configure SMB shares
Demonstration: Configure SMB shares by using Server Manager and Windows PowerShell
What is NFS?
How to configure NFS shares
Demonstration: Configure an NFS share by using Server Manager
Lab 3: Plan and configure storage technologies and components
Knowledge check
Module 4: Implement Storage Spaces and Data Deduplication
Lesson 1: Implement Storage Spaces
Enterprise storage needs
What is the Storage Spaces feature?
Components and features of Storage Spaces
Changes to file and storage services in Windows Server 2022
Storage Spaces usage scenarios
Provision Storage Spaces
Demonstration: Configure Storage Spaces
Discussion: Compare Storage Spaces to other storage solutions
Lesson 2: Manage Storage Spaces
Manage Storage Spaces
Manage disk failure with Storage Spaces
Storage pool expansion
Demonstration: Manage Storage Spaces by using Windows PowerShell
Event logs and performance counters
Lesson 3: Implement Data Deduplication
What is Data Deduplication?
Data Deduplication components
Deploy Data Deduplication
Demonstration: Implement Data Deduplication
Usage scenarios for Data Deduplication
Monitor and maintain Data Deduplication
Backup and restore considerations with Data Deduplication
Lab 4: Implement Storage Spaces
Lab 5: Implement Data Deduplication
Knowledge check
Module 5: Install and configure Hyper-V and virtual machines
Lesson 1: Overview of Hyper-V
What is Hyper-V?
Manage Hyper-V with Hyper-V Manager
Windows Server containers and Docker in Hyper-V
Lesson 2: Install Hyper-V
Prerequisites and requirements for installing Hyper-V
Demonstration: Install the Hyper-V role
Nested virtualization overview
Lesson 3: Configure storage on Hyper-V host servers
Storage options in Hyper-V
Considerations for VHD formats and types
Fibre Channel support in Hyper-V
Where to store VHDs?
Store VMs on SMB 3.0 shares
Demonstration: Manage storage in Hyper-V
Lesson 4: Configure networking on Hyper-V host servers
Types of Hyper-V virtual switches
Demonstration: Configure Hyper-V networks
Best practices for configuring Hyper-V virtual networks
Advanced networking features in Windows Server Hyper-V
Lesson 5: Configure Hyper-V VMs
What are VM configuration versions?
VM generation versions
Demonstration: Create a VM
VM settings
The Hot Adding feature in Hyper-V
Shielded VMs
Best practices for configuring VMs
Lesson 6: Manage Hyper-V VMs
Manage the VM state
Manage checkpoints
Demonstration: Create checkpoints
Import, export, and move VMs
PowerShell Direct overview
Demonstration: Use PowerShell Direct
Lab 6: Install and configure Hyper-V
Knowledge check
Module 6: Deploy and manage Windows Server and Hyper-V containers
Lesson 1: Overview of containers in Windows Server
What are containers?
Overview of Windows Server containers
Overview of Hyper-V containers
Usage scenarios
Installation requirements for containers
Lesson 2: Prepare for containers deployment
Prepare Windows Server containers
Prepare Hyper-V containers
Deploy package providers
Lesson 3: Install, configure, and manage containers
What is Docker?
Docker components
Usage scenarios
Demonstration: Deploy Docker Enterprise Edition and use Docker to pull an image
Overview of management with Docker
Overview of Docker Hub
Docker with Azure
Demonstration: Deploy containers by using Docker
Lab 7: Install and configure containers
Knowledge check
Module 7: Overview of high availability and disaster recovery
Lesson 1: Define levels of availability
What is high availability?
What is continuous availability?
What is business continuity?
Create a disaster-recovery plan
Highly available networking
Highly available storage
Highly available compute or hardware functions
Lesson 2: Plan high availability and disaster recovery with Hyper-V VMs
High availability considerations with Hyper-V VMs
Overview of live migration
Live migration requirements
Demonstration: Configure live migration (optional)
Provide high availability with Storage Migration
Demonstration: Configure Storage Migration (optional)
Overview of Hyper-V Replica
Plan for Hyper-V Replica
Implement Hyper-V Replica
Lesson 3: Network Load Balancing overview
What is Network Load Balancing?
Deployment requirements for NLB
Configuration options for NLB
Lesson 4: Back up and restore with Windows Server Backup
Overview of Windows Server Backup
Implement backup and restore
Lesson 5: High availability with failover clustering in Windows Server
What is failover clustering?
High availability with failover clustering
Clustering terminology and key components
Cluster quorum in Windows Server
Clustering roles
Lab 8: Plan and implement a high-availability and disaster-recovery solution
Knowledge check
Module 8: Implement and manage failover clustering
Lesson 1: Plan for a failover cluster
Prepare to implement failover clustering
Failover cluster storage
Hardware requirements for a failover-cluster implementation
Network requirements for a failover-cluster implementation
Demonstration: Verify a network adapter's RSS and RDMA compatibility on an SMB server
Infrastructure and software requirements for a failover cluster
Security and AD DS considerations
Quorum in Windows Server 2022
Plan for migrating and upgrading failover clusters
Plan for multi-site (stretched) clusters
Lesson 2: Create and configure a new failover cluster
The validation wizard and the cluster support-policy requirements
The process for creating a failover cluster
Demonstration: Create a failover cluster and review the validation wizard
Configure roles
Demonstration: Create a general file-server failover cluster
Manage failover clusters
Configure cluster properties
Configure failover and failback
Configure and manage cluster storage
Configure networking
Configure quorum options
Demonstration: Configure the quorum
Lesson 3: Maintain a failover cluster
Monitor failover clusters
Back up and restore failover-cluster configuration
Manage and troubleshoot failover clusters
Manage cluster-network heartbeat traffic
What is Cluster-Aware Updating?
Demonstration: Configure CAU
Lesson 4: Troubleshoot a failover cluster
Communication issues overview
Repair the cluster name object in AD DS
Start a cluster with no quorum
Demonstration: Review the Cluster.log file
Monitor performance with failover clustering
Windows PowerShell troubleshooting cmdlets
Lab 9: Implement a failover cluster
Lab 10: Manage a failover cluster
Knowledge check
Learn more
Module 9: Implement failover clustering for Hyper-V virtual machines
Lesson 1: Overview of integrating Hyper-V in Windows Server with failover clustering
Options for making applications and services highly available
How does a failover cluster work with Hyper-V nodes?
Failover clustering features specifically for Hyper-V ............................................................... 277
Best practices for implementing high availability in a virtual environment............................ 277
Lesson 2: Implement and maintain Hyper-V VMs on failover clusters ....................................... 278
Components of Hyper-V clusters .............................................................................................. 279
Prerequisites for implementing Hyper-V failover clusters ....................................................... 279
Implement Hyper-V VMs on a failover cluster.......................................................................... 281
Configure Cluster Shared Volumes (CSVs) .............................................................................. 282
Configure a shared VHD ........................................................................................................... 283
Implement Scale-Out File Servers for VMs .............................................................................. 285
Considerations for implementing Hyper-V clusters ................................................................. 286
Maintain and monitor VMs in clusters ..................................................................................... 287
Demonstration: Implement failover clustering with Hyper-V................................................... 288
Lesson 3: Key features for VMs in a clustered environment ...................................................... 288
Overview of Network Health Protection ................................................................................... 289
Overview of actions taken on VMs when a host shuts down .................................................. 289
Overview of drain on shutdown ................................................................................................ 290
Demonstration: Configure drain on shutdown ........................................................................ 291
Lab 11: Implement failover clustering with Hyper-V.................................................................... 291
Knowledge check.......................................................................................................................... 291
Module 10: Create and manage deployment images ..................................................................... 292
Lesson 1: Introduction to deployment images ............................................................................ 292
Overview of images .................................................................................................................. 292
Overview of image-based installation tools ............................................................................. 295
Create, update, and maintain images ..................................................................................... 297
Windows ADK............................................................................................................................ 301
Windows Deployment Services ................................................................................................ 303
Microsoft Deployment Toolkit .................................................................................................. 305
Demonstration: Prepare a Windows Server 2022 image in the MDT..................................... 306
16
20740 Installation, Storage, and Compute with Windows Server
Lesson 2: Create and manage deployment images by using the MDT ....................................... 307
Create images in the MDT ........................................................................................................ 307
Deploy images in the MDT ....................................................................................................... 309
Lesson 3: VM environments for different workloads................................................................... 311
Evaluation factors..................................................................................................................... 312
Overview of virtualization accelerators .................................................................................... 313
Assessment features of the MAP toolkit.................................................................................. 314
Demonstration: Assess the computing environment by using the MAP toolkit ...................... 317
Design a solution for server virtualization ............................................................................... 318
Lab 13: Use the MDT to deploy Windows Server......................................................................... 318
Knowledge check.......................................................................................................................... 319
Module 11: Maintain and monitor Windows Server installations ................................................... 320
Lesson 1: WSUS overview and deployment options.................................................................... 320
What is WSUS? ......................................................................................................................... 320
WSUS server deployment options ............................................................................................ 321
The WSUS update-management process ................................................................................ 322
Server requirements for WSUS ................................................................................................ 322
Configure clients to use WSUS................................................................................................. 323
Lesson 2: Update management process with WSUS .................................................................. 323
WSUS administration ............................................................................................................... 324
What are computer groups? .................................................................................................... 324
Approve updates....................................................................................................................... 325
Configure automatic updates................................................................................................... 326
Demonstration: Deploy updates by using WSUS ..................................................................... 327
WSUS reporting ........................................................................................................................ 327
WSUS troubleshooting.............................................................................................................. 327
Lesson 3: Overview of PowerShell Desired State Configuration ................................................. 328
Benefits of Windows PowerShell DSC...................................................................................... 328
Requirements for Windows PowerShell DSC ........................................................................... 329
Implement Windows PowerShell DSC ...................................................................................... 330
Troubleshoot Windows PowerShell DSC .................................................................................. 331
Lesson 4: Overview of Windows Server monitoring tools ............................................................ 332
Overview of Task Manager ....................................................................................................... 332
Overview of Performance Monitor ........................................................................................... 333
Overview of Resource Monitor ................................................................................................. 334
Overview of Reliability Monitor................................................................................................. 335
Overview of Event Viewer ......................................................................................................... 335
Monitor a server with Server Manager .................................................................................... 337
Lesson 5: Use Performance Monitor ........................................................................................... 337
Overview of baseline, trends, and capacity planning .............................................................. 338
What are data collector sets? .................................................................................................. 342
Demonstration: Review performance with Performance Monitor........................................... 343
Monitor network infrastructure services.................................................................................. 343
Considerations for monitoring VMs ......................................................................................... 345
Lesson 6: Monitor Event Logs ...................................................................................................... 345
Use Server Manager to review event logs ............................................................................... 346
What is a custom view?............................................................................................................ 346
Demonstration: Create a custom view..................................................................................... 347
What are event-log subscriptions? .......................................................................................... 347
Demonstration: Configure an event subscription .................................................................... 348
Lab 13: Implement WSUS and deploy updates ........................................................................... 348
Knowledge check.......................................................................................................................... 348
Author biographies
Damir Dizdarevic, Subject Matter Expert
and Content Developer
Damir Dizdarevic is the CEO of one of the largest IT companies in Bosnia and Herzegovina—
Logosoft d.o.o. Sarajevo. He has a B.Sc. in Mathematics and is certified for Microsoft 365
Enterprise administration, Identity and Access, and the Cloud platform and infrastructure. Damir is
also a Microsoft Certified Trainer (MCT), and for more than 15 years, he’s specialized in cloud,
infrastructure, and identity solutions. In his 25-year career in IT, Damir has mostly worked as a
systems designer, project lead, and consultant.
Damir regularly writes courseware and frequently speaks at conferences about identity and
data protection, Azure infrastructure as a service (IaaS), Microsoft 365, and related topics. He
is the founder and president of a Bosnian community of Microsoft users, administrators,
engineers, and developers (MSCommunity BiH). He is consistently rated highly as a speaker at
Microsoft conferences in Europe and at non-Microsoft IT conferences. So far, he has published
more than 400 technical articles for various IT magazines and portals. In the last few years, he has
actively authored courses about identities in Azure and about Microsoft 365. Damir has been a
Microsoft Most Valuable Professional (MVP) since 2007 and a Microsoft Regional Director since
2017. His blog is available at http://dizdarevic.ba/ddamirblog.
Andrew Warren, Content Developer and
Subject Matter Expert
Andrew Warren has more than 30 years of experience in the information technology (IT) industry,
many of which he has spent teaching and writing. He has been involved as a Subject Matter Expert
for many of the Windows Server 2016 courses, and has served as the technical lead on many
Windows 10 courses. He has also written a number of books for Microsoft Press on both Windows 10 and
Windows Server 2016. Based in the United Kingdom, Andrew runs his own IT training and
education consultancy.
Course outline
Module 1 Install, upgrade, and migrate
servers and workloads
This module describes the features of Windows Server from 2016 onward, with a focus on
Windows Server 2022, and explains how to prepare for and install Windows Server, both with the
graphical UI option and as a Server Core installation. This module also describes how to plan a server
upgrade and migration strategy and explains how to perform a migration of server roles and
workloads within and across domains. Finally, this module explains how to choose an activation
model based on your environment's characteristics.
Lesson 1 Introducing Windows Server
• Overview of the Windows Server OS
• How to select a suitable Windows Server edition
• Hardware requirements for Windows Server
• Overview of the installation process and options for Windows Server
• How can you manage Windows Server remotely?
• Use Windows PowerShell to manage Windows Servers
• Windows Server updates and servicing channels
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Windows Server OS.
• Explain how to select a suitable Windows Server edition.
• Describe hardware requirements for Windows Server.
• Describe installation options for Windows Server.
• Explain how to manage Windows Server remotely.
• Explain how to use Windows PowerShell to manage Windows Servers.
• Describe Windows Server servicing channels.
Lesson 2 Prepare and install Server Core
• Server Core overview
• Plan for Server Core deployment
• Demonstration: Install Server Core
• Configure Server Core after installation
• Manage and service Server Core
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Server Core benefits.
• Plan for Server Core deployment.
• Configure Server Core after installation.
• Manage and update Server Core.
Lesson 3 Prepare for upgrades and migrations
• In-place upgrades vs. server migration
• When to perform an in-place upgrade?
• Migration benefits
• Migrate server roles within a domain
• Migrate server roles across domains or forests
• Solution accelerators for migrating to the latest Windows Server edition
• Considerations and recommendations for server consolidation
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe in-place upgrades and server migration.
• Explain when to perform an in-place upgrade.
• Describe migration benefits.
• Migrate server roles and data within a domain.
• Migrate server roles across domains or forests.
• Describe solution accelerators for migrating to the latest Windows Server edition.
• Describe considerations and recommendations for server consolidation.
Lesson 4 Windows Server activation models
• Licensing models overview
• What is Windows Server activation?
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe licensing models for Windows Server.
• Describe Windows Server activation.
Lab 1 Install and configure Windows Server
• Exercise 1: Install Windows Server Core
• Exercise 2: Perform post-installation tasks
• Exercise 3: Perform remote management
By completing this lab, you’ll achieve the knowledge and skills to:
• Install Windows Server Core.
• Complete post-installation tasks on Server Core.
• Perform remote management.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe Windows Server.
• Prepare and install Server Core.
• Prepare for upgrades and migrations.
• Explain Windows Server activation models.
Module 2 Configure local storage
This module explains how to manage disks and volumes in Windows Server. It also explains RAID
and the RAID levels that you can use and configure in Windows Server.
Lesson 1 Manage disks in Windows Server
• Select a partition table format
• Select a disk type
• Select a file system
• Implement ReFS
• Demonstration: Configure ReFS
• Use .vhd and .vhdx file types
By completing this lesson, you’ll achieve the knowledge and skills to:
• Select a partition table format.
• Select a disk type.
• Select a file system.
• Implement and configure Resilient File System (ReFS).
• Use .vhd and .vhdx file types.
Lesson 2 Manage volumes in Windows Server
• What are disk volumes?
• Options for managing volumes
• Demonstration: Manage volumes
• Extend and shrink a volume
• What is RAID?
• RAID levels overview
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe disk volumes.
• Describe options for managing volumes.
• Explain how to extend and shrink a volume.
• Explain RAID.
• Describe RAID levels.
Lab 2 Manage disks and volumes in Windows Server
• Exercise 1: Create and manage volumes
• Exercise 2: Resize volumes
• Exercise 3: Manage virtual hard disks
By completing this lab, you’ll achieve the knowledge and skills to:
• Create and manage volumes.
• Resize volumes.
• Manage virtual hard disks.
By completing this module, you’ll achieve the knowledge and skills to:
• Explain how to manage disks in Windows Server.
• Explain how to manage volumes in Windows Server.
Module 3 Implement enterprise storage
solutions
This module discusses direct-attached storage (DAS), network-attached storage (NAS), and storage
area networks (SANs). It also explains the purpose of Microsoft Internet Storage Name Service
(iSNS) Server, data center bridging (DCB), and Multipath I/O (MPIO). Additionally, this topic
compares Fibre Channel, Internet Small Computer System Interface (iSCSI), and Fibre Channel
over Ethernet (FCoE), and describes how to configure sharing in Windows Server.
Lesson 1 Overview of direct-attached storage, network-attached storage, and storage area networks
• What is DAS?
• What is NAS?
• What is a SAN?
• Comparison and scenarios for usage
• Block-level storage vs. file-level storage
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe DAS, NAS, and SANs.
• Compare block-level storage and file-level storage.
Lesson 2 Compare Fibre Channel, iSCSI, and Fibre
Channel over Ethernet
• What is Fibre Channel?
• Considerations for implementing Fibre Channel
• What is iSCSI?
• iSCSI components
• Considerations for implementing iSCSI
• iSCSI usage scenarios
• Demonstration: Configure an iSCSI target
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe and compare Fibre Channel, iSCSI, and FCoE.
• Describe core storage components.
• Configure iSCSI.
Lesson 3 Understanding iSNS, data center bridging, and
MPIO
• What is iSNS?
• What is DCB?
• What is MPIO?
• Demonstration: Configure MPIO
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe iSNS, data center bridging, and MPIO.
• Configure MPIO.
Lesson 4 Configure sharing in Windows Server
• What is SMB?
• How to configure SMB shares
• Demonstration: Configure SMB shares by using Server Manager and Windows PowerShell
• What is NFS?
• How to configure NFS shares
• Demonstration: Configure an NFS share by using Server Manager
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe and configure SMB and SMB shares.
• Describe and configure NFS and NFS shares.
Lab 3 Plan and configure storage technologies and
components
• Exercise 1: Plan storage requirements
• Exercise 2: Configure iSCSI storage
• Exercise 3: Configure and manage the share infrastructure
By completing this lab, you’ll achieve the knowledge and skills to:
• Plan storage requirements.
• Configure iSCSI storage.
• Configure and manage the share infrastructure.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe DAS, NAS, and SANs.
• Compare Fibre Channel, iSCSI, and FCoE.
• Explain the use of iSNS, DCB, and MPIO.
• Configure sharing in Windows Server.
Module 4 Implement Storage Spaces and
Data Deduplication
This module describes how to implement and manage Storage Spaces and Data Deduplication.
Lesson 1 Implement Storage Spaces
• Enterprise storage needs
• What is the Storage Spaces feature?
• Components and features of Storage Spaces
• Changes to file and storage services in Windows Server 2022
• Storage Spaces usage scenarios
• Provision Storage Spaces
• Demonstration: Configure Storage Spaces
• Discussion: Compare Storage Spaces to other storage solutions
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe storage needs in the enterprise.
• Describe Storage Spaces, its components, features, and usage scenarios.
Lesson 2 Manage Storage Spaces
• Manage Storage Spaces
• Manage disk failure with Storage Spaces
• Storage pool expansion
• Demonstration: Manage Storage Spaces by using Windows PowerShell
• Event logs and performance counters
By completing this lesson, you’ll achieve the knowledge and skills to:
• Manage Storage Spaces and storage pools.
• Describe event logs and performance counters.
Lesson 3 Implement Data Deduplication
• What is Data Deduplication?
• Data Deduplication components
• Deploy Data Deduplication
• Demonstration: Implement Data Deduplication
• Usage scenarios for Data Deduplication
• Monitor and maintain Data Deduplication
• Backup and restore considerations with Data Deduplication
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Data Deduplication and its components.
• Implement and monitor Data Deduplication.
• Describe considerations for backup and restore with Data Deduplication.
Lab 4 Implement Storage Spaces
• Exercise 1: Create a Storage Space
• Exercise 2: Enable and configure storage tiering
By completing this lab, you’ll achieve the knowledge and skills to:
• Create a Storage Space.
• Enable and configure storage tiering.
Lab 5 Implement Data Deduplication
• Install Data Deduplication
• Configure Data Deduplication
By completing this lab, you’ll achieve the knowledge and skills to:
• Install Data Deduplication.
• Configure Data Deduplication.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe and implement the Storage Spaces feature in the context of enterprise storage needs.
• Manage and maintain Storage Spaces.
• Describe and implement Data Deduplication.
Module 5 Install and configure Hyper-V and
virtual machines
This module provides an overview of Hyper-V and virtualization. It explains how to install Hyper-V,
and how to configure storage and networking on Hyper-V host servers. Additionally, it explains how
to configure and manage Hyper-V virtual machines.
Lesson 1 Overview of Hyper-V
• What is Hyper-V?
• Manage Hyper-V with Hyper-V Manager
• Windows Server containers and Docker in Hyper-V
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Hyper-V.
• Manage Hyper-V with Hyper-V Manager.
• Describe Windows Server containers and Docker in Hyper-V.
Lesson 2 Install Hyper-V
• Prerequisites and requirements for installing Hyper-V
• Demonstration: Install the Hyper-V role
• Nested virtualization overview
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe prerequisites and requirements for installing Hyper-V.
• Install the Hyper-V role.
• Describe the nested virtualization feature.
Lesson 3 Configure storage on Hyper-V host servers
• Storage options in Hyper-V
• Considerations for VHD formats and types
• Fibre Channel support in Hyper-V
• Where to store VHDs?
• Store VMs on SMB 3.0 shares
• Demonstration: Manage storage in Hyper-V
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe storage options for Hyper-V.
• Describe considerations for virtual hard disk formats and types.
• Describe Fibre Channel support in Hyper-V.
• Choose where to store VHDs.
• Explain how to store VMs on SMB 3.0 shares.
• Manage storage in Hyper-V.
Lesson 4 Configure networking on Hyper-V host servers
• Types of Hyper-V virtual switches
• Demonstration: Configure Hyper-V networks
• Best practices for configuring Hyper-V virtual networks
• Advanced networking features in Windows Server Hyper-V
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe types of Hyper-V virtual switches.
• Configure Hyper-V networks.
• Describe best practices for configuring Hyper-V virtual networks.
• Describe advanced networking features in Windows Server Hyper-V.
Lesson 5 Configure Hyper-V VMs
• What are VM configuration versions?
• VM generation versions
• Demonstration: Create a VM
• VM settings
• The Hot Adding feature in Hyper-V
• Shielded VMs
• Best practices for configuring VMs
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe VM configuration versions.
• Describe VM generation versions.
• Create a VM.
• Describe the Hot Adding feature in Hyper-V.
• Describe shielded VMs.
• Describe VM settings.
• Describe best practices for configuring VMs.
Lesson 6 Manage Hyper-V VMs
• Manage the VM state
• Manage checkpoints
• Demonstration: Create checkpoints
• Import, export, and move VMs
• PowerShell Direct overview
• Demonstration: Use PowerShell Direct
By completing this lesson, you’ll achieve the knowledge and skills to:
• Manage virtual machine state.
• Manage checkpoints.
• Create checkpoints.
• Describe how to import and export VMs.
• Describe PowerShell Direct.
• Use PowerShell Direct.
Lab 6 Install and configure Hyper-V
• Exercise 1: Verify installation of the Hyper-V server role
• Exercise 2: Configure Hyper-V networks
• Exercise 3: Create and configure a virtual machine
• Exercise 4: Enable nested virtualization for a virtual machine
By completing this lab, you’ll achieve the knowledge and skills to:
• Verify installation of the Hyper-V server role.
• Configure Hyper-V networks.
• Create and configure a virtual machine.
• Enable nested virtualization for a virtual machine.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe the Hyper-V platform.
• Install Hyper-V.
• Configure storage on Hyper-V host servers.
• Configure networking on Hyper-V host servers.
• Configure Hyper-V VMs.
• Manage Hyper-V VMs.
Module 6 Deploy and manage Windows
Server and Hyper-V containers
This module provides an overview of containers in Windows Server. Additionally, this topic explains
how to deploy Windows Server and Hyper-V containers. It also explains how to install, configure,
and manage containers by using Docker.
Lesson 1 Overview of containers in Windows Server
• What are containers?
• Overview of Windows Server containers
• Overview of Hyper-V containers
• Usage scenarios
• Installation requirements for containers
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Windows Server containers.
• Describe Hyper-V containers.
• Describe usage scenarios for containers.
• Describe installation requirements for containers.
Lesson 2 Prepare for containers deployment
• Prepare Windows Server containers
• Prepare Hyper-V containers
• Deploy package providers
By completing this lesson, you’ll achieve the knowledge and skills to:
• Prepare Windows Server containers for deployment.
• Prepare Hyper-V containers for deployment.
• Deploy package providers.
Lesson 3 Install, configure, and manage containers
• What is Docker?
• Docker components
• Usage scenarios
• Demonstration: Deploy Docker Enterprise Edition and use Docker to pull an image
• Overview of management with Docker
• Overview of Docker Hub
• Docker with Azure
• Demonstration: Deploy containers by using Docker
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Docker, its components, and support for Docker in Windows Server 2022.
• Describe usage scenarios for Docker.
• Describe Docker management.
• Describe Docker Hub.
• Describe Docker in Azure.
Lab 7 Install and configure containers
• Exercise 1: Install and configure Windows Server containers by using Windows PowerShell
• Exercise 2: Install and configure Windows Server containers by using Docker
By completing this lab, you’ll achieve the knowledge and skills to:
• Install and configure Windows Server containers by using Windows PowerShell.
• Install and configure Windows Server containers by using Docker.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe containers in Windows Server.
• Deploy containers.
• Explain how to install, configure, and manage containers using Docker.
Module 7 Overview of high availability and
disaster recovery
This module provides an overview of high availability in Windows Server, including high availability
with failover clustering. It further explains how to plan high availability and disaster recovery solutions
with Hyper-V virtual machines (VMs). Additionally, this topic explains how to back up and restore
the Windows Server operating system and data by using Windows Server Backup.
Lesson 1 Define levels of availability
• What is high availability?
• What is continuous availability?
• What is business continuity?
• Create a disaster recovery plan
• Highly available networking
• Highly available storage
• Highly available compute or hardware functions
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe high availability, continuous availability, and business continuity.
• Create a disaster recovery plan.
• Describe highly available networking.
• Describe highly available storage.
• Describe highly available compute or hardware functions.
Lesson 2 Plan high availability and disaster recovery
solutions with Hyper-V VMs
• High availability considerations with Hyper-V VMs
• Overview of live migration
• Live migration requirements
• Demonstration: Configure live migration (optional)
• Provide high availability with Storage Migration
• Demonstration: Configure Storage Migration (optional)
• Overview of Hyper-V Replica
• Plan for Hyper-V Replica
• Implement Hyper-V Replica
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe high availability considerations with Hyper-V VMs.
• Describe live migration and Storage Migration.
• Describe, plan, and implement Hyper-V Replica.
Lesson 3 Network Load Balancing overview
• What is Network Load Balancing?
• Deployment requirements for NLB
• Configuration options for NLB
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe NLB.
• Describe deployment requirements and configuration options for NLB.
Lesson 4 Back up and restore with Windows Server
Backup
• Overview of Windows Server Backup
• Implement backup and restore
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Windows Server Backup.
• Implement backup and restore by using Windows Server Backup.
Lesson 5 High availability with failover clustering in
Windows Server
• What is failover clustering?
• High availability with failover clustering
• Clustering terminology and key components
• Cluster quorum in Windows Server
• Clustering roles
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe failover clustering and how it’s used for high availability.
• Describe clustering terminology and roles.
• Describe clustering components.
Lab 8 Plan and implement a high-availability and disaster-recovery solution
• Exercise 1: Determine the appropriate high availability and disaster recovery solution
• Exercise 2: Implement storage migration
• Exercise 3: Configure Hyper-V replicas
By completing this lab, you’ll achieve the knowledge and skills to:
• Determine the appropriate high availability and disaster recovery solution.
• Implement storage migration.
• Configure Hyper-V replicas.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe levels of availability.
• Plan for high availability and disaster recovery solutions with Hyper-V VMs.
• Describe Network Load Balancing (NLB).
• Back up and restore data by using Windows Server Backup.
• Describe high availability with failover clustering in Windows Server.
Module 8 Implement and manage failover
clustering
This module explains how to plan for failover clustering. It also explains how to create, manage,
and troubleshoot a failover cluster.
Lesson 1 Plan for a failover cluster
• Prepare to implement failover clustering
• Failover-cluster storage
• Hardware requirements for a failover-cluster implementation
• Network requirements for a failover-cluster implementation
• Demonstration: Verify a network adapter's RSS and RDMA compatibility on an SMB server
• Infrastructure and software requirements for a failover cluster
• Security and AD DS considerations
• Quorum in Windows Server 2022
• Plan for migrating and upgrading failover clusters
• Plan for multi-site (stretched) clusters
By completing this lesson, you’ll achieve the knowledge and skills to:
• Prepare to implement a failover cluster.
• Plan your failover-cluster storage.
• Determine the hardware requirements for a failover-cluster implementation.
• Forecast network requirements for a failover-cluster implementation.
• Project infrastructure and software requirements for a failover cluster.
• Identify security considerations.
• Plan for quorum in Windows Server 2022.
• Prepare for the migration and upgrading of failover clusters.
• Plan for multi-site (stretched) clusters.
Lesson 2 Create and configure a new failover cluster
• The validation wizard and the cluster support-policy requirements
• The process for creating a failover cluster
• Demonstration: Create a failover cluster and review the validation wizard
• Configure roles
• Demonstration: Create a general file-server failover cluster
• Manage failover clusters
• Configure cluster properties
• Configure failover and failback
• Configure and manage cluster storage
• Configure networking
• Configure quorum options
• Demonstration: Configure the quorum
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe the Validate a Configuration Wizard and cluster support-policy requirements.
• Explain the process for creating a failover cluster.
• Describe the process for configuring roles.
• Explain how to manage cluster nodes.
• Describe the process for configuring cluster properties.
• Describe the process of configuring failover and failback.
• Describe the process of configuring storage.
• Describe the process of configuring networking.
• Describe the process of configuring quorum options.
Lesson 3 Maintain a failover cluster
• Monitor failover clusters
• Back up and restore failover-cluster configuration
• Manage and troubleshoot failover clusters
• Manage cluster-network heartbeat traffic
• What is Cluster-Aware Updating?
• Demonstration: Configure CAU
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe how to monitor failover clusters.
• Describe how to back up and restore failover cluster configurations.
• Describe how to maintain failover clusters.
• Describe how to manage cluster-network heartbeat traffic.
• Describe Cluster-Aware Updating (CAU).
Lesson 4 Troubleshoot a failover cluster
• Communication issues overview
• Repair the cluster name object in AD DS
• Start a cluster with no quorum
• Demonstration: Review the Cluster.log file
• Monitor performance with failover clustering
• Windows PowerShell troubleshooting cmdlets
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe how to detect communication issues.
• Explain how to repair the cluster name object in AD DS.
• Describe how to start a cluster with no quorum.
• Describe how to review a Cluster.log file.
• Describe how to monitor performance with failover clustering.
• Describe how to use Event Viewer with failover clustering.
• Explain how to interpret the output of Windows PowerShell troubleshooting cmdlets.
Lab 9 Implement a failover cluster
• Exercise 1: Create a failover cluster
• Exercise 2: Verify quorum settings and add a node
By completing this lab, you’ll achieve the knowledge and skills to:
• Create a failover cluster.
• Verify quorum settings and add a node.
Lab 10 Manage a failover cluster
• Exercise 1: Evict a node and verify quorum settings
• Exercise 2: Change the quorum from Disk Witness to File Share Witness, and define node voting
• Exercise 3: Verify high availability
By completing this lab, you’ll achieve the knowledge and skills to:
• Evict a node and verify quorum settings.
• Change the quorum from Disk Witness to File Share Witness, and define node voting.
• Verify high availability.
By completing this module, you’ll achieve the knowledge and skills to:
• Plan for a failover-clustering implementation.
• Create and configure a failover cluster.
• Maintain a failover cluster.
• Troubleshoot a failover cluster.
Module 9 Implement failover clustering for
Hyper-V virtual machines
This module describes how Hyper-V integrates with failover clustering. It also explains how to
implement Hyper-V virtual machines (VMs) in failover clusters.
Lesson 1 Overview of integrating Hyper-V in Windows
Server with failover clustering
• Options for making applications and services highly available
• How does a failover cluster work with Hyper-V nodes?
• Failover clustering features specific to Hyper-V
• Best practices for implementing high availability in a virtual environment
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe options for making applications and services highly available.
• Describe how failover clustering works with Hyper-V nodes.
• Describe failover clustering with Windows Server Hyper-V features.
• Describe best practices for implementing high availability in a virtual environment.
Lesson 2 Implement and maintain Hyper-V VMs on failover
clusters
• Components of Hyper-V clusters
• Prerequisites for implementing Hyper-V failover clusters
• Implement Hyper-V VMs on a failover cluster
• Configure Cluster Shared Volumes (CSVs)
• Configure a shared VHD
• Implement Scale-Out File Servers for VMs
• Considerations for implementing Hyper-V clusters
• Maintain and monitor VMs in clusters
• Demonstration: Implement failover clustering with Hyper-V
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe the components of a Hyper-V cluster.
• Describe the prerequisites for implementing Hyper-V failover clusters.
• Implement Hyper-V VMs on a failover cluster.
• Configure Cluster Shared Volumes (CSVs).
• Configure a shared VHD.
• Implement Scale-Out File Servers for VMs.
• Describe considerations for implementing Hyper-V VMs in a cluster.
• Explain how to maintain and monitor VMs in clusters.
• Implement failover clustering.
Lesson 3 Key features for VMs in a clustered environment
• Overview of Network Health Protection
• Overview of actions taken on VMs when a host shuts down
• Overview of drain on shutdown
• Demonstration: Configure drain on shutdown
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Network Health Protection.
• Explain the actions taken on VMs when a host shuts down.
• Explain drain on shutdown.
• Configure drain on shutdown.
Lab 11 Implement failover clustering with Hyper-V
• Exercise 1: Configure the virtual environment and iSCSI storage
• Exercise 2: Configure a failover cluster for Hyper-V
• Exercise 3: Configure a highly available VM
By completing this lab, you’ll achieve the knowledge and skills to:
• Configure the virtual environment and iSCSI storage.
• Configure a failover cluster for Hyper-V.
• Configure a highly available VM.
By completing this module, you’ll achieve the knowledge and skills to:
• Explain how to integrate Hyper-V in Windows Server with failover clustering.
• Implement and maintain Hyper-V virtual machines on failover clusters.
• Describe key features for VMs in a clustered environment.
Module 10 Create and manage deployment
images
This module provides an overview of the Windows Server image deployment process. It also
explains how to create and manage deployment images by using the Microsoft Deployment Toolkit
(MDT). Additionally, it describes different workloads in the virtual machine environment.
Lesson 1 Introduction to deployment images
• Overview of images
• Overview of image-based installation tools
• Create, update, and maintain images
• Windows ADK
• Windows Deployment Services
• Microsoft Deployment Toolkit
• Demonstration: Prepare a Windows Server 2022 image in the MDT
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe images and image-based installation tools.
• Create, update, and maintain images.
• Describe Windows ADK.
• Describe Windows Deployment Services (WDS).
• Describe the MDT.
Lesson 2 Create and manage deployment images by using
the MDT
• Create images in the MDT
• Deploy images in the MDT
By completing this lesson, you’ll achieve the knowledge and skills to:
• Create images in the MDT.
• Deploy images in the MDT.
Lesson 3 VM environments for different workloads
• Evaluation factors
• Overview of virtualization accelerators
• Assessment features of the MAP toolkit
• Demonstration: Assess the computing environment by using the MAP toolkit
• Design a solution for server virtualization
Lab 12 Use the MDT to deploy Windows Server
• Exercise 1: Configure MDT
• Exercise 2: Create and deploy an image
By completing this lab, you’ll achieve the knowledge and skills to:
• Configure the MDT.
• Create and deploy an image by using the MDT.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe the Windows Server image-deployment process.
• Create and manage deployment images by using the MDT.
• Describe VM environments for different workloads.
Module 11 Maintain and monitor Windows
Server installations
This module provides an overview of Windows Server Update Services (WSUS) and the
requirements to implement WSUS. It explains how to manage the update process with WSUS.
Additionally, this topic provides an overview of Windows PowerShell Desired State Configuration
(DSC) and Windows Server monitoring tools. Finally, this topic describes how to use Performance
Monitor and how to manage event logs.
Lesson 1 WSUS overview and deployment options
• What is WSUS?
• WSUS server deployment options
• The WSUS update-management process
• Server requirements for WSUS
• Configure clients to use WSUS
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe WSUS and its deployment options.
• Describe the WSUS update-management process.
• Describe how to configure WSUS servers and WSUS clients.
Lesson 2 Update management process with WSUS
• WSUS administration
• What are computer groups?
• Approve updates
• Configure automatic updates
• Demonstration: Deploy updates by using WSUS
• WSUS reporting
• WSUS troubleshooting
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe WSUS administration.
• Describe computer groups.
• Describe how to approve updates.
• Describe how to perform WSUS reporting and troubleshooting.
Lesson 3 Overview of PowerShell Desired State
Configuration
• Benefits of Windows PowerShell DSC
• Requirements for Windows PowerShell DSC
• Implement Windows PowerShell DSC
• Troubleshoot Windows PowerShell DSC
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe the benefits of Windows PowerShell DSC.
• Describe the requirements for Windows PowerShell DSC.
• Describe how to implement Windows PowerShell DSC.
• Describe how to troubleshoot Windows PowerShell DSC.
Lesson 4 Overview of Windows Server monitoring tools
• Overview of Task Manager
• Overview of Performance Monitor
• Overview of Resource Monitor
• Overview of Reliability Monitor
• Overview of Event Viewer
• Monitor a server with Server Manager
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe tools in Windows Server for monitoring.
• Use Server Manager for monitoring.
Lesson 5 Use Performance Monitor
• Overview of baseline, trends, and capacity planning
• What are data collector sets?
• Demonstration: Review performance with Performance Monitor
• Monitor network infrastructure services
• Considerations for monitoring VMs
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe baseline, trends, and capacity planning.
• Describe data collector sets.
• Describe how to monitor VMs.
Lesson 6 Monitor Event Logs
• Use Server Manager to review event logs
• What is a custom view?
• Demonstration: Create a custom view
• What are event-log subscriptions?
• Demonstration: Configure an event subscription
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe event logs.
• Describe custom views.
• Describe event subscriptions.
Lab 13 Implement WSUS and deploy updates
• Exercise 1: Implement WSUS
• Exercise 2: Configure update settings
• Exercise 3: Approve and deploy an update by using WSUS
By completing this lab, you’ll achieve the knowledge and skills to:
• Implement WSUS.
• Configure update settings.
• Approve and deploy an update by using WSUS.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe the purpose of WSUS and the requirements to implement WSUS.
• Manage the update process with WSUS.
• Describe the purpose and benefits of Windows PowerShell DSC.
• Describe the monitoring tools available in Windows Server.
• Use Performance Monitor.
• Manage event logs.
Module 1: Install, upgrade, and
migrate servers and workloads
For your organization to effectively manage storage and compute functions, you need to
understand the new features available in Windows Server 2022. This module introduces you to
Windows Server 2022 and describes the various editions and installation options. You will also
learn how to install Server Core, which is now a default installation option for Windows Server.
You will be introduced to planning a server upgrade and migration strategy, along with how to
perform a migration of server roles and workloads. Finally, you will learn how to choose the most
appropriate activation model for your organization.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe Windows Server.
• Prepare and install Server Core.
• Prepare for upgrades and migrations.
• Explain Windows Server activation models.
Lesson 1: Introducing Windows Server
Windows Server has been around in one form or another for three decades while constantly
evolving and improving. Windows Server 2022 is the latest version of Microsoft’s Windows Server
operating system (OS). In this lesson, you’ll learn about some of the fundamental administrative
and configuration tasks for Windows Server, and about the available administration and
configuration tools.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe Windows Server OS.
• Explain how to select a suitable Windows Server edition.
• Describe hardware requirements for Windows Server.
• Describe installation options for Windows Server.
• Explain how to manage Windows Server remotely.
• Explain how to use Windows PowerShell to manage Windows Servers.
• Describe Windows Server servicing channels.
Overview of the Windows Server OS
Microsoft Windows is a well-known brand, having been around for several decades, and it’s the
most popular OS for personal computers worldwide. For personal environments, Microsoft provides
client versions of Windows, such as Windows 7, Windows 10, or Windows 11. Unlike Windows for
personal computers, Windows Server is a line of operating systems that Microsoft specifically
creates for use on a server. Servers are usually very powerful computers that are designed to run
constantly and provide resources for other computers. This means that, in almost all cases,
Windows Server is only used in business environments.
Microsoft has published server editions of Windows for decades. Windows 2000 Server, launched
in 2000, was a big milestone, as it introduced many new features, such as Active Directory, that
are still in use today. Server versions of Windows were available even earlier; for example,
Windows NT 4.0 was available in both workstation (for general use) and server editions.
The latest version of Windows Server, at the time of writing this course, was Windows Server 2022.
Windows Server 2022 introduces some new features and enhances some existing features. These
new and improved features include the following:
• Security improvements, including:
  o Secured-core server. Built on certified secure hardware from original equipment
    manufacturers (OEMs). A secured-core server uses a combination of factors to help secure
    the server, including:
    • Hardware root-of-trust.
    • Firmware protection.
    • UEFI secure boot.
    • Virtualization-based security.
  o Secure connectivity:
    • HTTPS and TLS 1.3 enabled by default.
    • Secure Domain Name System (DNS).
    • Server Message Block (SMB) AES-256 encryption.
    • SMB Direct and Remote Direct Memory Access (RDMA) encryption.
    • SMB over QUIC (Datacenter: Azure edition only).
• Azure hybrid improvements:
  o Azure Arc-enabled servers.
  o Azure Automanage—Hotpatch.
  o Improvements for Azure integration in Windows Admin Center.
• Application platform improvements, including Windows Containers improvements.
• Inclusion of the Microsoft Edge browser.
• Network performance improvements.
• ReFS file-level snapshot support.
How to select a suitable Windows Server edition
Windows Server 2022 is available in four editions. The installation files are the same for each
edition. During installation, you can choose the edition you want. However, you’ll need to enter a
product key that matches the edition you’ve chosen. When you purchase Windows Server 2022,
you’re given the product key you can use.
• Windows Server Essentials. Aimed at small businesses with up to 25 users and 50 devices.
• Windows Server Standard. Aimed at businesses that mainly need physical servers, but with a
  limited need for virtual servers.
• Windows Server Datacenter. Aimed at organizations that want, or already have, an infrastructure
  with a large number of virtual servers.
• Windows Server Datacenter: Azure Edition. Aimed at organizations that want to move all or most
  of their servers to the cloud.
Note: Servers can be physical computers or virtual computers, and you can configure one or
more virtual servers on one physical server. For example, you could run 10 virtual servers on
just one physical server. Advantages of virtualization include reduced physical space
requirements in your datacenters, reduced energy usage, efficient use of hardware, and cost
savings on licensing. You use a feature called Hyper-V to create and manage virtual
computers.
In this course, we focus on the most used editions: Standard and Datacenter.
The Standard edition supports most of the same features as the Datacenter edition. The major
differences between the two editions are the number of virtual servers you can create under the
license of the host OS, and support for certain Hyper-V features. A Standard edition license entitles
you to configure one physical server, with up to two virtual servers running on it under the same
license used for the physical host. The Datacenter edition license allows you to have an unlimited
number of virtual servers running on one physical server.
There are some advanced features that are available in the Datacenter edition which aren’t
supported in the Standard edition:
• Software-Defined Networking (SDN) and Network Controller. Provides a way to centrally
  configure and manage networks and network services such as switching, routing, and load
  balancing in your data center.
• Storage Spaces Direct. Enables you to cluster servers with internal storage into a
  software-defined storage solution.
• Host Guardian Hyper-V support. Prevents a malicious actor from using stolen virtual servers on
  their own physical servers.
Hardware requirements for Windows Server
To ensure a smooth experience when installing Windows Server 2022, you should first check that
the computer meets the minimum hardware requirements that Microsoft specifies, which include:
• A 64-bit processor, with a minimum clock speed of 1.4 gigahertz (GHz) and support for the
  following features:
  o Support for Data Execution Prevention (DEP). DEP is a security feature that blocks certain
    types of attacks known as buffer overflows. Processor manufacturers give it different
    names: Intel refers to it as XD (Execute Disable), and AMD refers to it as NX (No Execute).
  o Support for Second Level Address Translation (SLAT).
• At least 512 megabytes (MB) of random access memory (RAM) for Server Core. For the
  Desktop Experience, the minimum is 2 GB.
• At least 32 GB of free disk space for Server Core and 36 GB for the Desktop Experience.
Additionally, it’s helpful to know the maximum hardware resources that Windows Server 2022
supports, although you’re unlikely to reach these limits:
• Maximum RAM: 48 terabytes (TB)
• Maximum number of processors: 64
• Maximum number of processor cores: unlimited
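Before you run setup, you can quickly confirm a computer’s processor, memory, and SLAT/DEP capabilities with built-in tools. The following is a minimal sketch using standard cmdlets; the exact output fields vary by system, and systeminfo reports the Hyper-V requirements (including SLAT) only when the Hyper-V role isn’t already installed:

# Processor name, clock speed (MHz), and address width (should be 64)
Get-CimInstance Win32_Processor | Select-Object Name, MaxClockSpeed, AddressWidth

# Installed physical memory, in bytes
Get-CimInstance Win32_ComputerSystem | Select-Object TotalPhysicalMemory

# SLAT and DEP capability, listed under "Hyper-V Requirements"
systeminfo | Select-String -Context 0,4 "Hyper-V Requirements"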
Overview of the installation process and options for
Windows Server
You can download Windows Server 2022 from the Microsoft evaluation center or the Microsoft
Volume Licensing Service Center if your organization has a volume license agreement with Microsoft.
The download is available as an ISO file, which is an image of a bootable DVD, or as a virtual hard
disk (VHD) file that you can use to set up a virtual machine (VM). You can use these files in the
following ways:
• ISO file. Create a bootable DVD. Download the ISO file and use a DVD burner to burn it to a
  DVD. You can then turn on your computer from the DVD.
• ISO file. Attach the ISO file to a VM in Hyper-V, and then turn on the VM to begin the
  installation.
• VHD file. The VHD file already has the OS installed on it, and you can attach it to a VM. All you
  need to do is confirm your language, accept the license, and assign a password to the
  administrator account.
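For the second option, you can attach the ISO file to a new VM entirely from PowerShell on a Hyper-V host. The following is a minimal sketch, assuming the Hyper-V module is installed; the VM name, paths, and sizes are illustrative:

# Create a Generation 2 VM with a new, empty virtual hard disk
New-VM -Name SRV1 -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath 'C:\VMs\SRV1.vhdx' -NewVHDSizeBytes 60GB

# Attach the downloaded ISO and make the DVD drive the first boot device
Add-VMDvdDrive -VMName SRV1 -Path 'C:\ISO\WindowsServer2022.iso'
Set-VMFirmware -VMName SRV1 -FirstBootDevice (Get-VMDvdDrive -VMName SRV1)

# Turn on the VM to begin the installation
Start-VM -Name SRV1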
When you use the ISO file to turn on a computer, the Windows setup program starts, prompts you
with a few questions, and then installs the OS, as indicated by your answers. During installation,
the following options are available:
• Confirm or change the language, time, currency format, and keyboard layout. These will default
  to US format and keyboard layout.
• Install Now or Repair your computer. The Repair your computer option is there to help you
  troubleshoot a computer that already has the OS installed but is experiencing problems. To
  install Windows Server 2022, select Install Now.
• Accept the license agreement. If you don’t select this box, the setup program won’t proceed.
• Upgrade or Custom install:
  o If you select Upgrade: Install Microsoft Server Operating System and keep files, settings
    and applications, the setup program attempts to upgrade the current OS to Windows
    Server 2022, while preserving your files, settings, and applications.
  o If you want to install an OS for the first time on the computer, or you want a “clean”
    installation that copies nothing from any existing OS, select Custom: Install Microsoft
    Server Operating System only (advanced).
• Select the OS you want to install, as Figure 1 depicts:
  o Server Core versions are Windows Server 2022 Standard or Windows Server 2022
    Datacenter.
  o If you wish to install the full graphical user interface (GUI), select either Windows Server
    2022 Standard (Desktop Experience) or Windows Server 2022 Datacenter (Desktop
    Experience).
Figure 1: Microsoft Server OS Setup: Select the operating system to install page
• Where do you want to install the OS?
  o This dialog box lists all the disks and volumes the setup program has found on the
    computer, along with the size and the amount of free space on each disk and volume.
  o Choose an existing volume, delete volumes, or create new volumes, as Figure 2 depicts.
    You can also choose disk space that hasn’t yet been converted into a volume, listed as
    unallocated space. If you select unallocated space, the setup program automatically
    creates a volume in that space.
Figure 2: Microsoft Server OS Setup: Where to install the operating system page
After you select a volume to install on, as Figure 2 depicts, the program copies files to the
hard disk and restarts, after which the setup program continues, and you’re prompted to:
• Choose a password for the administrator account. You must choose a strong password
  that contains a combination of lowercase and uppercase letters, numbers, and
  non-alphanumeric characters (such as !, #, $, and similar).
How can you manage Windows Server remotely?
As networks have grown, and the number of server-based resources has increased, the practicality
of being able to manage servers locally has diminished. This is especially true if your servers are
all rack-mounted in an on-premises datacenter, or even hosted in a Microsoft datacenter in Azure.
Therefore, it’s important that you understand how to remotely manage servers—with whichever
management tool you choose to use.
Windows Admin Center
Windows Admin Center is a web app that provides users with several capabilities. Through
extensions that provide additional functionality, you can use Windows Admin Center in Microsoft
Edge or another compatible browser to manage and administer:
• Client computers running the Windows 10 and Windows 11 operating systems.
• Computers installed with Windows Server.
• Windows Server clusters.
• Azure resources, such as:
  o Azure VMs.
  o Azure Backup.
  o Azure Network Adapter.
  o Azure File Sync.
  o Azure Monitor.
  o Azure Arc.
You must download and install Windows Admin Center because Windows Server doesn’t include it
by default.
When you install Windows Admin Center, typically on a workstation running Windows 11, you’re
prompted to open TCP port 6516, which is used to access Windows Admin Center on the local
computer rather than to connect to remote resources.
If the server resources that you want to manage are in the same Active Directory Domain Services
(AD DS) forest as the management computer where Windows Admin Center is installed, you can
authenticate to the remote servers using an administrative AD DS account.
If you aren’t operating in a domain environment, you might need to modify the management
computer’s trusted hosts setting to include the name of any remotely managed computers. Do this
by using the following PowerShell command, which in this case, uses the wildcard value of * to
mean all remote hosts:
Set-Item WSMan:localhost\client\trustedhosts -value *
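Note that the wildcard trusts every remote host. If you know which servers you’ll manage, it’s generally safer to list them explicitly instead; the server names here are illustrative:

Set-Item WSMan:localhost\client\trustedhosts -value "LON-SVR1,LON-SVR2"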
In addition to authentication and trusted hosts, the remote server computer must be enabled for
remote management via Windows Remote Management (WinRM). For servers, this is the case by
default. However, if you need to manually perform this step, run the following command:
WinRM quickconfig
This command enables the Windows Remote Management listener service and enables the
required firewall exceptions. If you’re managing VMs that are running Windows Server in Azure,
you’ll also almost certainly need to open TCP ports 5985 and 5986 on the Network Security Group
to which the VM’s network interface is connected. These ports are for remote management.
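Before adding a server to Windows Admin Center, you can verify that its WinRM listener responds. A quick check with the built-in Test-WSMan cmdlet (the server name is illustrative):

Test-WSMan -ComputerName LON-SVR1

If WinRM is listening, the cmdlet returns protocol and product version information; otherwise, it reports an error.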
53
20740 Installation, Storage, and Compute with Windows Server
Remote Desktop
When you use Remote Desktop to start a session with a remote server, you’ll observe the desktop
of the remote server. You can interact with the remote server as if you signed in locally at the
server. You can run applications, configure settings, and add and remove roles and features
with ease.
To use Remote Desktop, it must be enabled at the remote server. You can enable Remote Desktop
by using Server Manager: on the navigation pane, select Local Server, and then select the Remote
Desktop setting on the details pane. If the remote server is configured as Server Core, you can use
the SConfig utility to enable Remote Desktop.
Note: If you use Remote Desktop to connect to a Server Core installation of Windows Server
2022, you’ll use a command prompt window, as there isn’t a GUI. You must perform all
management tasks by using command-line tools.
Microsoft Management Console
Many administrative tools in Windows Server 2022 are manageable through the graphical tool
Microsoft Management Console (MMC). Within MMC, you can add and remove snap-ins from the
File menu. These snap-ins are individual management tools. For example, in MMC, you can add the Disk Management snap-in to configure the disks in the server, or the Local Users and Groups snap-in to manage local user and group accounts.
To create an MMC, select Start, enter mmc, and then select Enter. After you create an MMC
and add a snap-in, you’re prompted as to whether you want to manage the Local computer with
the MMC snap-in or Another computer. You can choose Another computer and then enter the
name of the Windows Server 2022 server you want to manage. Figure 3 depicts these options:
Figure 3: MMC snap-ins
Many MMCs that are automatically created in Windows, such as Computer Management, are
focused on the local computer. To use these MMCs remotely, you can right-click or access the
context menu of the snap-in on the navigation pane, and then select Connect to another computer.
Most snap-ins work remotely to allow full management of a remote server. However, one exception
is Device Manager. If you want to manage devices remotely, you can use Remote Desktop to
connect to the server instead.
Server Manager
Server Manager is built into Windows Server with Desktop Experience. In fact, it opens by default.
You can use Server Manager to perform a range of functions on both a local server and, when
configured, remote servers.
You can use Server Manager to perform the following administrative tasks:
• Reconfigure the local server's basic settings, including the:
  o Computer name.
  o Windows Update settings.
  o Remote Desktop settings.
  o Workgroup or domain settings.
  o IP configuration.
  o Windows Activation.
• Add additional servers.
• Create a server group so you can manage multiple servers.
• Add roles and features to the selected server.
• Connect the selected server to cloud services.
Tip: You can easily connect servers to Azure services by using Windows Admin Center.
• Manage specific roles and services in either of the following ways:
  o Launching an additional management console from the Tools menu.
  o Selecting the appropriate role or service from the navigation pane.
• Review events from connected servers.
• Run and interpret the results from a Best Practices Analyzer scan.
• Review the performance of the selected servers.
Command-line tools
There are a variety of command-line tools available for remote management.
PowerShell
PowerShell is a command-line management tool and scripting environment built into Windows
Server 2022. It has a powerful set of commands, known as cmdlets. Certain PowerShell cmdlets have a ComputerName parameter that you can use to specify that the command should act on a remote computer instead of the local one.
For example, the cmdlet Get-Service can be used remotely:
PS C:\Users\Administrator> Get-Service -ComputerName server1
You can also create a remote PowerShell session. Within that session, all commands you enter are
executed at the remote server. To allow this functionality on the remote server, it must be enabled
with the Enable-PSRemoting cmdlet. On the computer that you’ll be remoting from, you can use
the Enter-PSSession cmdlet to initiate a session with the remote server.
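Put together, a minimal sketch looks like this (LON-SVR1 is a placeholder name):
# On the remote server, in an elevated PowerShell session:
Enable-PSRemoting -Force
# On the management computer:
Enter-PSSession -ComputerName LON-SVR1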
Note: One thing that makes PowerShell a very powerful tool is the ability to create scripts. You can use the Windows PowerShell ISE to easily create PowerShell scripts to automate various tasks, as Figure 4 depicts.
Figure 4: Windows PowerShell Integrated Scripting Environment (ISE)
Windows Remote Shell
To use standard command-line tools remotely, you can use Windows Remote Shell (WinRS). On the
remote server, remote management must be enabled. You can enable remote management
through Server Manager or with the command WinRM quickconfig on the remote server.
Note: Windows Remote Management (WinRM) is enabled by default on Windows Server
2022, as features of Server Manager rely on it.
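After remote management is enabled, you can run a console command on a remote server. For example, this sketch runs ipconfig on a placeholder server named LON-SVR1:
winrs -r:LON-SVR1 ipconfig /all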
Remote Server Administration Tools (RSAT) overview
If you want to manage your servers remotely from a client computer running Windows 10 or Windows 11, you might find that many of the server management tools aren't available by default.
To add RSAT in Windows 10 or 11, select Start, enter Manage optional features, and then select
Enter. In the list of optional RSAT features, as Figure 5 depicts, you can select:
• AD DS and Lightweight Directory Services Tools.
• DNS Server Tools.
• File Services Tools.
• Group Policy Management Tools.
• Remote Desktop Services Tools.
• Server Manager.
• Windows Server Update Services Tools.
Figure 5: RSAT optional features
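You can also add RSAT components from PowerShell by using Windows capabilities. A sketch, assuming the capability name matches your Windows build:
# List RSAT capabilities that aren't installed yet
Get-WindowsCapability -Online -Name "Rsat.*" | Where-Object State -eq "NotPresent"
# Add one of them, for example the DNS Server tools
Add-WindowsCapability -Online -Name "Rsat.Dns.Tools~~~~0.0.1.0"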
Firewall issues for remote management
Some attempts at remote management might fail if the Windows (or third-party) firewall on the
remote server is blocking access. If your attempts to connect remotely fail, you should research
what rules you must enable on the remote server’s firewall to allow remote management.
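For example, the built-in WinRM rules can be enabled with a single PowerShell command. The display group name below assumes an English-language installation:
# Enable the predefined inbound rules for WinRM
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"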
Use Windows PowerShell to manage Windows Servers
In addition to the graphical tools provided in Desktop Experience, you can also use command-line
tools to perform administrative tasks. This is beneficial when you have to perform the same task
several times or when the task is time-consuming or complex to perform manually.
Windows Server provides two command-line interfaces:
• Command Prompt. Provides access to many tools with which you can manage many aspects of Windows.
• Windows PowerShell. Provides access to a command-line interface with a more structured syntax than Command Prompt.
Tip: You can run Command Prompt tools in Windows PowerShell, but you can’t run
PowerShell cmdlets in Command Prompt.
Although many tools still run within Command Prompt, Windows PowerShell has become ubiquitous in managing Microsoft implementations. You can use PowerShell cmdlets and scripts to perform virtually any management task, including managing:
• Resources on your on-premises Windows Servers.
• Microsoft Azure Active Directory (Azure AD).
• Hosted VMs in Azure.
• Microsoft 365 services.
Overview of PowerShell syntax
Windows PowerShell cmdlets consist of two elements:
• Verb. The verb defines what you're doing. For example, Get retrieves information about an object, and Set updates the object's properties.
• Noun. The noun defines the thing you want to work with (the subject of the verb's action). For example, LocalUser or Service.
You combine verbs and nouns to create a cmdlet. By adding parameters to the cmdlet, you create
a PowerShell command. You can then combine commands to create simple scripts. For example,
the following PowerShell command displays a list of services that are currently running:
Get-Service | Where-Object {$_.Status -eq "Running"}
The preceding example combines the cmdlet Get-Service with the Where-Object cmdlet by
using the pipe operator (|). This causes the result of the first cmdlet to pass to the second cmdlet
for processing.
In this next example, the same Get-Service cmdlet runs, and the results are piped to the Select-Object cmdlet, which selects the Name and Status properties. These filtered results are then piped to a third cmdlet, Export-CSV, which creates a comma-separated value (CSV) file called service.csv containing a list of service names and their status:
Get-Service | Select-Object Name, Status | Export-CSV c:\service.csv
However, you can also build quite complex PowerShell commands without necessarily needing to
create scripts. This next example uses two cmdlets from the Active Directory module for Windows
PowerShell to make changes to the users in the marketing department of the Contoso
organization:
Get-ADUser -Filter 'Name -like "*"' -SearchBase "OU=Marketing,DC=Contoso,DC=Com" | Set-ADUser -Description "Member of the Marketing Department"
Tip: A Windows PowerShell module adds functionality to PowerShell and enables you to
manage additional applications and services.
What is PowerShell remoting?
Windows PowerShell remoting enables you to manage remote Windows Server and Windows client
computers by using the same PowerShell cmdlets and syntax as you do to manage those
resources interactively.
Windows PowerShell remoting requires that Windows Remote Management (WinRM) is enabled on target systems that you want to manage remotely. Windows Server is enabled for WinRM by default. However, if you must enable WinRM manually, you can use one of the following methods:
• At an elevated Windows PowerShell prompt, run Enable-PSRemoting -Force.
• At an elevated command prompt window, run WinRM quickconfig.
As with the Windows Admin Center setup, these commands start the Windows Remote
Management listener service and enable the required firewall extensions.
If your management computer can authenticate to the target managed computer using Kerberos
(in other words, both computers are in the same AD DS forest), then you can begin managing the
remote computer. However, if your management workstation isn't in the same AD DS forest, you must add the target computer to your computer's trusted hosts list, as described earlier in the section about Windows Admin Center.
After you've enabled WinRM and configured the trusted hosts setting as needed, you can establish a remote connection to your target server. If you want to configure multiple servers, you merely need to specify multiple computer names in your commands.
The following PowerShell command executes the Get-Service cmdlet discussed earlier on a remote computer called LON-CL1:
Invoke-Command -ComputerName LON-CL1 -ScriptBlock { Get-Service | Select-Object Name, Status }
You can also create a remote PowerShell session using the New-PSSession cmdlet. In this
example, a remote session is established with LON-CL1:
$remotesession = New-PSSession -ComputerName LON-CL1
Then, to access the remote session, run the following:
Enter-PSSession $remotesession
Thereafter, you’re presented with a PowerShell prompt that has the remote computer name as a
prefix. You can then run any PowerShell commands in the open window, and they execute within
the context defined by the New-PSSession cmdlet.
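When you've finished, you can leave the remote session and release it. A brief sketch:
# Return to the local prompt, then dispose of the session
Exit-PSSession
Remove-PSSession $remotesession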
What is PowerShell Direct?
If you’re running Windows Server workloads in a virtualized environment, it’s possible to use
PowerShell Direct. By using PowerShell Direct, you can execute PowerShell commands against
VMs running on a computer without needing to use PowerShell Remoting or having to sign in on
the VM interactively. Effectively, you run the cmdlet or script inside the VM, bypassing
considerations for network and firewall configuration.
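For example, from an elevated PowerShell session on the Hyper-V host, you can target a VM by name. The VM name and credential here are placeholders:
# Run a command inside the VM over the VM bus, with no network required
Invoke-Command -VMName "LON-SVR2" -Credential (Get-Credential) -ScriptBlock { Get-Service }
# Or open an interactive session inside the VM
Enter-PSSession -VMName "LON-SVR2" -Credential (Get-Credential)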
Windows Server updates and servicing channels
Microsoft regularly releases updates for all its current operating systems. Some are feature updates, which provide new features and minor bug fixes. In contrast, quality updates generally don't offer new functionality, but instead focus on security vulnerabilities and fixes.
Because updates can be disruptive, especially if they require a restart, the setting on servers is
usually set to Download updates automatically and notify when they are ready to be installed.
When notified, administrators can apply updates during scheduled maintenance windows. (On a client computer, you can set the update options to download and install updates automatically.)
You can configure update settings in Server Manager. Select Local Server on the navigation pane
and then select the link next to Windows Update on the details pane.
Servicing channels determine how frequently updates are applied to the server. In previous versions of Windows Server, there was a choice between the Long-Term Servicing Channel (LTSC) and the Semi-Annual Channel (SAC).
The LTSC has a new release every two or three years. Organizations that select this channel are
entitled to five years of mainstream support and an additional five years of extended support. In
many respects, this servicing channel provides behavior similar to periodically upgrading servers
between major OS versions.
The LTSC provides predictability in the OS feature set without compromising stability. Servers in the LTSC still receive important security and nonsecurity updates, but they don't receive feature updates.
Important: Starting with Windows Server 2022, this is the only primary release channel.
There won’t be future SAC releases of Windows Server.
You can select the LTSC for Windows Server both with Desktop Experience and Server Core.
Lesson 2: Prepare and install Server Core
When you prepare to install Windows Server 2022, you must understand how to choose between the installation options: Windows Server with Desktop Experience or Server Core. This lesson describes the Server Core installation of Windows Server 2022.
The installation process for Windows Server 2022 requires minimal input from the installer.
However, following the installation, you must configure a number of important settings before you
can use your server. In addition, because Server Core provides no graphical management tools,
you must know how to enable and perform the remote management of your server infrastructure.
This lesson identifies the important post-installation configuration options and explains how to
enable and use the remote management tools.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe Server Core benefits.
• Plan for Server Core deployment.
• Configure Server Core after installation.
• Manage and service Server Core.
Server Core overview
During installation, you can choose between the command-line-only Server Core or the graphical Desktop Experience. If you install with Desktop Experience, you're installing the full GUI that you're probably familiar with if you've used any current Windows version. Server Core installs without the GUI, so you must perform all management at the command line.
While Desktop Experience is easier to use and more familiar, Server Core provides a slimmed-down installation that delivers better performance on the same hardware. Because there isn't a GUI, apps that run in the GUI and various related services aren't installed. There's also less to patch, less space used on your disks, and a smaller attack surface. However, when you install Server Core, you can't convert it to a server with the graphical Desktop Experience.
The downside of Server Core is that to manage it, you need to be familiar with command-line tools and Windows PowerShell, or you need to connect remotely to use graphical tools. PowerShell is built into all versions of Windows and provides commands to fully manage the OS.
There are several roles and features that aren’t available in Server Core. If you find that you need
one of these, you’ll have to install Windows Server 2022 Desktop Experience instead. Roles that
aren’t available in Server Core include:

Fax Server (Fax).

MultiPoint Services (MultiPointServerRole).

Network Policy and Access Services (NPAS).

Windows Deployment Services (WDS).
You also can review the complete list of roles and features that aren’t available in Server Core at
Roles, Role Services, and Features not in Windows Server – Server Core.
Plan for Server Core deployment
As you learned before, Server Core is the default installation option when you run the Windows
Server 2022 Setup Wizard. It uses fewer hardware resources than the installation option that
runs the full GUI. Instead of using GUI-based tools, you can manage Server Core locally by using
Windows PowerShell or a command-line interface, or you can manage it remotely by using one of
the remote management options described earlier in this module.
Before you plan for Server Core deployment, you need to understand the key advantages and
differences it has when compared to Windows Server with Desktop Experience. Server Core has
the following advantages over the full Windows Server 2022 installation option:
• Reduced update requirements. Because Server Core installs fewer components, its deployment requires you to install fewer software updates. This reduces the number of monthly restarts required and the amount of time required for an administrator to service Server Core.
• A reduced hardware footprint. Computers running Server Core require less RAM and less hard drive space. When Server Core is virtualized, you can deploy more servers on the same host.
• Smaller attack surface. Installing fewer components, especially the client interface, reduces the surface available for security vulnerabilities and exploits.
However, there are drawbacks to installing Server Core instead of Windows Server with Desktop Experience. If an application depends on the GUI, it will fail when the GUI call occurs; for example, an application might fail when it attempts to display a dialog box. Also, as mentioned previously, local management options are more limited. However, when you're connected locally, you can use the tools that Table 1 lists to manage Server Core deployments of Windows Server 2022:
Table 1: Server Core management tools
Cmd.exe: Allows you to run traditional command-line tools, such as ping.exe, ipconfig.exe, and netsh.exe.
PowerShell.exe: Launches a Windows PowerShell session on the Server Core deployment. You then can perform Windows PowerShell tasks normally. Windows Server 2022 includes Windows PowerShell 5.1 by default.
Regedt32.exe: Provides registry access within the Server Core environment.
Msinfo32.exe: Allows you to review system information about the Server Core deployment.
Sconfig.cmd: Serves as a command-line, menu-driven tool to perform common server administration tasks.
Taskmgr.exe: Launches Task Manager.
When planning for Server Core deployment, you also need to be aware of what roles and features
are supported and those that aren’t. This might impact your plans significantly and affect your
decision about whether to deploy it.
Learn more: For a current list of available roles and features in Server Core, refer to Roles, Role Services, and Features included in Windows Server - Server Core.
Demonstration: Install Server Core
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Configure Server Core after installation
The Server Core installation process aligns closely with that of Desktop Experience. Before you
install Server Core, similar to a Desktop Experience install, you must ensure that you:
• Disconnect any uninterruptible power supply (UPS) devices.
• Back up your server if you're performing an upgrade.
• Disable antivirus software.
• Have access to any additional drivers that you need but that the Windows driver store doesn't include.
• Perform or verify all preparatory steps, including:
  o Updating firmware as needed.
  o Configuring the physical networking infrastructure for server connection.
  o Configuring internal and external storage arrays.
You can then proceed with the installation process, which is almost identical to that for Desktop
Experience installations. However, after you complete the installation, things get a little different.
Instead of being able to sign in locally and use Server Manager to reconfigure the server’s initial
settings, you’re provided only with a command prompt window. You’ll need to configure the
following:
• Server name
• Network settings
• Domain settings
• Roles or features
• Apps
You must configure the first two of those items on the server locally; a brief sketch follows this list. However, you can complete the remaining settings and tasks several ways, including:
• Locally, using the Sconfig.cmd command.
• Remotely, by using Server Manager on another computer and adding your Server Core server to Server Manager or to Windows Admin Center.
• Remotely, by using Windows PowerShell remoting, assuming that WinRM is enabled through the firewall.
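For the first two items, the following is a minimal local configuration sketch; the server name, interface alias, and IP addresses are placeholders:
# Rename the server (restarts the computer)
Rename-Computer -NewName "LON-SVR3" -Restart
# After the restart, assign a static IP address and DNS server
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 172.16.0.30 -PrefixLength 24 -DefaultGateway 172.16.0.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 172.16.0.10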
Manage and service Server Core
You can use Sconfig.cmd from the Server Core command prompt to configure fundamental settings on your Server Core installation. After you start your server and sign in, Sconfig loads automatically. Table 2 describes the configurable settings.
Tip: The password is blank by default. You must change it the first time you sign in.
Table 2: Sconfig options
Domain/workgroup: Enables you to join an AD DS domain or a workgroup.
Computer name: Enables you to change the auto-generated computer name to something more meaningful.
Add local administrator: Enables creation of an additional administrator account on the local computer.
Remote management: Enables (or disables) WinRM by enabling the listening service and creating a firewall exception. This is turned on by default.
Update settings: Enables you to select an install mode for updates from Windows Update.
Install updates: Checks for, and installs, updates according to how you've configured the Update settings option.
Remote desktop: Enables Remote Desktop through the firewall.
Network settings: Provides access to each installed network adapter by index number. For each adapter, you can configure the IP settings.
Date and time: Provides a means to change the date and time. Displays the Date and Time dialog box from Control Panel when selected.
Telemetry setting: Enables you to control whether telemetry data is gathered and forwarded to Microsoft.
Windows activation: Provides three options: display license information, activate Windows, and install product key.
Log off user: Signs out the current user.
Restart server: Restarts the server.
Shut down server: Shuts down the server.
Exit to command line (PowerShell): Enables you to quit Sconfig and enter Windows PowerShell.
To select an option, enter the corresponding numeric value, and then select Enter.
Tip: If you exit Sconfig, you can use Windows PowerShell to review and change your server’s
configuration.
Some applications require Windows Server features that aren't available in Server Core, so Microsoft provides the Server Core App Compatibility Feature on Demand (FOD). You can use FOD to add the compatibility components that such apps require to your Server Core deployments. However, Windows Server doesn't include FOD by default.
Tip: Visit Server Core App Compatibility Feature on Demand (FOD) for more information on
installing and using FOD.
After you’ve installed FOD, additional components are available on your server, including:

Device Manager.

Event Viewer.

Failover Cluster Manager.

File Explorer.

Microsoft Management Console.

Performance Monitor.

Resource Monitor.

Windows PowerShell ISE.
You can install FOD in two ways:
• Using Windows PowerShell. Run the following PowerShell command, and then restart your server:
Add-WindowsCapability -Online -Name ServerCore.AppCompatibility~~~~0.0.1.0
Note: This option requires connectivity to Windows Update.
• Downloading the FOD. Use this method if you can't connect to Windows Update. Use the following procedure:
a. Download the ISO for FOD.
b. Run the following PowerShell commands:
Mount-DiskImage -ImagePath drive_letter:\folder_where_ISO_is_saved\ISO_filename.iso
Add-WindowsCapability -Online -Name ServerCore.AppCompatibility~~~~0.0.1.0 -Source <Mounted_Server_FOD_Drive> -LimitAccess
c. Restart the server.
Lesson 3: Prepare for upgrades and
migrations
One of the key tasks when deploying Windows Server 2022 is identifying when you should upgrade
an existing Windows Server deployment by using the existing hardware or when you should migrate
the roles and features to a clean installation of Windows Server 2022 on new hardware.
You'll also want to use available guidance documentation and tools to determine which options are most suitable and to automate the process. This lesson describes the considerations
for performing an in-place upgrade or migrating to a new server. It also provides scenarios you can
compare to your current business requirements and explains the benefits of migrating to a clean
installation of Windows Server 2022. The lesson also provides you with information about tools
and guidance you can use to assess your own environment and help you deploy Windows Server
2022.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe in-place upgrades and server migration.
• Explain when to perform an in-place upgrade.
• Describe migration benefits.
• Migrate server roles and data within a domain.
• Migrate server roles across domains or forests.
• Describe solution accelerators for migrating to the latest Windows Server edition.
• Describe considerations and recommendations for server consolidation.
In-place upgrades vs. server migration
If you’re deploying Windows Server 2022 into a network that has older Windows Server versions,
you can choose between upgrading existing servers or migrating to new hardware. There are pros
and cons to both approaches.
In-place upgrade scenarios
In an in-place upgrade, you install Windows Server 2022 on servers that already run an earlier version of Windows Server. When you perform an upgrade, Windows Server 2022 replaces the current OS while preserving the applications, data, and settings from the old OS. It's important to note that after an upgrade occurs, you can no longer sign in to the previous OS.
Precautions to take before running an in-place upgrade
Before attempting an upgrade, make a list of the hardware and software running on the server.
Although unlikely, there could be applications or hardware that were supported in the old OS,
but that Windows Server 2022 doesn’t support. You should research compatibility to verify that
hardware and software will be compatible with Windows Server 2022. Check with the application
vendors to confirm whether you need to update applications so they’ll run on Windows Server
2022 or if you need to purchase new versions.
It’s imperative that you create a complete backup of the server before attempting the upgrade. In
rare situations, the upgrade can fail, and you might find you can’t sign into the old OS or Windows
Server 2022. In that case, you can restore the old OS and data to the server from your backup.
Be aware that during the upgrade process, the server will be unavailable on the network. You
might have to either temporarily move any critical services to another server or perform the
upgrade during the hours when the network typically isn’t in use, such as at night or on weekends.
The fact that an upgrade imports everything from the old OS can also be a disadvantage: it imports what you want, but potentially also things you don't want. For example, if the old OS has a virus, it'll carry over to Windows Server 2022. Be sure to run a full antivirus scan on the server before running the upgrade.
Similarly, any unsecured settings or corrupt configuration files in the registry will get imported into
Windows Server 2022.
What is migration?
Migration refers to performing a clean install of Windows Server 2022 on new hardware and then
migrating settings, applications, and data from the old server to the new one.
Migrations are useful because sometimes upgrades aren't suitable or aren't possible. For example, if the existing server has hardware or software that isn't compatible with Windows Server 2022, it won't be possible to run an upgrade. Even if there are no compatibility issues, you might simply want to move to newer, more powerful hardware. In those two scenarios, you would perform
Unlike an upgrade, you can perform migrations without disrupting existing network functionality.
The new servers can be set up in an isolated environment until ready and then deployed into the
production environment.
A migration begins with a clean install of Windows Server 2022 on a new server. After that's complete, on your new Windows Server 2022 server, you'll need to:
• Install any required applications. If an application was running on the old server that's being replaced, you should check with the application vendor whether there's a newer version of the application you need to use on Windows Server 2022.
• Configure any roles or features that were enabled on the old server, such as Dynamic Host Configuration Protocol (DHCP) or file sharing, on the new Windows Server 2022 server.
• Copy any data that was stored on the old server to the new server.
• Configure any shared folders or printers that were configured on the old server.
Windows Server 2022 includes tools to help with migrations. Collectively, these are known as
Windows Server Migration Tools, which a later section of this lesson details.
When to perform an in-place upgrade
An in-place upgrade involves upgrading the Windows Server OS on a server that's running an earlier version. A benefit of an in-place upgrade is that you avoid hardware expenses because you install Windows Server 2022 on the existing hardware. Another benefit is that files, settings, and programs remain intact on the server.
You would choose an in-place upgrade of the Windows Server OS in the following scenarios:
• When the hardware configuration of the existing servers meets the requirements for Windows Server 2022. Because the hardware requirements for Windows Server 2022 don't differ significantly from those for Windows Server 2016 or 2019, you can perform an in-place upgrade on those servers.
• When the software products that run on the existing servers support an in-place upgrade from Windows Server 2019 or an earlier version. Before performing an in-place upgrade, you must list all software products that are running on the server, such as SQL Server, Exchange Server, non-Microsoft software, and antivirus software. Next, verify that these products support an in-place upgrade of Windows Server. If so, refer to the specific product's documentation to determine how to perform an in-place upgrade, including any issues or risks that might occur.
• When you want to keep all user data that's on the existing servers, such as data stored on file servers, and the security permissions for accessing that data. When performing an in-place upgrade, user data and security permissions for accessing the data remain unchanged. This scenario is convenient because after the in-place upgrade, users can continue to access their data on the same file servers.
• When you want to install Windows Server 2022 but keep all roles, features, and settings of the existing server. Before performing an in-place upgrade on a server that has specific roles, features, or settings, such as Dynamic Host Configuration Protocol (DHCP), DNS, or AD DS, list those configurations. Then, check whether those configurations support an in-place upgrade of Windows Server. If so, refer to the detailed instructions for the specific roles, features, or settings on how to perform the in-place upgrade, including any issues or risks that might occur.
If none of these scenarios meets your organization's requirements, you should instead perform a migration to Windows Server 2022.
Migration benefits
When deploying Windows Server 2022, some organizations should consider migration instead of
an in-place upgrade. There can be risks that arise from an in-place upgrade, such as server
unavailability or data being inaccessible. Therefore, your organization might choose to perform a
migration because of the following benefits:
• You will deploy servers with the Windows Server 2022 OS installed, and they won't affect the current IT infrastructure. When you install Windows Server 2022, you can perform tests, such as driver or system performance tests, before you introduce that server to the domain. In this way, the process of installation and testing is less likely to affect your current IT infrastructure.
• You will perform software product migration in a separate environment. For any software solution on an earlier Windows Server version, you must refer to the product documentation for information about how to migrate that solution to Windows Server 2022. In some scenarios, software products that you're using aren't supported for installation on Windows Server 2022, and you will require newer versions of those software products. In this case, by using migration, you can perform systematic installation of the OS and the software products in a separate environment. This ensures that the migration doesn't affect the availability of current services that the software provides.
• You will perform migration of server roles, features, and settings in a separate environment. As with the migration of software products, refer to the documentation on how to migrate the specific roles, features, or settings, such as DHCP, DNS, or AD DS, to Windows Server 2022. Again, migration enables you to perform systematic configuration in a separate environment, which means that the migration shouldn't affect the availability of server roles, features, and settings.
• New OS enhancements are enabled by default. When performing an in-place upgrade, for compatibility reasons, Windows Server 2022 is configured with the settings of the version being upgraded. This means that many enhancements that Windows Server 2022 introduces, such as security, functionality, or performance enhancements, aren't enabled by default. When performing a migration, Windows Server 2022 deploys as a clean installation with all new enhancements enabled. This ensures that the OS is more secure and has new functionality enabled by default.
Migrate server roles and data within a domain
You can migrate many roles and features by using Windows Server Migration Tools, a built-in Windows Server feature, whereas you can migrate file servers and storage by using Storage Migration Service.
Storage Migration Service overview
Storage Migration Service enables you to migrate data from multiple sources to either an on-premises physical computer running Windows Server or a VM on Hyper-V or in Microsoft Azure. The primary usage scenario for Storage Migration Service is to migrate an existing file server to a new file server. Unlike simply copying files from one server to another, Storage Migration Service enables you to preserve all settings and even create a new VM if you move your data to Microsoft Azure. With Storage Migration Service, you can also move data from Linux and NetApp Common Internet File System (CIFS) servers and then transfer the data to new servers.
The graphical management interface of Storage Migration Service is integrated as part of Windows
Admin Center. Windows Admin Center provides the user interface for configuring Storage Migration
Service but doesn’t manage the migration process. Alternatively, you can configure Storage
Migration Service using Windows PowerShell cmdlets.
One of the key benefits of the Storage Migration Service is that, during the migration process, it
assigns the identity of the source server to the target server. This includes the server name and
the server Internet Protocol (IP) addresses. By doing this, clients and apps configured to access a
share on the source server can automatically begin using the migrated data on the target server
without needing to update drive mappings or file share names in scripts or apps.
Storage Migration Service can also migrate local user accounts from a source to a destination
server. This can be useful if you have local user accounts created for administrative access or
applications.
When you migrate data with Storage Migration Service, the process goes through several phases,
as follows:
1. Storage Migration Service makes an inventory of source servers to obtain data about their files
and configuration.
2. Storage Migration Service performs a data transfer.
3. Storage Migration Service performs an identity cutover (optional phase).
After cutting over, the source servers are still functional but are in a maintenance state, which
means they aren’t accessible to users and apps by their original names and IP addresses. The files
are still available to the administrators if required; you can decommission the source servers when
you’re ready.
Server Migration Tools overview
Windows Server provides a set of Windows PowerShell cmdlets called Windows Server Migration
Tools. These cmdlets migrate configuration information and data from a source server to a
destination server. In most cases, you do this to migrate server roles and features from a server
that you plan to retire to a new server running a more recent version of the OS.
Windows Server Migration Tools lets you migrate the following roles and features:
• IP configuration
• Local users and groups
• DNS
• DHCP
• Routing and Remote Access
In some cases, you can use Windows Server Migration Tools to migrate file shares and some other
roles, but that depends on each specific configuration. However, as a best practice for migrating
shares, use Storage Migration Service instead.
Migrations with Windows Server Migration Tools are supported between physical and virtual computers, and between Windows Server installation options (Server with Desktop Experience or Server Core). Source servers must be running Windows Server 2008 or newer. To migrate from a Server Core installation of Windows Server, you must have the Microsoft .NET Framework installed.
Windows Server Migration Tools is a feature of Windows Server. You install it by using the Add
Roles and Features Wizard or PowerShell.
To perform a migration by using Windows Server Migration Tools, the cmdlets must be installed on both the source and destination servers. Because Windows Server Migration Tools is a feature of Windows Server, you can install it by using graphical tools such as Windows Admin Center or Server Manager. Alternatively, you can install it by using the Windows PowerShell command Install-WindowsFeature Migration.
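After installation, the migration itself is driven by the Smig cmdlets. The following is a hedged sketch of exporting settings on the source server and importing them on the destination; the feature ID and path are placeholders, and both cmdlets prompt for a password that protects the exported store:
# On the source server: export the DHCP role settings to a folder
Export-SmigServerSetting -FeatureId DHCP -Path C:\MigrationStore -Force
# On the destination server: import the settings from that folder
Import-SmigServerSetting -FeatureId DHCP -Path C:\MigrationStore -Force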
Learn more: To learn more about Windows Server Migration Tools, refer to Install, Use, and
Remove Windows Server Migration Tools.
Migrate server roles across domains or forests
Organizations could choose to deploy Windows Server 2022 in a new AD DS forest. In this
scenario, administrators should plan the migration steps carefully to provide users with seamless
access to data and services during the migration process. After the migration is complete,
administrators should begin the process of decommissioning and removing the infrastructure
of the previous OS environment.
Microsoft offers the Active Directory Migration Tool (ADMT) to simplify migrating user, group,
and computer accounts between domains. If effectively managed, you can complete a migration
without significantly impacting user productivity. ADMT supports both a graphical and a scripting
interface, and you can use it to perform the following tasks, which are commonly part of a domain
migration:
• User account migration.
• Group account migration.
• Computer account migration.
• Service account migration.
• Trust relationship migration.
• Microsoft Exchange Server migration.
• Security translation of migrated computer accounts.
• Rollback and retry operations if transient failures occur.
• Generating reports illustrating a migration's progress.
If you’re defining a new naming structure for users or groups, you can implement that as part of
the migration process. Use an included file, which identifies the desired source objects and the
destination names. You can’t rename computer accounts during migration.
When you migrate user objects, all attributes are migrated by default. It’s possible to filter out
attributes in the migration process if the attributes’ values are no longer valid. For example, you
can do this for attributes used by retired applications. If there are application-specific attributes,
you’ll need to research the best approach for migrating them.
You should install ADMT on a member server with Desktop Experience in the target AD DS forest.
Before installing ADMT, you must install Microsoft SQL Server, which stores migration information.
You can also use SQL Server Express for ADMT, but you should monitor the size of the database to
ensure that it doesn’t reach its maximum limit and stop functioning.
If you’re using password synchronization, you must install Password Export Server (PES) on a DC
in the source. PES is responsible for exporting user password hashes from the source to the target.
If you don’t use PES, migrated user accounts are configured with a new password that’s stored in
a text file. Without password synchronization, you need a process to distribute new passwords
to users.
Solution accelerators for migrating to the latest Windows
Server edition
When you’re considering deployment of Windows Server, you can use software tools to help design
and plan your server deployments. These tools include deployment accelerators:
• Microsoft Deployment Toolkit (MDT). Enables you to automate the deployment of Windows Server.
• Microsoft Assessment and Planning Toolkit (MAP). Provides guidance on assessing your organization's infrastructure in readiness for Windows Server.
How to use MDT
MDT enables you to deploy Windows Server and Windows client operating systems to computers
with or without existing operating systems installed. By itself, MDT delivers a lite-touch deployment
process. However, when combined with additional technologies, such as Endpoint Configuration
Manager and Windows Deployment Services, you can deliver a zero-touch deployment experience
for your Windows deployments. By using MDT, you can create task sequences that perform the
required deployment steps, including preparing the target server’s hard disk, applying an
operating-system image, and installing required apps during the deployment process.
Tip: Visit Microsoft Deployment Toolkit documentation to download MDT for free.
How to use MAP
MAP is a solution accelerator that you can use to gather and analyze inventory data about your
Windows Server infrastructure. You can then use the analysis to determine whether your
organization is ready for Windows Server 2022.
You can also use MAP to perform the following tasks:
• Perform an analysis of your organization's readiness to shift infrastructure to the Microsoft cloud, including:
  o Azure VM readiness.
  o Azure VM capacity.
  o Microsoft 365 readiness.
  o Private cloud fast track analysis.
• Gather inventory about desktop deployments and determine readiness for Windows client and Microsoft Office.
• Gather inventory about server deployments and determine readiness for Windows Server.
• Report on your organization's desktop and server virtualization status.
• Discover and report on your SQL Server deployments, including which servers' workloads could shift to Azure.
• Gather inventory and report on:
  o Number of users and devices.
  o Windows Server instances.
  o SQL Server deployments.
  o Active users and devices.
  o SharePoint Server deployments.
  o Exchange Server deployments.
  o Configuration Manager instances.
  o Windows volume licensing and Remote Desktop licensing.
Tip: MAP is also available as a free download from the Microsoft Download website.
Considerations and recommendations for server
consolidation
When deploying Windows Server 2022, you should plan the placement of server roles, such as AD DS, DNS, and DHCP, to make the best use of hardware and network resources. Organizations should consider cohosting multiple roles where possible to achieve the most economical solution. Virtualization is also a form of server role consolidation. However, you shouldn't implement cohosting if it affects server performance or available disk space. Therefore, organizations should evaluate and test whether installing multiple server roles on a server would result in lower overall performance or excessive disk usage. Furthermore, organizations should evaluate the security risks of collocating server roles. For example, you shouldn't collocate the server that hosts the root Active Directory Certificate Services role with other server roles, and it should be offline most of the time.
Smaller organizations should consider the following best practices:
• Plan which server roles you need. If the OS supports cohosting those roles on one server, you can install multiple roles and cohost them on a single server. If cohosting multiple server roles on one physical server affects the performance of the physical server, administrators shouldn't cohost the server roles and should install server roles on different physical servers.
• If the OS on a physical host doesn't support cohosting of multiple server roles, administrators should deploy server roles on multiple physical servers.
Medium and large organizations should consider the following performance and high-availability issues when cohosting:
• If you're cohosting multiple roles on a single server, there might be performance issues because of the large number of client computers that are connecting to that server. In this situation, organizations should consider adding multiple servers that cohost the same multiple roles. They also should consider relocating some roles from the first server to the other physical servers.
• High-availability configurations of roles have specific requirements and settings, which might not support cohosting of multiple roles. In this situation, organizations could have a high-availability solution for one server role but then need to locate the remaining roles on other servers.
Lesson 4: Windows Server activation
models
To prove that your organization has purchased appropriate licenses for Windows Server 2022, you
must activate the server. It’s important that you understand the licensing models for different
editions of Windows Server.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe licensing models for Windows Server.
• Describe Windows Server activation.
Licensing models overview
For Windows Server 2022, you need to purchase licenses based on the number of processor cores rather than the number of servers. As part of planning your deployment, you must ensure you have the proper number of licenses for your Windows Server 2022 installation.
Windows Server 2022, similar to Windows Server 2019, is licensed by processor core and not by server, except for the Essentials edition.
All your Windows Server computers must meet the following requirements:
• All physical cores must be licensed.
• There must be a minimum of eight core licenses per processor.
Important: Servers with more processors require more core licenses.
For example, a server with four processors needs to have 32 core licenses. This is because each
processor needs eight core licenses.
Tip: You can purchase licenses in two-core packs or 16-core packs.
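As a back-of-the-envelope illustration of these counting rules (a sketch for planning only, not licensing advice):
# Minimum cores to license: at least 8 per processor and at least 16 per server
$processors = 4; $coresPerProcessor = 6
$minimum = [Math]::Max($processors * [Math]::Max($coresPerProcessor, 8), 16)
$minimum   # 32 core licenses, matching the four-processor example above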
Table 3 lists the available editions of Windows Server 2022 and their licensing models:
Table 3: Windows Server 2022 licensing
Datacenter: Ideal for highly virtualized datacenters and cloud environments. Core-based licensing; requires Windows Server CALs.
Standard: Ideal for physical or minimally virtualized environments. Core-based licensing; requires Windows Server CALs.
Essentials: Ideal for small businesses with up to 25 users and 50 devices. Specialty server (server license); no CAL required.
When planning for Windows Server licensing, you should follow these guidelines:
• For each Windows Server Datacenter and Standard edition server, you must have a minimum of a single base license, which covers 16 core licenses. If you later decide to upgrade the number of CPUs or cores, you must also purchase additional licenses to cover those upgrades.
• All physical cores installed in a server must be licensed. If you have a server with more than 16 cores, you need to buy additional core licenses, which can be purchased in packs of 2, 4, or 16 cores.
• Windows Server 2022 Standard edition provides licenses for up to two VMs. If you want to run more than two VMs on Windows Server Standard, you need additional core licenses. For this scenario as well, you can purchase them in packs of 2, 4, or 16 cores.
• Windows Server 2022 Standard and Windows Server 2022 Datacenter require CALs for every user or device that connects to the server.
What is Windows Server activation?
To ensure Windows Server software is legitimate, you must activate it after installation. You can
activate it manually or automatically. Manual activation requires that you enter a product key. You
can use a retail product key to activate a single Windows Server computer. Alternatively, you can
use a multiple activation key (MAK) to activate a specific number of Windows Server computers.
Manual activation
To manually activate a server, you need to enter a product key. Product keys might only be valid for
a single server, or your organization might have licenses that allow multiple activations with the
same key. To enter the activation key, an administrator can select the Start button and enter
activation. When it displays on the Start menu, select Activation settings. Select Change product
key, and then enter the product key.
Automatic activation
If you’re deploying a large number of servers, manual activation becomes too labor-intensive, and
so you need to use the volume licensing options that Microsoft provides, which include:

Deploying a KMS (Key Management Services) server. When you deploy KMS in your network,
any new servers you deploy will automatically activate themselves by contacting the KMS
server. KMS only works if you’re deploying at least five new servers into the network.

Active Directory-Based Activation. With this service, activation information is stored in Active
Directory on servers configured as domain controllers (DCs). Computers on the network
contact a DC to activate themselves.

Volume Activation Services server role deployed on an existing server. Volume Activation
Services supports both KMS- and Active Directory-Based Activation.

Multiple Activation Key (MAK). When activated with a MAK key, a server contacts Microsoft on
the internet to verify licensing. If some servers on your network don’t have access to the
internet, you can deploy a MAK proxy server that contacts Microsoft on behalf of computers
that need to be activated.
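For reference, manual key installation and activation can also be scripted with the built-in Software Licensing Management Tool. The key below is a placeholder:
# Install a product key, then attempt online activation
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr /ato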
Tools to help with managing licensing and activation include:
• Volume Activation Management Tool (VAMT). Supports both KMS and MAK and provides a graphical management console. You can download and install VAMT on a client computer or a server.
• Automatic Virtual Machine Activation. This option is specifically for servers hosting VMs. When configured, VMs running on the host are automatically activated, without having to enter product keys for each VM.
Lab 1: Install and configure Windows Server
Please refer to the online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions:
1. You used the Install-WindowsFeature cmdlet in Windows PowerShell to install the DNS
Server role. How could you do this remotely?
2. What major advantages does Server Core have compared to a full Windows Server 2022
installation?
3. Five years ago, your organization bought a new rack-mount server and installed Windows
Server 2016 on it. You now want to install Windows Server 2022 via the upgrade method.
What should you do?
4. Which role can you use to manage KMS?
Note: To find the answers, refer to the Knowledge check slides in the PowerPoint
presentation.
Learn more
For more information, refer to:
• Microsoft Assessment and Planning Toolkit
• Windows Server servicing channels
Module 2: Configure local storage
As more of our world becomes digitized, storage needs are expanding exponentially. The reasons?
Space-hungry data, such as graphics and videos, users creating more documents and using more
apps, and organizational and legal requirements to keep data for longer periods of time. Storage
needs increase even more when you couple this growth with the need to back up all that data, and
often, the need to preserve multiple versions of the same data.
This module introduces the basics of disks that you can use for storage and the types of volumes
you can create on a disk. It also covers the distinct types of file systems supported by Windows
Server 2022, and how to provide faster and more fault-tolerant storage by utilizing Redundant
Array of Independent Disks (RAID).
By completing this module, you'll achieve the knowledge and skills to:
• Explain how to manage disks in Windows Server.
• Explain how to manage volumes in Windows Server.
Lesson 1: Manage disks in Windows Server
Identifying which storage technology to deploy is the first critical step in addressing the data-storage requirements of your organization. However, this is only the first step. You also must determine the best way to manage that storage, which disks you're going to allocate to a storage solution, and which file systems you'll use.
By completing this lesson, you'll achieve the knowledge and skills to:
• Select a partition table format.
• Select a disk type.
• Select a file system.
• Implement and configure Resilient File System (ReFS).
• Use .vhd and .vhdx file types.
Select a partition table format
Before Windows can use a disk, you must partition that disk. Partitions are physically and logically
separate areas on the same disk. For example, you could install different operating systems on
each partition. This would allow your computer to boot into different operating systems, such as
Windows and Linux.
You could also use partitions to separate different file types. For example, you could store the
operating system (OS) in one partition, and your data in another. In this scenario, if you needed to
reinstall the OS, you could simply do that on the partition containing the OS, without affecting the
partitions that contain your data.
With multiple partitions, you can also use a different file system in each partition. Later in this module, you'll learn about the file systems available in Windows Server 2022.
Partition tables store data about partitions, such as their physical location on the disk and which
partition the system should boot from. These standards define where the partition tables are
stored on the disk and the number and size of partitions that are possible.
Current industry standards define two methods for partitioning a disk: master boot record (MBR)
and globally unique identifier (GUID) partition table (GPT). Windows Server 2022 allows you to
initialize a disk either as an MBR or a GPT disk.
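For example, the following is a minimal PowerShell sketch that initializes a newly attached disk as GPT and creates a single volume; the disk number is a placeholder, and initializing erases the disk's contents:
# Initialize disk 1 with a GPT partition table, then create and format a volume
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS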
MBR
If a disk is initialized as an MBR disk, there’s only one copy of the partition table, which must be
stored in sector 0 of the disk (sector 0 is the first available space on the disk). The problem with
this is that it creates a single point of failure. If sector 0 becomes physically damaged, the system
can’t boot because partition information isn’t available.
Apart from that vulnerability, MBR disks have the following limitations:
• There’s a limit of four partitions per disk.
• The maximum disk size supported is 2 terabytes (TB). If the disk is larger than 2 TB and is initialized as MBR, any space beyond the 2 TB is unusable.
MBRs and primary and extended partitions
A primary partition is a partition that you can make bootable. This allows you to put different
operating systems on each primary partition, and then designate which primary partition is the
active primary partition. When the system starts, it will attempt to boot into whatever OS is in the
active primary partition.
The extended partition concept was introduced to partially get around the MBR four-partition limit.
In an extended partition, you can create multiple logical drives. Each logical drive is just like a
primary partition, except in one respect: you can’t make it bootable.
If you create three primary partitions, and the fourth partition you create doesn’t use up all the
remaining space on the disk, Windows automatically makes it an extended partition. This helps
utilize any leftover space that would otherwise become unusable. By configuring this space as an
extended partition, Windows ensures that any free space in the extended partition can be used by
creating logical drives within the extended partition.
GPT
The GPT standard was introduced to replace the aging MBR standard and its various limitations.
For example, because GPT disks store the partition table in two areas of the disk for fault
tolerance, the single point of failure on an MBR disk is no longer an issue. GPT disks also
overcome the MBR disk size and partition limitations.
With GPT disks:
• You can create up to 128 partitions per disk.
• Disk and partition sizes can be up to 18 exabytes (EB).
Note: The terms partition and volume are often used interchangeably. Microsoft tends to use the term volume for dynamic disks and the term partition for basic disks. One major difference between partitions and volumes is that volumes can span multiple physical drives, whereas partitions consist of space on one disk.
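If you manage many servers, you can also initialize disks from Windows PowerShell. The following is a minimal sketch using the Storage module; disk number 1 here stands in for a hypothetical, newly attached disk:

# List all disks; a partition style of RAW indicates an uninitialized disk
Get-Disk

# Initialize disk 1 with a GPT partition table
Initialize-Disk -Number 1 -PartitionStyle GPT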
Select a disk type
Windows Server 2022 supports two types of disk configurations: basic and dynamic. A basic disk is
a physical disk that can contain up to four primary partitions, or three primary partitions and an
extended partition with multiple logical drives. Basic disks follow industry standards. You can move
a basic disk to another computer that’s running a different OS. The disk would still be readable
and usable by the OS in the destination computer.
Microsoft introduced the concept of dynamic disks in Windows 2000; dynamic disks aren’t an industry standard. A dynamic disk is a physical disk that stores configuration data in a nonstandard database that other operating systems don’t process. You can configure both MBR and GPT disks as basic or dynamic.
Dynamic disks support the creation of volumes that span multiple disks, such as striped, spanned,
or RAID 5 volumes. If you have multiple dynamic disks installed in a computer, the configuration
data for the dynamic disks replicates to all the other dynamic disks. This provides a degree of fault
tolerance should the database on one disk become corrupted. You can reconfigure dynamic disks
without requiring a reboot.
Windows configures all disks as basic, by default. However, you can convert a basic disk to a
dynamic disk without losing any data. To revert the disk back to basic, though, you’ll first need to
delete all volumes on that disk. If you want to preserve the data on the volumes, you should back
them up before deleting them.
Note: If you want to create spanned, striped, mirrored, or RAID-5 volumes, you must use a
dynamic disk. This means that you must convert the basic disk to a dynamic disk prior to
creating the volumes.
Physical disk types
There are many types of physical disks available to install on servers. As technology has evolved, these disks have become larger and faster. However, until the introduction of solid-state drives, physical disks were all mechanical drives with a spinning magnetic platter and a read/write head that read and wrote data to the platter.
Recent disk development has focused on the interface technology used to connect the disk to the computer. Ranked by age and speed, the most prominent technologies are:
• Enhanced Integrated Drive Electronics (EIDE). EIDE is a very old standard and one you’re unlikely to find deployed in servers today. Apart from being slow, EIDE is limited to disks that are 128 gigabytes (GB) or smaller. The data transfer rate is 133 megabytes per second (MB/s).
• Serial Advanced Technology Attachment (SATA). SATA was developed as a replacement for EIDE. It provides an interface that you can use to connect hard disks or optical drives. The latest version of SATA, SATA 3, can transfer data at up to 600 MB/s. Earlier versions of SATA support 150 MB/s and 300 MB/s. SATA is one of the least expensive options for storage. However, while it’s good for storing large amounts of data, there are other, higher-performance options.
• Small computer system interface (SCSI). SCSI provides faster speeds than SATA. Onboard processors in the controller cards can offload work from the computer’s processors. SCSI drives offer high reliability and features such as hot swapping—the ability to replace disks without having to shut down the server. SCSI drives can operate at up to 640 MB/s.
• Serial attached SCSI (SAS). SAS is an evolution of SCSI that moves from parallel communication to faster serial communication between the disk and computer. The latest version, SAS-5, provides speeds of up to 45 gigabits per second (Gbps). SAS also provides backward compatibility with SCSI and interoperability with SATA drives. SAS has become the dominant technology in datacenters.
• Solid-state drive (SSD). SSD refers to the technology within the disk, rather than a connection interface. Most SSDs use the SATA interface. SSDs have no moving parts and are therefore much more resilient than mechanical hard disk drives (HDDs). Instead of a spinning magnetic platter, data is stored in solid-state memory. This means less power is consumed, less heat is generated, and speeds are much faster than with HDDs. SSDs used to be much more expensive than HDDs, but in the last few years prices have dropped dramatically, and they’re now virtually the same price.
Select a file system
After you initialize a disk as MBR or GPT and configure the disk as basic or dynamic, the next step
is to create a partition or volume. On a basic disk, you can only create a partition, which is the
industry standard and must reside entirely on one disk. However, on a dynamic disk, you can
create volumes, which are more versatile than partitions, as they can span multiple disks and be
reconfigured dynamically without having to reboot the system.
When you create a volume or partition, you must format it with a file system. A file system creates
and manages an index of files stored on the volume. Some file systems provide other functions as
well, such as setting permissions on files and folders or encrypting specific files and folders.
Windows Server 2022 supports three types of file systems.
FAT file systems
• File allocation table (FAT). FAT is an older file system originally used with the earliest computers and, as such, has several limitations. For example, it has no built-in security features, which means anyone with access to the disk can access all the data on the disk. Another limitation is the maximum volume size, which is 2 GB. However, the release of FAT32 addressed this size limitation.
• FAT32. FAT32 expanded the maximum volume size from 2 GB to 2 TB. Otherwise, the file system is the same as FAT. For example, FAT32 still provides no security features.
• Extended FAT (exFAT). exFAT is a variation of FAT32. It provides for volumes and files of up to 128 petabytes (PB). It’s primarily used for removable media, such as USB flash drives.
NTFS
New Technology File System (NTFS) was introduced with Windows NT. With volumes that can
theoretically be up to 8 PB in size, NTFS supports much larger disks and volumes than FAT, FAT32,
or exFAT.
NTFS improves upon the FAT options in the following ways:
• Setting permissions. NTFS allows for setting permissions on files and folders. This helps improve security by enabling administrators to control who has access to data.
• Auditing. NTFS supports auditing. If set up, auditing allows administrators to track who accessed a particular file or folder and what they did with it, such as creating, editing, or deleting files or folders.
• File compression. NTFS supports file compression, which helps save on disk space.
• Greater reliability. NTFS is designed to be more reliable than the FAT options. It uses transaction logging to recover from errors caused by sudden power failures. When the system restarts after a power failure, the transaction log is checked, and any operations that were aborted because of the power failure are either completed or cleanly rolled back.
Resilient File System (ReFS)
ReFS was introduced in Windows Server 2012. It’s designed to provide fault tolerance that can
more precisely detect and fix corruptions as they occur, with proactive error correction.
ReFS also offers the following improvements over NTFS:
• Performance. ReFS provides better performance than NTFS.
• Storage. ReFS supports volumes and files of up to 35 PB.
• Virtual machine use. Microsoft recommends using ReFS for virtual hard disks (VHDs) in virtual machines (VMs), the Storage Spaces Direct feature, and for archiving data.
However, ReFS has some limitations:
• It can’t be used for the system volume.
• It doesn’t support file compression or file encryption. If you need those features, you should choose NTFS.
• You can’t move a disk formatted with ReFS to a computer running an OS older than Windows Server 2012 or Windows 8.1. The ReFS disk won’t be recognized because older operating systems don’t support ReFS.
Allocation unit size
When creating a volume, you’re prompted to format the volume with a file system. Formatting a
volume creates the file system in the chosen volume. This process also requires choosing the
allocation unit size.
The allocation unit size is Microsoft’s terminology for what the industry more commonly calls the cluster size. The cluster size sets the minimum amount of space that a file will use. Any file you save to the volume will use one or more clusters. For example, imagine a file that’s just 1 byte in size. If you’ve chosen a cluster size of 512 bytes, when you save that file, it’ll use that entire cluster, which means that 511 bytes are wasted. If you’d instead chosen a cluster size of 4,096 bytes when creating the volume, that same 1-byte file would take up 4,096 bytes.
Typically, a smaller cluster size makes more efficient use of disk space, and a larger cluster size
provides better performance. If you’re unsure which size to choose, select default as the cluster
size. In that case, Windows will choose a cluster size based on the size of the volume you’re
creating.
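If you format volumes from Windows PowerShell, the Format-Volume cmdlet exposes the allocation unit size directly. The following is a minimal sketch; the drive letter, label, and 4,096-byte cluster size are illustrative choices, not recommendations:

# Format volume D: with NTFS and a 4 KB allocation unit (cluster) size
Format-Volume -DriveLetter D -FileSystem NTFS -AllocationUnitSize 4096 -NewFileSystemLabel "Data"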
Implement ReFS
ReFS is a file system that’s based on the NTFS file system. It provides the following advantages over NTFS:
• Metadata integrity with checksums.
• Expanded protection against data corruption.
• Maximized reliability, especially during a loss of power (whereas NTFS has been known to experience corruption in similar circumstances).
• Large volume, file, and directory sizes.
• Storage pooling and virtualization, which makes creating and managing file systems easier.
• Redundancy for fault tolerance.
• Disk scrubbing for protection against latent disk errors.
• Resiliency to corruption with recovery for maximum volume availability.
• Shared storage pools across machines for additional failure tolerance and load balancing.
ReFS also inherits some features from NTFS, including the following:
• BitLocker Drive Encryption.
• Access control lists (ACLs) for security.
• Update sequence number (USN) journal.
• Change notifications.
• Symbolic links, junction points, mount points, and reparse points.
• Volume snapshots.
• File IDs.
ReFS uses a subset of NTFS features, so it maintains backward compatibility with NTFS. Therefore,
programs that run on Windows Server 2022 can access files on ReFS just as they would on NTFS.
However, a ReFS-formatted drive isn’t recognized when placed in computers that are running
Windows Server operating systems that were released before Windows Server 2012. You can use
ReFS drives with Windows 11, Windows 10, and Windows 8.1, but not with Windows 8.
NTFS enables you to choose a volume’s allocation unit size when you format it. With ReFS, however, each volume uses a fixed allocation unit size of 64 kilobytes (KB), which you can’t change. ReFS also doesn’t support Encrypting File System (EFS) for files.
As its name implies, the new file system offers greater resiliency, meaning better data verification,
error correction, and scalability.
Compared to NTFS, ReFS offers larger maximum sizes for individual files, directories, disk
volumes, and other items. However, you should also be aware of features that ReFS doesn’t
support, but which NTFS does support.
Table 4 lists these features:
Table 4: Features supported by ReFS and NTFS

Functionality | ReFS | NTFS
File-system compression | No | Yes
File-system encryption | No | Yes
Transactions | No | Yes
Object IDs | No | Yes
Offloaded Data Transfer (ODX) | No | Yes
Short names | No | Yes
Extended attributes | No | Yes
Disk quotas | No | Yes
Bootable | No | Yes
Page file support | No | Yes
Supported on removable media | No | Yes
ReFS is ideal in the following situations:
• Microsoft Hyper-V workloads. ReFS has performance advantages when using both .vhd and .vhdx files.
• Storage Spaces Direct. In Windows Server 2022, nodes in a cluster can share direct-attached storage. In this situation, ReFS provides improved throughput and also supports the higher-capacity disks used by the cluster nodes.
• Archive data. The resiliency that ReFS provides means it’s a good choice for data that you want to retain for longer periods.
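Formatting a volume with ReFS works the same way as formatting with NTFS. As a minimal sketch (the disk number, drive letter, and label here are hypothetical), you could provision a new ReFS data volume from Windows PowerShell like this:

# Create a partition on disk 2 that uses all available space, then format it with ReFS
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter R
Format-Volume -DriveLetter R -FileSystem ReFS -NewFileSystemLabel "Archive"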
Learn more: To learn more about ReFS, refer to Resilient File System (ReFS) overview.
Demonstration: Configure ReFS
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Use .vhd and .vhdx file types
A VHD is a file that emulates a physical disk. When attached to a physical computer or a VM, it presents as just another disk and, just like with a physical disk, you can:
• Configure it as MBR or GPT, and basic or dynamic.
• Create partitions or volumes on it.
Types of VHD files
VHD files have either a *.vhd or a *.vhdx extension, with .vhdx being the newer format. .vhd files have a maximum size of 2 TB, whereas .vhdx files can be up to 64 TB in size and offer better resiliency and recovery from errors.
How to use a VHD file
The most common use of VHD files is to provide storage for VMs. You can, however, also create
VHD files for use on physical computers. If you create and attach a .vhd or .vhdx file, it will present
as another drive in Windows File Explorer. You can create and delete folders and files as you would
on a real disk. It’s also easy to back up or move the file to another computer because it’s just a
single file. You simply detach the virtual disk file and then copy or move the file to another
computer.
You can use the VHD boot feature to boot an OS. This enables you to configure multiple operating
systems on the same computer.
You can also easily move or copy VHDs from one computer to another. This allows you to, for
example, create a VM on one computer and then copy it to another to create an additional VM.
You could also use Microsoft Hyper-V to create standard images of servers or client computers,
and then deploy them to other computers.
Create and attach VHDs using the graphical Disk Management console or the diskpart.exe
command-line tool. After you create and attach a VHD, use either tool to create partitions or
volumes on the VHD.
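As a minimal sketch of the Diskpart approach (the file path and the 10-gigabyte maximum size below are illustrative), you could create, attach, and prepare a .vhdx file with the following command sequence:

Create vdisk file="C:\vdisks\data.vhdx" maximum=10240 type=expandable
Select vdisk file="C:\vdisks\data.vhdx"
Attach vdisk
Create partition primary
Format fs=NTFS quick label=VHDData
Assign letter=V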
Lesson 2: Manage volumes in Windows
Server
A volume is a usable area of space on one or more physical disks, formatted with a file system. In Windows Server 2022, you can choose among several different types of volumes to create high-performance storage, fault-tolerant storage, or a combination of both. This lesson explores how to create and manage volumes in Windows Server 2022.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe disk volumes.
• Describe options for managing volumes.
• Explain how to extend and shrink a volume.
• Explain RAID.
• Describe RAID levels.
What are disk volumes?
Before you can use a disk, you must create partitions or volumes on it. Partitions are created on
basic disks and are contained in one physical disk. Volumes are created on dynamic disks and can
be spread across multiple physical disks.
Windows Server 2022 allows you to create many types of volumes, some of which are designed to
improve performance, provide fault tolerance, or both. Most volume types require that you convert
your disk to a dynamic disk.
Volume types supported by Windows Server 2022 include:
• Simple volumes. Don’t require you to convert the disk to a dynamic disk. Instead, simple volumes consist of contiguous space on only one disk. You can extend the volumes onto contiguous space to make them larger and shrink them to make them smaller. If you try to extend a simple volume to noncontiguous space or to space on another disk, you’ll be prompted to convert the disk to dynamic. Both processes, extending and shrinking, are nondestructive, and you don’t lose the data that’s stored on the volume. A simple volume on a dynamic disk is functionally equivalent to a partition on a basic disk.
• Spanned volumes. Are spread across multiple disks. This allows you to create a volume that uses space on multiple disks. As such, spanned volumes require dynamic disks. The volume displays as a normal volume in File Explorer, with a single drive letter and file system. Spanned volumes, however, don’t provide any performance benefit or fault tolerance. If one of the disks making up the volume breaks down, you lose the entire volume.
• RAID-0 volumes. Also known as striped volumes, are created with a minimum of 2 physical disks and a maximum of 32. With data striped across the disks, striped volumes provide enhanced performance through file splitting. Imagine that you’re saving a file to a striped volume that uses two disks. Crudely speaking, your file is split in half, and each half of the file is written to the two disks concurrently. Therefore, it takes around half the time it would take to write the file to a simple volume. Similar benefits are realized when you read from striped volumes. Although striping enhances performance, it doesn’t provide fault tolerance.
• RAID-1 volumes. Also known as mirrored volumes, provide fault tolerance by duplicating the volume onto two disks. You can take any simple volume and mirror it to another disk. Because you now have a copy of the volume on two disks, if either disk fails, you still have access to its mirror on the other disk. However, if either of the disks used for mirroring is a basic disk, you’ll be prompted to convert it to a dynamic disk. You can’t shrink or extend a mirrored volume.
• RAID-5 volumes. Also known as striping with parity, provide the performance benefits of striping in addition to fault tolerance. RAID-5 requires a minimum of three physical disks and a maximum of 32. With data split across disks, RAID-5 volumes provide performance and protection benefits. For example, when saving a file to a RAID-5 volume consisting of three disks, the file is split across two of the disks, giving you the performance benefit of striped volumes. On the third disk, parity information is stored. If one of the disks fails, the missing data is reconstructed in real time from the remaining part of the file and the parity information. However, you can’t extend, mirror, or shrink RAID-5 volumes.
When you install Windows Server 2022, Windows creates both a system volume and a boot volume on the disk you chose to install on. However, this terminology can be a little confusing, as it’s counterintuitive. It’s important to note that a:
• System volume. Is the volume that the OS boots from. This is a small volume containing the Windows boot loader (called Bootmgr) and the boot configuration data (BCD). The boot loader begins loading the OS and reads the BCD.
• Boot volume. Is the volume that contains the rest of the OS files in a folder called Windows and its subfolders. It also contains the Users folder, where user profiles are stored, and the Program Files folder, where applications are installed.
One reason for having two volumes is to allow for encrypting the disk using a Windows feature
called BitLocker Drive Encryption. If you enable BitLocker, the boot volume is encrypted, but the
system volume isn’t. This allows the system to begin the boot process from the unencrypted
system volume.
Options for managing volumes
You can use multiple tools to manage volumes in Windows Server 2022. If you prefer graphical
tools, you can use Server Manager and Disk Management. Command-line options include the
diskpart.exe command and Windows PowerShell.
Server Manager
In Module 1, you learned that Server Manager is installed by default in Windows Server 2022. You
also learned that you could use it to manage disks and volumes on either local or remote servers.
To access volume management tools in Server Manager, in the navigation pane select File and Storage Services, and then select Volumes. The details pane lists existing volumes. You can right-click or access the context menu of a volume for a list of commands for that type of volume, such as Extend volume, Delete volume, and Format. You can also scan the volume for file system errors.
To create a new volume, in the details pane, select the Tasks drop-down menu, and then select
New Volume. The New Volume Wizard then guides you through steps for creating the new volume,
starting with choosing the server and disk you want to work with. After that, you’ll need to:
• Choose a volume size. The wizard lists the minimum size of the new volume and the available capacity, which indicates the maximum size of the volume.
• Choose a drive letter for the volume. You can also choose to mount the volume to a folder on an existing volume. However, the folder must be on an NTFS-formatted volume and be empty. For example, you could mount a new volume as a folder called myvolume on the C: volume. You would then access it in Windows File Explorer as C:\myvolume, instead of using a drive letter.
• Choose a file system. You’ll need to select from FAT, NTFS, or ReFS. You’ll also select its allocation unit size and give it a name.
On the last page of the wizard, you’ll be asked to confirm all your choices, as Figure 6 depicts:
Figure 6: The Confirm selections dialog box
Disk Management console
You can also use Disk Management, a dedicated tool for managing all aspects of disk and volume
management. The Disk Management console lists all disks that Windows finds.
To create a volume, right-click or access the context menu of an area of unallocated space on the desired disk, and choose the type of volume you want to create: simple, spanned, striped, mirrored, or RAID-5, as Figure 7 depicts:
Figure 7: Creating a volume from unallocated space
When the wizard starts, you’ll need to determine:
1. A size for the volume (in megabytes).
2. Whether to mount the new volume using a drive letter or a folder.
3. The file system, allocation unit size, and name for the volume.
4. Whether to select the Perform quick format checkbox. If the box isn’t selected, Windows will
check each sector on the disk to verify that there are no read/write errors. On large volumes,
this can take a long time. If the checkbox is selected, the volume is created without checking
for errors.
You can use the Disk Management tool, similar to Server Manager, to manage disks on remote servers. In Module 1, you learned how to create a Microsoft Management Console (MMC) and then add a snap-in. After you add the Disk Management snap-in, you’re prompted to choose either the local computer or a remote computer to manage, as Figure 8 depicts:
Figure 8: Connecting an MMC to a remote computer
Diskpart.exe
You can also manage disks and volumes using the command-line tool Diskpart.exe. Diskpart is
useful in Server Core installations, but it’s also available in Windows Server 2022 Desktop
Experience installations.
You can also use Diskpart in scripts. For example, if you need to create the same set of volumes
on multiple servers, you simply create a script file with diskpart commands and then run the script
on each server.
To access Diskpart, at a command prompt, enter diskpart, and then press Enter. When the prompt changes to the DISKPART> prompt, you can enter commands. You can also enter help at the prompt for a list of available commands.
To use Diskpart to manage a disk, you first must select the disk you want to work with. You can use List Disk to list available disks, where each is assigned a number. Then you can use Select Disk x, where x is the number of the disk you want to work with. If you enter List Disk again, the command output will have an asterisk next to the selected disk, as Figure 9 depicts:
Figure 9: Using Diskpart to list and select a disk
Similarly, if you want to work with an existing volume, use List Volume to obtain a list of volumes,
and then Select Volume to select the volume.
You can also use Diskpart to create and delete volumes. For example, if you want to create a
simple volume of 1,000 MB on disk 1, and assuming disk 1 is already selected and configured
as a dynamic disk, use the following sequence of commands:
List Disk
Select Disk 1
Create volume simple size=1000
In the previous example, if you’re working with a basic disk, you must substitute partition for
volume, and indicate what type of partition you want to create. If you want to create a primary
partition of 1,000 MB, use the following command:
Create partition primary size=1000
After the volume is created, you’ll need to install a file system on it using the format command.
For example, to format the selected volume with the NTFS file system, then perform a quick format
and assign the name Data to the volume, you would use the following command:
Format fs=NTFS quick label=Data
However, the volume won’t be usable until you assign it a drive letter or mount it to a folder. You
can use the assign command to do that. For example, to assign the drive letter M to the selected
volume, you would enter the following command:
Assign letter=m
You can also use Diskpart to convert a disk from basic to dynamic, dynamic to basic, MBR to GPT,
or GPT to MBR by using the Convert command, as in the following example:
Convert dynamic
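Windows PowerShell’s Storage module offers cmdlet equivalents for most of these Diskpart operations. The following is a minimal sketch of the same workflow; the disk number, size, label, and drive letter are hypothetical:

# List available disks (equivalent to List Disk)
Get-Disk

# Create a 1,000 MB partition on disk 1, assign drive letter M, and format it with NTFS
New-Partition -DiskNumber 1 -Size 1000MB -DriveLetter M |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"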
Demonstration: Manage volumes
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Extend and shrink a volume
When you extend volumes, how they can be extended is determined by the disk type on which the volume resides. However, shrinking volumes is virtually the same for both disk types: you make them smaller by reclaiming free space from the volume. Both extending and shrinking volumes are nondestructive, which means that all data on the volume remains intact.
Extend volumes
When you use dynamic disks, you can extend volumes by incorporating additional contiguous or
noncontiguous unallocated space. However, if you use basic disks, you can only extend volumes
onto contiguous space. If you try to extend the volume to noncontiguous space, you’ll be prompted
to convert the disk to a dynamic disk. You can only extend volumes if they’re formatted with NTFS
or ReFS. You can’t extend FAT volumes.
To extend a volume using the Disk Management console, simply right-click or access the context
menu of the existing volume, and then select Extend Volume. A wizard will ask you how much
space you want to add to the volume and from which disks.
Spanned volumes
If you extend a volume onto space on a different disk, the volume is a spanned volume. Creating
a spanned volume requires that all disks containing that volume are dynamic disks. Spanned
volumes are useful if all the space on a disk has been used up. You can simply install a new disk
and then extend an existing volume to that new disk. In Figure 10, volume G has been extended first to noncontiguous space on disk 2, and then extended again to space on disk 3, thereby creating a spanned volume:
Figure 10: Extending a volume to another disk to create a spanned volume
Note: Although spanned volumes are useful for utilizing space from multiple disks, they
aren’t fault tolerant. If you lose any of the disks making up the spanned volume, the entire
volume becomes inaccessible.
Shrink volumes
To reclaim unused space, you might want to shrink a volume. However, you can shrink only
volumes formatted with the NTFS file system. Shrinking works by removing the free space in the
volume starting at the end of the volume and working backward to the beginning of the volume.
However, two things can limit the amount of shrinking:
• Files marked as immovable. Certain files, such as the Windows paging file, are marked as immovable. If such a file is near the end of the volume, you won’t be able to shrink the volume beyond it. In some cases, you might be able to relocate those files to another volume before attempting to shrink it.
• Heavily fragmented volumes. If a volume is heavily fragmented, a fragment of a file may reside near the end of the volume. To rearrange the file fragments toward the beginning of the volume and free up space at the end of the volume, you can run the Windows Defragment and Optimize Drives app.
To shrink a volume in the Disk Management console, right-click or access the context menu of the
volume, and then select Shrink Volume.
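If you’d rather script these operations, the Storage module’s Resize-Partition cmdlet handles both extending and shrinking. A minimal sketch follows; drive letter E stands in for a hypothetical volume:

# Determine how far the volume can shrink or grow
$limits = Get-PartitionSupportedSize -DriveLetter E

# Shrink the volume to its minimum supported size
Resize-Partition -DriveLetter E -Size $limits.SizeMin

# Or extend the volume to use all available space
Resize-Partition -DriveLetter E -Size $limits.SizeMax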
What is RAID?
RAID utilizes multiple disks to improve performance and provide fault tolerance. You implement
RAID in hardware by installing a RAID controller in the server, or in software via the OS.
Hardware RAID is configured by the dedicated tool provided by the hardware vendor. After
hardware RAID is set up, it’s transparent to the Windows OS. Generally, hardware RAID will give
you better performance than software RAID, and additional features might be available, such as
hot swapping, where you can replace a failed disk without shutting down the system. However,
hardware RAID can be expensive.
Windows provides support for software RAID, which is configured through the OS. Software RAID
doesn’t require any specialized hardware and works with any type of disk or controller.
Implementing RAID in software, however, makes recovering from a disk failure a little more
difficult. Because RAID is implemented by the OS, a disk failure could prevent the OS from even
booting up. To help protect against that, you should implement RAID-1 (mirroring) to help protect
the disk on which the OS is installed. In a mirrored configuration, a copy of the boot and system
volumes is kept on two separate disks. If one of the disks fails, you can modify settings to boot from the other disk.
RAID levels overview
Different RAID configurations are designated with a number. For example, there’s RAID-0, RAID-1, RAID-5, RAID-6, RAID-10, and so on. Windows Server 2022 supports three RAID configuration levels: RAID-0, RAID-1, and RAID-5. Table 5 compares these RAID levels:
Table 5: RAID configuration levels

RAID level | Minimum number of disks | Maximum number of disks | Provides performance benefit? | Provides fault tolerance?
RAID-0 (striping) | 2 | 32 | Yes. Provides very good read and write performance. | No
RAID-1 (mirroring) | 2 | 2 | No | Yes
RAID-5 (striping with parity) | 3 | 32 | Yes. Provides excellent read performance and good write performance. | Yes
Create RAID volumes
You can create RAID volumes using the Disk Management console. However, remember that all RAID configurations require the use of dynamic disks. If you try to create one of these volumes on a basic disk, the wizard that creates volumes will prompt you to convert the basic disk to dynamic.
• RAID-0. To create a RAID-0 volume in the Disk Management console, right-click or access the context menu of an unallocated space on a disk, and then select New Striped Volume. The New Striped Volume Wizard takes you through steps to create the new volume, including choosing:
o The disks to use.
o The amount of space you want to allocate on each of the disks.
Note: The space you allocate must be equal on all disks.
o A file system.
o A drive letter.
• RAID-1. To set up RAID-1 with an existing volume, right-click or access the context menu of the volume, and then choose Add Mirror. A dialog box prompts you to specify the disk onto which you want to mirror the volume. To create a RAID-1 volume from unallocated space, right-click or access the context menu of the unallocated space on one disk and choose New Mirrored Volume. In this case, because you’re creating a new volume, the New Mirrored Volume Wizard prompts you to specify a size, a file system, and a drive letter to assign to the new volume.
• RAID-5. To set up RAID-5, right-click or access the context menu of the unallocated space on one disk and choose New RAID-5 Volume. The New RAID-5 Volume Wizard prompts you to choose a minimum of three disks that will host the volume. Because you’re creating a volume, it will also prompt you to provide the size, file system, and drive letter for the new volume.
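Software RAID can also be configured from the command line. As a minimal sketch, assuming disks 1 and 2 are dynamic and volume 1 is the simple volume you want to mirror, the following Diskpart commands add a mirror on disk 2:

List Volume
Select Volume 1
Add disk=2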
Lab 2: Manage disks and volumes in
Windows Server
Please refer to the online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions:
1. Which type of disk is faster: SATA or EIDE?
2. Which file systems provide security by allowing you to configure permissions for every file and
folder?
3. True or False: If you extend a volume, it will delete all data on the volume.
4. Which type of disk is more resilient: MBR or GPT?
5. What is SSD, and what are its benefits?
Note: To find the answers, refer to the Knowledge check slides in the PowerPoint
presentation.
Learn more
For more information, refer to:
• Microsoft Assessment and Planning Toolkit
• Windows Server servicing channels
Module 3: Implement enterprise
storage solutions
Storage is a critical component in any network, and the challenges posed by provisioning storage
for enterprise networks shouldn’t be underestimated. Over the years, Microsoft has provided
various storage solutions that IT staff could implement to address their organizations’ storage
needs.
This module describes some of these storage solutions, including direct-attached storage (DAS),
network-attached storage (NAS), and storage area networks (SANs). It also explains the purpose of
Microsoft Internet Storage Name Service (iSNS) Server, data center bridging (DCB), and Multipath
I/O (MPIO). Additionally, this module compares Fibre Channel, Internet Small Computer System
Interface (iSCSI), and Fibre Channel over Ethernet (FCoE), and describes how to configure sharing
in Windows Server.
By completing this module, you’ll achieve the knowledge and skills to:
• Describe DAS, NAS, and SANs.
• Compare Fibre Channel, iSCSI, and FCoE.
• Explain how to use iSNS, DCB, and MPIO.
• Configure sharing in Windows Server.
Lesson 1: Overview of direct-attached
storage, network-attached storage, and
storage area networks
For organizations with relatively few file servers, using DAS is a viable option. However, for larger organizations, and certainly for enterprise-level organizations, use of NAS and implementation of SANs is very likely. In this lesson, you’ll learn about these fundamental storage decisions.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe DAS, NAS, and SANs.
• Compare block-level storage and file-level storage.
What is DAS?
DAS, as the name suggests, directly attaches to a file server or is installed in a server computer’s
chassis. All servers contain at least some DAS, if only the disk that’s partitioned for startup and the
operating system (OS).
Note: These partitions are referred to as the System and Boot partitions, respectively.
There are several types of disks that you can attach locally, including Serial ATA (SATA), serial
attached SCSI (SAS), and solid-state drive (SSD). For small deployments, adding additional DAS to
servers isn’t uncommon, regardless of whether the storage is internal or external.
Note: It’s common to add external storage to file servers and connect using a universal
serial bus (USB).
Advantages of DAS
The main advantage of using DAS is simplicity. DAS connects to the server computer through a
host bus adapter (HBA) or USB, so there aren’t any complex network components to implement
and configure.
It’s also true that DAS is considerably less expensive to implement than network-connected
storage solutions, partly because DAS requires fewer components to implement. However, it’s also
because the skills required to implement DAS are widely understood and don’t require a specialist.
Typically, you simply attach a new disk, and then Windows Server recognizes that disk and prompts
you to configure partitioning and volumes on the disk.
Disadvantages of DAS
A significant disadvantage of DAS is that the server computer itself becomes a single point of
failure. If the server is unavailable, so is its attached storage.
Also, when adding additional disks, you must have physical access to the server. With servers
increasingly being managed remotely from datacenters, this physical access can be problematic.
What is NAS?
With NAS, you attach your storage to a dedicated and network-connected storage device. In other
words, the storage isn’t connected to a physical server computer but can be made available to one
or several server computers.
NAS supports two solutions:
• A low-end network appliance that supports only NAS.
• An enterprise network appliance that not only supports NAS, but which you can also, or alternatively, configure to support SAN.
A NAS appliance has a dedicated OS that’s tasked solely with providing access to the storage from
the network servers configured to use it.
Note: Windows Storage Server, a feature of Windows Server, is an example of NAS software.
After you physically deploy your NAS appliance and connect it to your network, you’ll need to
configure it and make the storage available. Typically, you must connect to the appliance across
the network, as these devices tend not to have screens or keyboards for local access. You’ll then
create the necessary shares to enable access to the storage that connects to the appliance.
Tip: NAS devices usually use file-level access over protocols such as Common Internet File
System (CIFS), Server Message Block (SMB), and Network File System (NFS).
Advantages of NAS
NAS is ideal when you require fast storage access at a relatively low cost. NAS offers performance
benefits that DAS doesn’t because the storage appliance manages file access on behalf of your
servers. The following list summarizes the advantages of using NAS versus DAS:

NAS is less expensive to implement than SAN solutions.

NAS typically supports larger capacities than DAS, so you can make more storage available.

NAS appliances support Redundant Array of Independent Disks (RAID) configurations to
provide for higher throughput and/or storage high availability.

NAS provides centralization of storage, which is beneficial because:
o
o
Files aren’t distributed across file servers, but rather in one place.
You can centrally manage your storage so there’s no need to connect to multiple file
servers to manage file shares.

NAS appliances can be accessed from almost any OS, so it’s possible to configure storage that
both Linux and Windows Server users can access at the same time.

NAS is typically available as a Plug and Play (PNP) solution, which makes it much easier to
deploy and provision than SAN solutions.
Disadvantages of NAS
NAS is an obvious improvement over DAS in most, if not all, respects. However, that’s not the
case when comparing it with SAN storage solutions, as there are some potential disadvantages,
including:

NAS relies heavily on network throughput, which isn’t a problem when you use NAS as a filesharing solution. However, for data-intensive apps, such as Microsoft SQL Server-based apps,
NAS isn’t an appropriate solution. Instead, you should consider using SAN.

NAS provides lower throughput than SAN. This is partly because of components used and how
they’re connected, but also because file-level access is used.
What’s a SAN?
SAN solutions are based on connectivity to a high-speed network. Components such as servers,
switches, and storage devices use this high-bandwidth connection to create a high-performance
storage solution. However, this high-speed connectivity comes at a cost, both monetary and in
terms of required technical skills for implementation.
In a typical SAN solution, your file servers can access a storage pool that’s facilitated by the SAN. The storage pool can be constructed from storage connected almost anywhere in your network. SANs use block-level access as opposed to file-level access, so instead of having to rely on file-access protocols, such as SMB and NFS, SANs write blocks of data directly to the disks. Typically, SANs use FCoE or iSCSI.
Note: Many SANs provide flexibility by also supporting NAS-type access.
Advantages of SAN
Perhaps the most obvious advantage of SANs compared to NAS is performance. With file-level
access, when a change is made to a file, even if that change is only a single data block in the file,
the entire file is updated. However, with SAN, only the modified block is updated. Additionally,
SANs offer several other advantages, including:
• Provision of distributed storage as a centralized pool.
• Storage that’s accessible from multiple operating systems.
• Higher data throughput when data is transferred from device to device, because it doesn’t need to pass through a server.
• The ability to build highly fault-tolerant storage solutions.
Disadvantages of SAN
There are two significant issues with using SANs. These are:
• SANs are relatively complex and often require specialist skills to deploy, provision, and maintain.
• SANs are more expensive than both DAS and NAS.
Tip: You can implement SANs by using several technologies, the most common of which are
Fibre Channel and iSCSI. We’ll discuss both in the next lesson.
Comparison and scenarios for usage
It’s important that you understand these storage technologies thoroughly, particularly if you’re
selecting your organization’s storage solution. Table 6 summarizes each of these storage technologies:
Table 6: Summary of storage solutions

Storage solution | Description
DAS | Is comparatively simple with low setup costs.
NAS | Is a complementary solution for both DAS and SAN. Ubiquitous in most organizations, so well understood.
SAN | Provides the highest performance and is a feature-rich solution.
When to use DAS
Based on its characteristics, you might be tempted to use DAS for single-server environments or
when you want to reduce costs. Both of these are sound considerations. However, you might also
consider using DAS to support departmental storage needs within a large organization, such as for
a database app that’s being used in a specific department. In this situation, the app management
team might prefer to have local control over their storage needs. DAS is ideally suited to this need.
When to use NAS
Often the difference between NAS and SAN is somewhat obscure, largely because many SAN
vendors provide file-level access and block-level access. As a result, organizations can provision
their storage using CIFS, SMB, and NFS. This means that in most large organizations, NAS shares
the same storage appliances, disk shelves, and network infrastructure as SAN.
When to use SAN
SAN provides the best storage solution for enterprise organizations, both in terms of performance and reliability. However, it’s worth noting that improvements in disk technology mean that for some scenarios, DAS can offer similar throughput.
Given the blurring of the lines between these storage solutions, perhaps the most compelling
reason to choose SAN over any other storage solution is its expandability. Whereas DAS solutions
support capacities of up to, perhaps, hundreds of terabytes (TBs), SANs can provide thousands of
TBs of storage space.
To summarize, SANs offer:
• Flexibility. SANs provide both NAS and SAN features, enabling administrators to choose the appropriate access method for a given scenario.
• Manageability. SANs usually offer a unified management interface.
Table 7 provides some context for when to use which solution in a given scenario:
Table 7: When to use which storage solution

Scenario | DAS | NAS | SAN
Transactional databases requiring high performance | Low cost, but good performance. However, might require additional administration in large enterprise organizations. | Not suitable for most database applications. | An ideal choice for transactional databases because it offers very high performance.
Virtual machine (VM) storage | Administrative overhead is higher than when using SANs, but provides good performance. | A good option when trying to reduce costs and keep things simple. | Excellent performance makes this the preferred choice for this workload.
Branch-office shared folders | Low cost and simple deployment make this the best choice for this workload. | Relatively simple deployment and fairly low cost. Often makes a good choice for this workload. | More expensive than other solutions, and complexity makes it a less appropriate choice.
Tiered storage for apps | Suitable for scenarios that have a limited implementation budget. | Some solutions are suitable, especially those that include Scale-Out File Server (SOFS) with Storage Spaces and tiering. | The most flexible solution for this workload. Includes features such as built-in tiering, caching, and other performance benefits.
Microsoft Exchange database and log storage | Low cost and very good performance make this a viable choice for this workload. | Not an appropriate choice. | Excellent performance makes this the preferred choice for this workload, where budget permits.
Block-level storage compared with file-level storage
When applications write to disks, the data is stored in files. These files comprise one or more
blocks of disk space.
Tip: You define block size when you format a volume on a disk.
Typically, when an application updates a file, the entire file is overwritten, and all blocks that
pertain to the updated file are overwritten. This process is known as file-level storage.
For greater efficiency, it’s typical when working with SANs (and some NAS appliances) to use
block-level storage. In this scenario, when an application updates a file, only changed blocks are
overwritten, while unchanged blocks aren’t updated. When working with files that span multiple
blocks in your storage, using block-level storage can significantly improve performance. Let’s
examine these two approaches in more detail.
File-level storage
When working with NAS, the CIFS and NFS protocols are usually implemented, although SMB is also frequently used. NAS appliances rely on file-level storage, which has the following characteristics:
• Is often less expensive than block-level storage.
• File-sharing protocols (CIFS, SMB, and NFS) provide and manage access to file-level storage.
• Is implemented on top of block-level storage.
• Volumes using file-level storage are formatted with a file system.
• Not all applications support file-level storage, although most do.
Block-level storage
As we’ve learned, block-level storage is implemented in SANs. Typically, you’ll use protocols such
as iSCSI, Fibre Channel, or FCoE. You usually create your volumes from chunks of block-level
storage. Within these volumes, you create logical unit numbers (LUNs), which are virtual storage
areas. After creating the LUNs, you can make them available to your servers, which identify the
LUNs as physical disks. A server administrator then creates a Windows Server volume on the
“physical disk” and formats it using NTFS or ReFS.
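From the server administrator’s perspective, preparing a SAN LUN is much like preparing a local disk. As a minimal sketch (disk number 3 and the labels are hypothetical), once the LUN appears as an offline disk, you could bring it into service from Windows PowerShell like this:

# Bring the newly presented LUN online and clear its read-only flag
Set-Disk -Number 3 -IsOffline $false
Set-Disk -Number 3 -IsReadOnly $false

# Initialize it, then create and format a volume in one step
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Volume -DiskNumber 3 -FriendlyName "AppData" -FileSystem NTFS -DriveLetter S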
Block-level storage has the following characteristics:
• Provides flexibility for your server administrators, enabling them to use the storage as a data volume, an OS volume, or a volume to host shared folders.
• Is independent of the OS of the server that’s accessing the storage.
• Supports file server startup, meaning you can deploy diskless servers that rely on block-level storage LUNs for the OS disks (System and Boot).
• Provides direct access from VMs to support high-performance virtualized workloads.
Lesson 2: Compare Fibre Channel, iSCSI,
and Fibre Channel over Ethernet
This lesson describes Fibre Channel and factors to consider when implementing it, and introduces
a role service in Windows Server that enables organizations to create storage through the iSCSI
protocol.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe and compare Fibre Channel, iSCSI, and FCoE.
• Describe core storage components.
• Configure iSCSI.
What is Fibre Channel?
Fibre Channel has a high-performance architecture that provides connectivity between your servers and a SAN. Fibre Channel is implemented over the Fibre Channel protocol, so in essence, it’s implementing SCSI commands on a network. A Fibre Channel implementation consists of the following components:
• A SAN. This represents the storage backend and listens for computer requests.
• A computer installed with an HBA card. This is the initiator, and it initiates requests when apps require access to data that’s accessible on the SAN.
• A Fibre Channel switch. Switches are used so that servers don’t need a direct connection to the underlying storage on the SAN.
FCoE is a more recent implementation of Fibre Channel, and it enables you to implement Fibre Channel over your existing Ethernet network infrastructure while providing excellent performance. This makes it less expensive. Using FCoE provides three advantages:
• It’s easier to implement a single network technology.
• You can use most generic network troubleshooting tools, so you don’t need to learn about tools that are specific to storage.
• You don’t usually need any specific training about the technology.
When deploying Fibre Channel, you can choose between three topologies:
• Arbitrated loop. Fibre Channel uses a ring to connect storage devices and requires no switches. This topology is seldom used these days, as switches are much less expensive than they used to be.
• Point-to-point. Fibre Channel hosts connect directly to a storage device, negating the need for switches. Again, this is an unusual configuration nowadays, as switches are inexpensive.
• Switched fabric. This is the most common topology. In switched fabric environments, Fibre Channel switches are used. All hosts connect to the deployed switches, and the switches connect to the backend storage.
Considerations for implementing Fibre Channel
When you’re planning to implement Fibre Channel, you should consider four factors carefully. These are:
• Infrastructure requirements.
• Storage-bandwidth requirements.
• Connectivity reliability and security.
• Asset and administrative costs.
Infrastructure
You’ll need quite an extensive list of components to implement Fibre Channel, including:
• Fabric or network switches. If your network consists of fiber-optic cabling only, you’ll probably use Fibre Channel switches. In networks comprising a mix of cabling types, any switches you choose must be capable of supporting different traffic types over the various media. You’ll normally dedicate these switches to the Fibre Channel network. This provides performance and security benefits but does come at an increased cost.
• HBAs. Any hosts that must connect to the storage on the FCoE network require an HBA, either as a physical card or as a component enabled on the computer’s motherboard. Each host needs at least one dedicated HBA, but ideally, you should have two to provide redundancy.
• Cabling. The fundamental building block of any network, including Fibre Channel. You’ll need fiber-optic or Ethernet cabling, or both. You can use Fibre Channel with several cable types, the most common of which are:
o Single-mode fiber optic.
o Multi-mode fiber optic.
o Ethernet: Fibre Channel over Ethernet, Fibre Channel over IP, and Ethernet over copper.
• Storage controllers. These manage the communications with backend storage.
Bandwidth
A significant benefit of using Fibre Channel is the additional bandwidth and reliability it can
provide in your storage architecture. When choosing whether to implement Fibre Channel, this
performance improvement can be a major factor.
Reliability and security
Fibre Channel packets are received in a specific sequence, unlike with generic TCP-based
protocols and solutions. This sequencing improves throughput significantly. Additionally, you
typically implement Fibre Channel deployments on a dedicated network infrastructure, so this
separation from the rest of the corporate infrastructure means that security is improved because
the Fibre Channel infrastructure is less susceptible to network attacks.
Costs
Fibre Channel does require personnel with specialized skills to implement it, so costs can be
higher than with other storage solutions. Also, the additional infrastructure has a significant cost
attached to it.
What is iSCSI?
iSCSI is a standard that defines access to storage devices by using the TCP/IP protocol. Windows Server includes the iSCSI Target Server role service that enables computers to access storage connected over the network. The iSCSI implementation in Windows Server has many usage scenarios, such as network and diskless boot, server-application storage, heterogeneous storage, and as an environment for development and testing.
iSCSI is a protocol that supports access to SCSI-based storage devices over a TCP/IP network.
iSCSI carries standard SCSI commands over IP networks to facilitate data transfers and manage
storage over a network. You can use iSCSI to transmit data on any network that works with TCP/IP
protocol, including local area networks (LANs), WANs, an intranet, and the internet.
iSCSI relies on the standard Ethernet networking infrastructure, where a separate and dedicated
network infrastructure is optional, and it typically connects on TCP port 3260. iSCSI protocol
enables two hosts to negotiate parameters such as session establishment, flow control, and
packet size, and then exchange SCSI commands by using an existing Ethernet network. By doing
this, iSCSI takes a popular and high-performing local storage-bus subsystem architecture and
emulates it over networks, thereby creating a SAN.
However, unlike some SAN solutions that connect to servers using special hardware, such as fiber-optic adapters and cables, iSCSI requires no specialized cabling and can run over existing switching and IP infrastructure. You should evaluate your organization’s network performance and, if needed, deploy iSCSI storage on a dedicated network.
An iSCSI deployment includes the following components:
• IP network. You can use standard network interface adapters and standard Ethernet protocol network switches to connect the servers to the storage device. To provide sufficient performance, the network should provide speeds of at least 1 gigabit per second (Gbps) and should provide multiple paths to the iSCSI target for high availability.
• iSCSI targets. iSCSI targets present or advertise storage, similar to controllers for hard-disk drives of locally attached storage. However, servers access this storage over a network, rather than locally. Many storage vendors implement hardware-level iSCSI targets as part of their storage device's hardware. Windows Server includes the iSCSI Target Server role service that you can install as a component of the File and Storage Services role, and which enables organizations to make storage available via the iSCSI protocol.
• iSCSI initiators. The iSCSI target displays storage to the iSCSI initiator (also known as the client). The iSCSI initiator service enables computers to connect to the iSCSI target's disk storage over the network by using the iSCSI protocol. Windows Server and client operating systems include the iSCSI initiator service.
• iSCSI Qualified Name (IQN). IQNs are unique identifiers that iSCSI uses to address initiators and targets on an iSCSI network. When you configure an iSCSI target, you must configure the IQN for the iSCSI initiators that'll be connecting to the target. iSCSI initiators also use IQNs to connect to the iSCSI targets. However, if name resolution on the iSCSI network is an issue, you can always identify iSCSI endpoints (both target and initiator) by their IP addresses.
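For example, you can check the IQN that the local initiator will present to a target by querying the initiator port from Windows PowerShell. This is a quick sketch, assuming the Microsoft iSCSI initiator service is in use:
Get-InitiatorPort | Select-Object -Property NodeAddress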
iSCSI components
The main components of iSCSI protocol include the iSCSI Target Server, iSCSI initiator, and the
Internet Storage Name Service (iSNS).
iSCSI Target Server
The iSCSI Target Server role service provides for a software-based iSCSI disk solution. You can use
the iSCSI Target Server to create both iSCSI targets and iSCSI virtual disks, and then use Server
Manager to manage these iSCSI targets and virtual disks. In Windows Server, the iSCSI Target
Server is available as a role service under the File and Storage Services role in Server Manager.
The features of the iSCSI Target Server in Windows Server include:
• Challenge Handshake Authentication Protocol (CHAP). Enable CHAP to authenticate initiator connections or enable reverse CHAP to enable the initiator to authenticate the iSCSI target.
• Query initiator computer for ID. Enables you to select an available initiator ID from the list of cached IDs on the target server. It's important to note that you must use a supported version of Windows or Windows Server to utilize this feature.
• Virtual hard-disk support. You create iSCSI virtual disks as VHDs. Windows Server supports both VHD and VHDX files, and the latter supports up to 64 TB of capacity. You create new iSCSI virtual disks as VHDX files, but you also can import VHD files.
The maximum number of iSCSI targets per target server is 256, and the maximum number of VHDs per target server is 512. You can manage the iSCSI Target Server by using Server Manager or Windows PowerShell.
In hosted and private clouds, you can manage an iSCSI Target Server by using the Storage Management Initiative Specification (SMI-S) provider with Microsoft System Center Virtual Machine Manager.
If you want to manage iSCSI Target Server with Windows PowerShell, you can use several
commands, including:
Install-WindowsFeature FS-iSCSITarget-Server
New-IscsiVirtualDisk E:\iSCSIVirtualHardDisk\1.vhdx -Size 100GB
New-IscsiServerTarget SQLTarget -InitiatorIds "IQN:iqn.1991-05.com.microsoft:SQL1.Contoso.com"
Add-IscsiVirtualDiskTargetMapping SQLTarget E:\iSCSIVirtualHardDisk\1.vhdx
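Building on these examples, you can also require CHAP authentication on the target. The following sketch assumes the SQLTarget created in the preceding commands and prompts for the CHAP user name and secret:
# Prompt for the CHAP credential, then require CHAP on the target
$chap = Get-Credential -Message "CHAP user name and secret"
Set-IscsiServerTarget -TargetName SQLTarget -EnableChap $true -Chap $chap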
iSCSI initiator
The iSCSI initiator is installed by default in all supported Windows OS versions. To connect your
computer to an iSCSI target, you simply need to start the service and configure it.
The following Windows PowerShell cmdlets are examples of how you’d manage an iSCSI initiator:
Start-Service msiscsi
Set-Service msiscsi -StartupType "Automatic"
New-IscsiTargetPortal -TargetPortalAddress iSCSIServer1
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:netboot-1-SQLTarget-target"
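To have the connection re-established automatically after a restart, you'd typically make it persistent. As a sketch, reusing the target name from the preceding example:
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:netboot-1-SQLTarget-target" -IsPersistent $true
Get-IscsiSession | Select-Object -Property TargetNodeAddress, IsPersistent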
Considerations for implementing iSCSI
Before implementing a storage solution based on iSCSI protocol, you should carefully consider
several factors, including performance, data capacity, and security, and which servers will be using
the iSCSI storage and what type of physical disks you’ll deploy. The main considerations during the
planning process include:
• Network speed and performance. Network speed should be at least 1 Gbps. However, in many cases, you can deploy iSCSI networks in a datacenter with bandwidths of 10 Gbps, 40 Gbps, or even 100 Gbps.
• High availability. The network infrastructure must be highly available because data is sent from the servers to the iSCSI storage over network devices and components. If network components fail, data traffic must be rerouted to another network path without disrupting user experience and network performance.
• Security. iSCSI storage transfers data over a network, so organizations must ensure that the network is protected from different types of intrusions. In organizations where you must deploy a higher level of security, you can implement a dedicated network for iSCSI traffic, which includes iSCSI authentication.
• Vendor information. Read the vendor-specific recommendations to review the different types of deployments and applications that use iSCSI storage, such as Microsoft Exchange Server and Microsoft SQL Server.
• Infrastructure staff. IT personnel who design, configure, and administer iSCSI storage must include IT administrators with different areas of specialization, such as Windows Server, network, storage, and security administrators. This helps you design an iSCSI storage solution with optimal performance and security and create consistent procedures for management and operations.
• Application teams. The design team for an iSCSI storage solution should include application-specific administrators, such as Exchange Server and SQL Server administrators, so you can implement the optimal configuration for the specific technology or solution.
You also should investigate competitive solutions to iSCSI and determine if they better meet your
business requirements. Other alternatives to iSCSI include Fibre Channel, FCoE, and InfiniBand.
iSCSI usage scenarios
You can implement an iSCSI storage solution in multiple scenarios, depending on your
organization’s business requirements. The most common iSCSI usage scenarios include:
• Network and diskless boot. Allows organizations to deploy diskless servers by using either boot-capable network adapters or a software loader. If an application supports VHDs, organizations can save up to 90 percent of the storage space used for OS images. For example, you can use differential virtual disks in large deployments of identical OS images, such as on Hyper-V VMs or high-performance computing (HPC) clusters.
• Server application storage. Certain applications require block-storage use. The iSCSI Target Server can provide these applications with continuously available block storage. Because the storage is accessible remotely, it can also combine block storage for central and branch-office locations.
• Heterogeneous storage. iSCSI Target Server supports iSCSI initiators that aren't based on Windows operating systems, so you can share storage on servers that are running Windows in mixed environments.
• Lab environments. The iSCSI Target Server role service enables your Windows Server computer to be a network-accessible block-storage device, which is useful when you want to test applications prior to deploying them on SAN storage.
In addition to configuring the default iSCSI Target Server and iSCSI initiator settings, you can integrate these services into more advanced configurations. For example, most organizations will need to deploy high availability for the network infrastructure that connects to the iSCSI storage solution.
Demonstration: Configure an iSCSI target
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Lesson 3: Understanding iSNS, data center
bridging, and MPIO
Large organizations often need advanced storage features, which can help simplify storage
management. It’s important you understand the components of these advanced storage features.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe iSNS, Data Center Bridging, and MPIO.
• Configure MPIO.
What is iSNS?
iSNS is a protocol that iSCSI initiators use to discover iSCSI targets. The iSNS Server service feature in Windows Server provides storage discovery and management services on a standard IP network, facilitating the integration of IP networks and the management of iSCSI devices.
The iSNS server includes the following functionalities:
• Contains a repository of active iSCSI nodes.
• Contains iSCSI nodes that can be initiators, targets, or management nodes.
• Allows initiators and targets to register with the iSNS server. Initiators then query the iSNS server for the list of available targets.
• Contains a dynamic database of the iSCSI nodes. The database provides the iSCSI initiators with iSCSI target discovery functionality. The database updates automatically by using the Registration Period and Entity Status Inquiry features of iSNS. Registration Period allows iSNS to delete stale entries from the database. Entity Status Inquiry is similar to the ping command. It allows iSNS to determine whether registered nodes are still present on the network, and it enables iSNS to delete database entries that aren't active.
• Provides State Change Notification Service. Registered clients receive notifications when changes occur to the database in the iSNS server. Clients keep their information about the iSCSI devices available on the network up to date with these notifications.
• Provides Discovery Domain Service. You can divide iSCSI nodes into one or more groups called discovery domains, which provide zoning so that an iSCSI initiator can only refer, and connect, to iSCSI targets in the same discovery domain.
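As a brief sketch of how this fits together, the following installs the iSNS Server service feature on a server and then registers an iSNS server address with the local initiator by using the legacy iscsicli tool. The feature name and the address 172.16.10.5 are assumptions to verify in your environment:
Install-WindowsFeature ISNS
iscsicli AddiSNSServer 172.16.10.5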
What is Data Center Bridging?
Data Center Bridging is a collection of standards-based networking technologies defined by The
Institute of Electrical and Electronics Engineers Inc. (IEEE) that enables support for the coexistence
of LAN-based and SAN-based applications over the same networking fabric within a datacenter.
Data Center Bridging uses hardware-based bandwidth allocation and priority-based flow control
instead of the OS having to manage traffic itself.
Windows Server supports Data Center Bridging by installing the Data Center Bridging feature.
The advantage of using Data Center Bridging is that it can run all Ethernet traffic, including traffic
going to and from your Fibre Channel or iSCSI SANs. This saves on datacenter cabling, network
equipment, server space, and power. Data Center Bridging is also referred to as Data Center
Ethernet, Converged Enhanced Ethernet, or converged networking.
Data Center Bridging requires compatible network adapters and switches, and you can manage it by using Windows PowerShell. When you install the Data Center Bridging feature, you can use the cmdlets in three Windows PowerShell modules: NetQos, DcbQos, and NetAdapter.
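For instance, a minimal sketch of reserving bandwidth for iSCSI traffic with these cmdlets might look like the following. Priority 4, the 30 percent reservation, and the adapter name are illustrative values, and your switches must be configured to match:
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "iSCSI" -iSCSI -PriorityValue8021Action 4
New-NetQosTrafficClass "iSCSI" -Priority 4 -BandwidthPercentage 30 -Algorithm ETS
Enable-NetQosFlowControl -Priority 4
Enable-NetAdapterQos -Name "SLOT 2"   # hypothetical adapter name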
What is MPIO?
Features that enable organizations to deploy redundant network connections include Multiple
Connected Session (MCS) and MPIO. Although similar in the results that they achieve, these two
technologies use different approaches to attain high availability for iSCSI storage connections.
MCS provides the following functionalities:
• Enables multiple TCP/IP connections from the initiator to the target for the same iSCSI session.
• Supports automatic failover. If a failure occurs, all outstanding iSCSI commands are automatically reassigned to another connection.
• Requires explicit support by iSCSI SAN devices.
MPIO provides the following functionalities:
• When multiple network interface cards and multiple network cables are installed on computers with iSCSI initiator and iSCSI Target Server roles, you can use MPIO to provide failover redundancy during network outages.
• MPIO requires a device-specific module (DSM) when you want to connect to a third-party SAN device that's connected to the iSCSI initiator. The Windows OS includes a default MPIO DSM that's installed as the MPIO feature within Server Manager.
• MPIO is widely supported. Many SANs can use the default DSM without any additional software, while others require a specialized DSM from the manufacturer.
• MPIO is more complex to configure, and it isn't as fully automated during failover as MCS.
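As a hedged sketch, enabling MPIO for iSCSI from Windows PowerShell might look like this. A restart is typically required before iSCSI disks present as a single multipath device, and the round-robin policy is just one option:
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR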
Demonstration: Configure MPIO
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lesson 4: Configure sharing in Windows
Server
Accessing data on a remote server over a network is a common operation in the Windows environment. To grant users access to remote resources, you must first create a share. By default, Windows computers access shares by using SMB protocol. However, if you need to access resources on a Windows Server computer from Linux or Mac, you can install support for sharing and accessing NFS shares.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe and configure SMB and SMB shares.
• Describe and configure NFS and NFS shares.
What is SMB?
If you want to access data over a network, computers must connect to the network and use a
common network protocol. Most devices, including Windows Server, use TCP/IP protocol. Server
Message Block (SMB) is a client/server protocol that can run on top of TCP/IP and enables you
to communicate with remote computers, and then access and work with their resources, such as
shares and files. SMB was developed in 1983, and Microsoft included it as part of the Network
Basic Input/Output System (NetBIOS). Since then, SMB has improved, and has become more
secure, especially since version 3.0, which was included in Windows Server 2012. SMB is constantly developing and improving. Windows Server 2022 supports SMB 3.1.1 and older versions of SMB protocol down to version 2.0. SMB 1.0 isn't installed in Windows Server 2022 by default.
Windows uses SMB protocol for accessing resources, such as shares, files, and printers, over a network on a remote computer. SMB protocol defines a series of steps, such as establishing a NetBIOS session, negotiating the SMB protocol dialect, and connecting to a share, that must occur between the client and server before you can access data on the remote computer. Files that are stored on the remote computer and accessed over SMB protocol seem to applications as if they're stored locally. For example, Microsoft Hyper-V can run VMs from an SMB share.
SMB protocol features
SMB protocol in Windows Server 2022 includes the following features:
• SMB Transparent Failover. This feature enables you to perform the hardware or software maintenance of nodes in a clustered file server without interrupting server applications that are storing data on file shares.
• SMB Scale-Out. By using Cluster Shared Volumes, you can create file shares that provide simultaneous access to data files, with direct input/output (I/O), through all the nodes in a file server cluster. The Scale-Out File Server cluster role uses this feature.
• Cluster Dialect Fencing. Cluster Dialect Fencing provides support for cluster rolling upgrades for the Scale-Out File Server feature.
• SMB Multichannel. This feature enables you to aggregate network bandwidth and network fault tolerance if multiple paths are available between the SMB client and server.
• SMB Direct. This feature supports network adapters that have the Remote Direct Memory Access (RDMA) capability and can perform at full speed with very low data latency and by using little CPU processing time.
• SMB Encryption. This feature provides the end-to-end encryption of SMB data on untrusted networks and helps to protect data from eavesdropping.
• SMB compression. SMB protocol automatically compresses files as they're transferred over the network. Because of this, you no longer need to manually compress (zip) files to minimize network traffic and transfer files faster on slower or more congested networks.
• Volume Shadow Copy Service (VSS) for SMB file shares. To take advantage of VSS for SMB file shares, both the SMB client and the SMB server must support at least SMB 3.0.
• SMB Directory Leasing. This feature improves branch office application response times. It also reduces the number of round trips from the client to the server as metadata is retrieved from a longer-living directory cache.
• SMB over QUIC. This feature is included only in Azure Edition of Windows Server 2022 and updates the SMB 3.1.1 protocol. It enables supported Windows clients to use the QUIC protocol instead of Transmission Control Protocol (TCP) to securely and reliably access data on file servers running in Azure.
• Windows PowerShell commands for managing SMB. You can manage file shares on the file server, end to end, from the command line.
SMB versions
SMB protocol is backward-compatible. Windows Server 2022 supports SMB 3.1.1 and tries to negotiate this version when communicating with a client. If the client doesn't support SMB 3.1.1, Windows Server tries to negotiate an older version, down to SMB 2.0. The client can be another Windows Server, a Windows 11 client, an older legacy client, or a network-attached storage (NAS) device. In Windows Server 2022, support for SMB 1.0 is disabled by default.
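To confirm which dialect a given connection actually negotiated, you can inspect active SMB connections. For example, from a client computer:
Get-SmbConnection | Select-Object -Property ServerName, ShareName, Dialect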
Table 8 lists the highest SMB version supported in Microsoft operating systems:
Table 8: Supported SMB versions

Operating system | SMB version
Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows 11, and Windows 10 | SMB 3.1.1
Windows Server 2012 R2, Windows 8.1 | SMB 3.0.2
Windows Server 2012, Windows 8 | SMB 3.0
Windows Server 2008 R2, Windows 7 | SMB 2.1
How to configure SMB shares
When you share a folder, you make its content available on the network. You can limit who can access the shared folder by configuring share permissions. Additionally, you can limit the number of users who can access the share at the same time and specify whether an offline copy of the files that users open will be created automatically on their computers.
Shared folders maintain a separate set of permissions from the file system permissions, which
means that you can set share permissions even if you share a folder on the FAT file system.
The same share permissions apply to all shared content. This behavior is different from file
system permissions, where you can set permissions for each file individually. You can use these
permissions to configure an additional level of security for files and folders that you make
available on the network. You can share the same folder multiple times, by using a different share
name and other share settings for each share.
In Windows Server, you can create and configure file shares with several different tools, such as
File Explorer, Server Manager, Windows Admin Center, or Windows PowerShell.
Note: The terms file share and SMB share refer to the same resource.
SMB share profiles
When you use Server Manager to create a new SMB share, you can select one of the following
SMB share profiles:
• Quick. This profile doesn't have any prerequisites and asks only for basic configuration parameters when sharing a folder. After selecting the share location and share name, you can configure share settings, such as access-based enumeration, share caching, encrypted data access, and permissions.
• Advanced. This profile requires the File Server Resource Manager role service to be installed. It provides the same configuration options as the Quick profile, in addition to more options such as folder owners, default data classification, and quotas.
• Applications. This profile doesn't have any prerequisites. The profile creates an SMB share with settings appropriate for Hyper-V, databases, and other server applications. Unlike the Quick and Advanced profiles, you can't configure access-based enumeration, share caching, default data classification, or quotas when you're using the Applications profile to create a share.
Figure 11 depicts these file share profiles:
Figure 11: File share profiles in Server Manager
Table 9 lists the configuration options that are available with each SMB share profile:
Table 9: SMB configuration options

Profile | Permissions | Access-based enumeration | Share caching | Encrypted data access | Data classification | Quotas
Quick | Yes | Yes | Yes | Yes | No | No
Advanced | Yes | Yes | Yes | Yes | Yes | Yes
Applications | Yes | No | No | Yes | No | No
SMB share properties
As Figure 12 depicts, when you use Advanced sharing in File Explorer to create a new SMB share, you can configure the following basic properties:
• Share name. Each share must have a share name that's unique on the server. The share name can be any string that doesn't contain special characters and is part of the Universal Naming Convention (UNC) path that Windows users use when connecting to a share. You can share the same folder multiple times and with different properties, but each share name must be unique.
Note: If the share name ends with a dollar sign ($), the share is hidden and not visible when browsing the network. However, you can connect to it if you know the share name and have appropriate permissions.
• Number of simultaneous users. This limits the number of users that can have an open connection to the share. The connection to the share is open when a user accesses the share for the first time, and it closes automatically after a period of inactivity.
• Caching/offline settings. You can control whether the files in the share are cached and available offline on the user's computer. You can configure files to:
o Cache on the client computer automatically when a user has network connectivity and opens them for the first time.
o Cache offline, only if the user manually configures this and has the necessary permissions.
o Not cache at all.
• Permissions. You can configure share permissions, which Windows uses in conjunction with the file system permissions when accessing data over a network. Share permissions can allow Read, Change, or Full Control permissions.
Figure 12: File share properties in File Explorer
Note: If you try to use a share name that’s already in use on the computer, Windows Server
provides you with an option to stop sharing an old folder and use the share name for sharing
the current folder. If you rename a folder that’s currently shared, you don’t receive a
warning. However, the folder is no longer shared.
SMB share permissions
When you share a folder, you must configure the permissions that a user or group will have when they connect to the SMB share. These are called share permissions, and there are three options:
• Read. Users can review content, but they can't modify or delete it.
• Change. Users can review, modify, delete, and create content, but they can't modify permissions. Change permission includes Read permission.
• Full Control. Users can perform all actions, including modifying the permissions. Full Control includes Change and Read permissions.
If you use basic sharing, by right-clicking or accessing the context menu of the folder in File Explorer and selecting the Give access to option, permissions are simplified to the following options:
• Read. This option allows users to observe but not modify content.
• Read/Write. This option is the equivalent of Full Control permissions. Users can open, modify, or delete a file, and modify permissions.
Note: Share permissions apply only to users when accessing a folder over the network. They
don’t apply to users who access a folder locally on the computer that stores the folder.
Configure SMB shares by using PowerShell
Windows PowerShell includes the SmbShare module, which contains 44 cmdlets for managing
SMB shares. This includes commonly used cmdlets, such as New-SmbShare, Set-SmbShare, and
Remove-SmbShare. If you use the SmbShare cmdlets, you can configure any share properties,
even those that aren’t available in File Explorer and Server Manager.
The following example depicts how to create an SMB share by sharing D:\Folder1 as a share
named Share1 by using PowerShell:
New-SmbShare -Name Share1 -Path D:\Folder1
You can review all SMB share properties for the share name Share1 by using the following cmdlet:
Get-SmbShare -Name Share1 | Format-List -Property *
You can review the list of all cmdlets for managing SMB shares by running the following cmdlet:
Get-Command -Module SmbShare
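As a slightly fuller sketch, you can set share permissions and require encryption when you create the share. The share, path, and account names here are hypothetical:
New-SmbShare -Name Share2 -Path D:\Folder2 -FullAccess "Contoso\Admins" -ChangeAccess "Contoso\Sales" -EncryptData $true
Grant-SmbShareAccess -Name Share2 -AccountName "Contoso\Auditors" -AccessRight Read -Force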
Demonstration: Configure SMB shares by using Server
Manager and Windows PowerShell
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
What is NFS?
Windows Server uses SMB protocol for sharing and accessing shared resources by default. But SMB isn't the only protocol that provides access to network resources. NFS is a distributed file system protocol that provides similar functionality and is mostly used by UNIX and Linux systems. It's based on open standards and allows access to a file system over a network. NFS was introduced by Sun Microsystems in 1984 and has been actively developed since then; its current version is 4.2 (as of August 2022).
Note: NFS provides access to exports. Exports are similar to SMB shares in Windows; they
are shared UNIX file system paths.
Windows Server doesn’t include support for NFS by default. If you want to use the NFS protocol for
transferring files between Windows and non-Windows systems, such as Linux or UNIX, you need to
install support for NFS in Windows Server. As Figure 3 depicts, the two components for NFS
support in Windows are:

Client for NFS. This feature enables a Windows Server to access NFS exports on an NFS server,
regardless of the OS on which NFS server is running. Client for NFS supports NFSv2 and
NFSv3.

Server for NFS. This role service enables a Window Server to share folders and make them
available over NFS. Any compatible NFS client can access the shares, regardless of the OS
on which the NFS client is running. Server for NFS supports NFSv2, NFSv3, and NFSv4.1.
Figure 13: Adding the Server for NFS role service and Client for NFS feature in Server Manager
NFS components in Windows Server have been improved and expanded in every Windows Server
version. Server for NFS includes support for Kerberos protocol version 5 (v5) authentication.
Kerberos protocol v5 authentication provides authentication before granting access to data. It
also uses checksums to prevent data tampering. Server for NFS supports NFS version 4.1, which
includes improved performance with the default configuration, native Windows PowerShell
support, and faster failovers in clustered deployments.
NFS sharing usage scenarios
You can use NFS in Windows Server in many scenarios. Common scenarios include:
• VMware VM storage. In this scenario, VMware hosts VMs on NFS exports. You can use Server for NFS to host the data on a Windows Server computer.
• Multiple OS environments. In this scenario, your company uses a variety of operating systems, including Windows, Linux, and Mac. The Windows file-server system can use Server for NFS and the built-in Windows sharing capabilities to ensure that all the operating systems can access shared data.
• Merger or acquisition. In this scenario, two companies are merging. Each company has a different file server infrastructure. Users in one company use NFS, while the other uses SMB to access shared data. By implementing Client for NFS and Server for NFS, users can access data on file servers in both companies.
How to configure NFS shares
Before you can create an NFS share on Windows Server, you must install the Server for NFS role
service. After you install the role service, you can use Server Manager or PowerShell to create the
NFS share. With Server Manager, you can create NFS shares similarly to creating SMB shares by
using NFS Share - Quick or NFS Share – Advanced profiles. While the Quick profile doesn’t have
additional prerequisites, the Advanced profile requires the File Server Resource Manager role
service to be installed.
Windows PowerShell includes the NFS module, which contains 42 cmdlets for managing NFS shares. This includes cmdlets such as New-NfsShare, Set-NfsShare, and Remove-NfsShare.
You can review a list of all cmdlets for managing NFS shares by running the following cmdlet:
Get-Command -Module NFS
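As a brief sketch, the following installs Server for NFS and publishes a folder as an export. The share name and path are hypothetical:
Install-WindowsFeature FS-NFS-Service
New-NfsShare -Name "Export1" -Path "D:\Exports\Export1" -Permission ReadWrite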
Demonstration: Configure an NFS share by using Server
Manager
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lab 3: Plan and configure storage
technologies and components
Please refer to our online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions:
1. Name the client and the server component of the iSCSI protocol.
2. Name two usage scenarios for iSCSI storage.
3. Which Windows Server role service is required to enable you to create SMB shares with the
advanced profile?
4. What must you install if you want to create an NFS share on Windows Server?
Note: To find the answers, refer to the Knowledge check slides in the accompanying
PowerPoint presentation.
Module 4: Implement Storage
Spaces and Data Deduplication
Windows Server provides several storage technologies, including Storage Spaces and Data
Deduplication, and it’s important that you understand how to implement and manage these
technologies.
After completing this module, you’ll have the knowledge and skills to:
• Describe and implement the Storage Spaces feature in the context of enterprise storage needs.
• Manage and maintain Storage Spaces.
• Describe and implement Data Deduplication.
Lesson 1: Implement Storage Spaces
Organizations use different storage solutions depending on their business requirements. Some
organizations use physical disks that are attached directly to a server, while other organizations
implement storage area networks (SANs). However, in many scenarios, SANs require special
configurations and hardware, and they can be very expensive for small and medium organizations
to implement. Windows Server includes the Storage Spaces feature, which provides similar
functionalities as hardware-based storage solutions. Storage Spaces is a feature in Windows
Server that pools storage space from different disks and presents it to the operating system (OS)
as a single disk.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe storage needs in the enterprise.
• Describe Storage Spaces, its components, features, and usage scenarios.
Enterprise storage needs
A challenge faced by most IT departments in large organizations is storage provisioning. Although
individual hard disks are relatively inexpensive, the growth of storage-heavy apps means individual
disks aren’t a viable option because of the large quantity that you’d need to use and manage.
IT departments are seeking storage solutions that can address this growing demand, while
managing to keep control of budgetary demands. For instance, by selecting hard disk drive (HDD)
technology over the more expensive Solid State Drive (SSD) technology, you compromise
performance for price. For some applications, that’s a valid decision. However, there will be apps
that require high performance.
In addition to performance characteristics, you should consider capacity. Firstly, how much can
you grow your available storage, and secondly, how quickly? Additionally, you must consider how
easy it is to add capacity to your storage. For example, if you’ve created a volume comprising four
physical disks, and you need to double the capacity, is it possible to add additional disk drives into
the same volume, or must you reconfigure the storage to accommodate the increased capacity?
When considering your organization’s storage solution, remember to plan for high availability.
There will be some apps that provide line-of-business (LOB) functionality that can’t be offline.
In these situations, you should consider disk-fault tolerance, which is the ability to continue
operating despite the loss of one or even several disks that comprise an app’s underlying storage.
There are numerous options available, each of which provides various levels of fault tolerance
and performance. These include mirroring, striping with parity, and storage-replication solutions.
A final consideration is security. In addition to the fundamentals of assigning rights and
permissions to your disk volumes, you should consider encryption. It’s important to protect
your data while it’s at rest, and disk encryption provides this capability. Some operating systems,
including Windows Server, provide a built-in capability, but this doesn’t always extend beyond
direct-attached storage (DAS). Ensure that, should security of this type prove important, you
select a storage solution that provides an encryption capability.
What is the Storage Spaces feature?
Storage Spaces is a storage-virtualization functionality that groups one or more physical disks into a storage pool, and it's built into Windows Server and client operating systems.
The Storage Spaces feature consists of two components:
• Storage pools. A collection of physical disks aggregated into a logical disk so that you can manage multiple physical disks as a single disk. You can use Storage Spaces to add physical disks of any type and size to a storage pool.
• Storage spaces. Virtual disks created from free space in a storage pool. By implementing storage spaces, many functionalities are added to the storage that'll be available to users and applications, such as resiliency level, storage tiers, fixed provisioning, and management tools for administering storage. Administrators won't have to manage single disks but will manage disk space as a unit. Virtual disks are the equivalent of a logical unit number (LUN) on a SAN.

You can manage Storage Spaces by using the File and Storage Services role in Server Manager, Windows PowerShell, and the Windows Storage Management application programming interface (API) in Windows Management Instrumentation (WMI).
System requirements for implementing Storage Spaces include:
• Physical disks. Disks such as SATA or serial-attached SCSI (SAS) disks. If you want to add physical disks to a storage pool, they must satisfy the following requirements:
o One physical disk is required to create a storage pool, and a minimum of two physical disks are required to create a resilient mirror virtual disk.
o A minimum of three physical disks are required to create a virtual disk with resiliency through parity.
o Three-way mirroring requires at least five physical disks.
o You can attach disks by using a variety of bus interfaces, including Small Computer System Interface (SCSI), SAS, Serial ATA (SATA), and Non-Volatile Memory (NVM) Express.
o Disks must be blank and unformatted. No volume can exist on the disks.
o If you want to use failover clustering with storage pools, you can't use SATA, universal serial bus (USB), or SCSI disks.
• Storage pool. A collection of one or more physical disks that you can use to create virtual disks. You can add available, unformatted physical disks to a storage pool. It's important to note that you can attach a physical disk to only one storage pool, but that pool can include several physical disks.
• Virtual disk (or storage space). Virtual disks appear as physical disks to the OS, users, and applications. However, virtual disks are more flexible because they include both thick and thin provisioning, and just-in-time (JIT) allocations. They include resiliency to physical-disk failures with built-in functionality, such as mirroring and parity. Virtual disks resemble Redundant Array of Independent Disks (RAID) technologies, but Storage Spaces stores the data differently.
• Disk drive. Make your disk drives available from your Windows OS by using a drive letter.

After the virtual disk for the storage space is created, you must format it with New Technology File System (NTFS) or Resilient File System (ReFS), depending on application requirements.
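To make the workflow concrete, the following is a minimal end-to-end sketch in Windows PowerShell. The pool and disk names are hypothetical, and the storage subsystem name can vary between systems:
# Pool the available disks, create a mirrored virtual disk, then bring it online as a ReFS volume
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VDisk1 -ResiliencySettingName Mirror -Size 100GB -ProvisioningType Thin
Get-VirtualDisk -FriendlyName VDisk1 | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -FileSystem ReFS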
Additional features in Windows Server Storage Spaces include:
• Tiered Storage Spaces. Allows you to use a combination of disks in a storage space. For example, you could use very fast but small-capacity hard disks such as SSDs with slower, but large-capacity hard disks. When you use this combination of disks, Storage Spaces automatically moves data that's accessed frequently to faster hard disks, and then moves data that's accessed less often to the slower disks. By default, the Storage Spaces feature moves data once daily at 01:00 AM. You can also configure where files are stored. The advantage is that if you have files that are accessed frequently, you can pin them to the faster disk. The goal of tiering is to balance capacity with performance. Windows Server recognizes only two levels of disk tiers: SSD and non-SSD.
• Write-back caching. Optimizes writing data to disks in a storage space. Write-back caching typically works with Tiered Storage Spaces. If the server that's running the storage space detects a peak in disk-writing activity, it automatically starts writing data to the faster disks. By default, write-back caching is enabled. However, it's also limited to 1 gigabyte (GB) of data.
• Persistent memory (PMem). Windows Server includes support for PMem. Use PMem as a cache to accelerate the active working set or as capacity to guarantee consistent low latency on the order of microseconds.
Components and features of Storage Spaces
The Storage Spaces technology has multiple components and features that administrators can use
to create, configure, and manage the storage solution, including:
• Storage layout. Specifies how many storage-pool disks are allocated and how. The options are:
o Simple space. A simple space layout includes disks configured with striping functionality. However, data stored on the disks doesn't have redundancy, because the striping configuration distributes logically sequential data segments across several physical disks, and if some of the physical disks fail, the data's logical structure is no longer available. In this case, you must replace the failed physical disk and recreate the simple space, and then restore the lost data from backup. Striping can improve performance because it's possible to access multiple segments of data simultaneously. You must set up a minimum of two disks if you want to enable data striping.
o Two-way and three-way mirrors. Help protect against disk loss, because two-way mirrors keep two copies of hosted data, while three-way mirrors keep three copies. Duplication occurs with every write action, which means that data copies remain current. Additionally, mirror spaces stripe data across several physical drives. You must configure at least two physical disks if you want to use mirroring. It's important to note that a disadvantage of using mirroring is that the data duplicates on multiple disks, so disk usage is inefficient.
o Parity. Resembles a simple space because data writes across multiple disks. However, parity information also writes across disks when you use a parity storage layout. You can use the parity information to calculate data if you lose a disk. When a drive fails, parity ensures that the Storage Spaces feature keeps performing read and write requests. The parity information always rotates across available disks to enable input/output (I/O) optimization. You must have at least three physical drives if you want the Storage Spaces feature to utilize parity spaces, for which resiliency increases when journaling is used. Additionally, the parity storage layout supplies redundancy and is more efficient than mirroring with respect to utilizing disk space.

Note: The number of disks needed in a storage space can be affected by how many columns there are.
• Disk sector size. When you create a storage pool, the size of its sector is set. The defaults are:
o The pool sector size is 512e, provided the list of drives you're using contains only 512 and 512e drives. A 512 disk uses 512-byte sectors, while a 512e drive is a hard disk that has 4,096-byte sectors and emulates 512-byte sectors.
o The pool sector size is set to 4 KB if the list has at least one 4-kilobyte (KB) drive.
• Cluster disk requirement. If a computer fails, the Failover Clustering feature averts work stoppages. You must ensure that all of your pool's drives support SAS if you want your pool to support failover clustering.
• Drive allocation. Determines how a drive is allocated to the pool. The options are:
o Data-store. When a drive is added to a pool, this is the default allocation. Storage Spaces can automatically select space on data-store drives for storage-space creation and JIT allocation.
o Manual drive. Isn't used as part of a storage space unless it's specifically selected when you create that storage space. You can use this drive-allocation property to specify which types of drives can be used by specific storage spaces.
o Hot spare. Reserve drives that aren't used in a storage space's creation but are added to a pool. If a drive that's hosting storage-space columns fails, one of these reserve drives is used to replace the failed drive.
• Provisioning schemes. There are two ways to provision a virtual disk:
o Thin provisioning space. Enables storage to be allocated readily on a just-enough and JIT basis. A pool's storage capacity is organized into provisioning slabs that aren't allocated until datasets require storage. Thin provisioning trims unused storage to reclaim it, which is in contrast to a more traditional fixed-storage approach where there might be significant storage capacity allocated that goes unused.
o Fixed provisioning space. Uses flexible provisioning slabs, but allocates storage when the space is created initially. Within the same storage pool, you can create thin and fixed provisioning virtual disks, which is convenient, especially when they relate to the same workload. As an example, a thin provisioning space could be designated for a shared folder that contains user files, while you could designate a fixed provisioning space if high disk I/O is required by your database.
• Stripe parameters. You can increase a virtual disk's performance by striping data across multiple physical disks. There are two parameters you can use to configure a stripe when you're setting up a virtual disk: NumberOfColumns and Interleave. A stripe is one pass of data written to a storage space; data is written in multiple passes, or stripes. The physical disks across which a data stripe is written are called columns, and the amount of data that's written to a column, per stripe, is called the interleave. You can specify the width of a stripe by using the NumberOfColumns and Interleave parameters (stripe size = NumberOfColumns × Interleave). If you're using parity spaces, you'd use the stripe width to calculate how much data and parity Storage Spaces writes across disks to increase available performance. Use the Windows PowerShell New-VirtualDisk cmdlet with the NumberOfColumns and Interleave parameters if you want to manage the number of columns and stripe interleave when you're creating a new virtual disk, as the example after this list sketches.
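For instance, a hedged sketch of creating a striped virtual disk with an explicit column count and interleave follows. The names and sizes are illustrative, and the pool must contain at least four disks to support four columns:
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName FastDisk -ResiliencySettingName Simple -NumberOfColumns 4 -Interleave 65536 -Size 200GB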
The Storage Spaces feature is able to utilize any DAS device, such as SATA or SAS drives, as well
as integrated drive electronics (IDE) and SCSI drives when pools are created.
There are several factors to consider when planning Storage Spaces subsystems, including:
• Fault tolerance. If a physical disk fails, does data still need to be available? If so, you must use multiple physical disks and provision virtual disks by using mirroring or parity.
• Performance. Mirror layouts typically provide better performance for read and write actions than parity layouts, and remember to factor in each individual physical disk's speed, as this impacts performance. You can also use different disk types if you want to have tiered storage. For example, if you have data for which you need to provide fast and frequent access, use SSDs, and then use SATA drives if there's data that doesn't need to be accessed as frequently.
• Reliability. Using a parity layout for your virtual disks offers some reliability, which you can improve by using hot-spare physical disks in case a physical disk fails.
• Future storage expansion. Storage Spaces is beneficial because you can add physical disks to a storage pool to expand your storage and offer fault tolerance.
Changes to file and storage services in Windows Server
2022
Windows Server 2022 has introduced several storage improvements since Windows Server 2019.
These include:
• Storage Migration Service. It's now easier to migrate storage from more locations to Windows Server or Azure. You can now use the Storage Migration Service orchestrator on Windows Server 2022 to perform numerous migrations, including:
o Migrating between standalone servers and failover clusters.
o Migrating from failover clusters.
o Migrating storage to failover clusters.
o Migrating storage from a Linux server that uses Samba.
o Using Azure File Sync to synchronize migrated shares into Azure more easily.
• Adjustable storage repair speed. This feature is part of Storage Spaces Direct and provides more control over the repair resync process.
• Faster repair and resynchronization after disk failures.
• Storage bus cache with Storage Spaces on standalone servers.
• ReFS file-level snapshots.
Storage Spaces usage scenarios
The Storage Spaces feature enables organizations to implement a wide variety of storage
solutions. Depending on an organization’s business requirements, it’ll manage multiple physical
disks, pool them in different configuration options, and make the disk space available to users and
applications. Scenarios in which you might use the Storage Spaces feature include when you need:
• Easily implemented and managed storage that's scalable, reliable, and inexpensive. This is often a good choice for small or medium size organizations.
• To aggregate individual drives into storage pools, which you then can manage as a single entity.
• Inexpensive storage with or without external storage.
• Different types of storage in the same pool, such as SATA, SAS, USB, and SCSI.
• The ability to grow storage pools as required.
• To provision storage when required from previously created storage pools.
• To designate specific drives as hot spares.
• To automatically repair pools containing hot spares.
• To delegate administration by pool.
• To use existing tools for backup and restore and use VSS for snapshots.
• To manage either locally or remotely, by using MMC or Windows PowerShell.
• To utilize Storage Spaces with failover clusters.
Note: USB is a supported storage option, but it’s typically more practical when used on a
Windows client or while developing a proof of concept. USB performance also depends on
the performance capabilities of the storage types you choose to pool.
When planning for Storage Spaces, you should consider that:
• Storage Spaces volumes aren't supported on boot or system volumes.
• You should add only drives that are unformatted or nonpartitioned.
• A storage pool must include at least one drive.
• There are distinct requirements for fault-tolerant configurations, including that:
o A minimum of two drives is necessary for a mirrored pool.
o A minimum of three drives is necessary for parity.
o A minimum of five drives is necessary for three-way mirroring.
• All drives in a pool must use the same sector size.
• Storage layers that abstract the physical disks aren't compatible with Storage Spaces, including:
o Virtual hard disks (VHDs) and pass-through disks in a virtual machine (VM).
o Storage subsystems that are deployed in a different RAID layer.
• Fibre Channel and iSCSI aren't supported.
• Failover clusters can use only SAS as a storage medium.
Note: Microsoft Support provides troubleshooting assistance only in environments where you deploy Storage Spaces on a physical machine, not a VM. Additionally, any just-a-bunch-of-disks (JBOD) hardware solutions that you implement must be certified by Microsoft.
When planning for a workload’s reliability in your environment, Storage Spaces provides different
resiliency types. As a result, some workloads are better suited for specific resilient scenarios.
Table 10 depicts these recommended workload types:
Table 10: Recommended workload types

Resiliency type | Number of data copies maintained | Workload recommendations
Mirror | Two (two-way mirror) or three (three-way mirror) | Recommended for all workloads.
Parity | Two (single parity) or three (dual parity) | Sequential workloads with large units of read/write, such as archival.
Simple | One | Workloads that don't need resiliency or provide an alternate resiliency mechanism.
Provision Storage Spaces
During a storage space's provisioning, virtual disks are created from the storage pools. The virtual-disk creation process requires that you perform additional tasks, such as configuring disk-sector size, drive allocation, and the provisioning scheme.
Disk-sector size
A storage pool's disk sector size is configured during the virtual-disk creation process. If you use only 512 and/or 512e drives, the pool defaults to 512e. A 512 drive uses 512-byte sectors. A 512e drive is a hard disk with 4,096-byte sectors that emulates 512-byte sectors. If the list contains at least one 4 KB drive, the pool sector size is 4 KB by default. Optionally, an administrator can explicitly define the sector size that all contained spaces in the pool inherit. After an administrator defines the sector size, the Windows OS only permits users to add drives that have a compliant sector size, meaning 512 or 512e for a 512e storage pool, and 512, 512e, or 4 KB for a 4-KB pool.
Drive allocation
During the Storage Spaces provisioning, you’ll need to configure how the storage pool will allocate
drives. Allocation options include:
• Automatic. The default setting whenever you add a drive to a pool. Storage Spaces can automatically select available capacity on data-store drives for both storage-space creation and JIT allocation.
• Manual. Use when you want to specify that only certain storage spaces can use specific drive types.
• Hot spare. Use hot spares so that Windows Server has a reserve drive to replace a drive that fails. Drives that you add as hot spares to a pool are reserved drives that Storage Spaces won't use when creating a storage space.
Provisioning schemes
When you’re provisioning storage, you must select one of the following provisioning schemes:
• Thin provisioning space. A mechanism that enables Storage Spaces to allocate storage as necessary. When you select thin provisioning, the storage pool organizes storage capacity into provisioning slabs but doesn't allocate them until datasets grow to the required storage size. This way, thin provisioning optimizes the utilization of available storage and saves on operating costs such as electricity, which is required to keep unused drives in operation. However, lower disk performance is a downside to using thin provisioning.
• Fixed provisioning space. With Storage Spaces, fixed provisioned spaces also employ flexible provisioning slabs. Unlike thin provisioning, in a fixed provisioning space, Storage Spaces allocates storage capacity at the time that you create the storage space. If you use fixed provisioning, you must calculate the required storage capacity correctly so that you avoid allocating large pools of storage capacity that might remain unused.
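To illustrate the difference, a short sketch with hypothetical names, creating one thin and one fixed space in the same pool:
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName UserFiles -ResiliencySettingName Mirror -Size 2TB -ProvisioningType Thin
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName SqlData -ResiliencySettingName Mirror -Size 500GB -ProvisioningType Fixed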
Demonstration: Configure Storage Spaces
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Discussion: Compare Storage Spaces to other storage
solutions
Be prepared to discuss these questions with the class:
1. What are the advantages of using Storage Spaces compared to using SANs or NAS?
2. What are the disadvantages of using Storage Spaces compared to using SANs or NAS?
3. In what scenarios would you recommend each option?
Lesson 2: Manage Storage Spaces
After you implement and configure your Storage Spaces, you'll need to maintain them. This might include knowing how to expand your storage pool, how to mitigate disk failure with Storage Spaces, and how to review logs and performance counters.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Manage storage spaces and storage pools.
• Describe event logs and performance counters.
Manage Storage Spaces
You can manage Storage Spaces using several tools, including:
• Server Manager.
• Windows PowerShell.
• Failover Cluster Manager.
• System Center Virtual Machine Manager (SCVMM).
• Windows Management Instrumentation (WMI).
Use Server Manager
You can use Server Manager to perform basic management of your virtual disks and storage pools,
including to:
• Create storage pools.
• Add and remove physical disks from pools.
• Add, create, manage, and delete virtual disks.
Use Windows PowerShell
Windows PowerShell includes numerous advanced-management options for virtual disks. Table 11 describes some of the more common commands:
Table 11: Windows PowerShell commands for managing storage

Get-StoragePool - Lists storage pools.
Get-VirtualDisk - Lists virtual disks.
Repair-VirtualDisk - Repairs a virtual disk.
Get-PhysicalDisk | Where {$_.HealthStatus -ne "Healthy"} - Lists unhealthy physical disks.
Get-VirtualDisk | Get-PhysicalDisk - Lists physical disks that are used for a virtual disk.
Optimize-Volume - Optimizes a volume, performing tasks such as defragmentation, trim, slab consolidation, and storage-tier processing.
Manage disk failure with Storage Spaces
One way to mitigate disk failure with Storage Spaces is to plan carefully before deployment.
It’s inevitable that disks fail but recognizing this fact and planning to mitigate it can provide you
with a reliable foundation for your storage solution. Consider the following when planning your
Storage Spaces:
• Plan for fault tolerance, including:
o Provisioning two-way mirror or single-parity storage spaces.
o Provisioning redundant connections between your file servers and each JBOD node.
o Implementing sufficient JBOD enclosures to tolerate an entire JBOD failure.
o Using fixed provisioning in all storage spaces in your storage pool.
o Using at least five disks for three-way mirror spaces.
o Implementing file-server clustering.
o Provisioning redundant network adapters and network switches.
• Deploy highly available storage pools. In Storage Spaces, deploy parity or mirrored virtual disks. However, remember that physical disks connect to a single system, which itself is a single point of failure. You can mitigate this because Storage Spaces supports the creation of a clustered storage pool. However, to cluster your storage space, there are requirements your environment must meet, including that you must:
o Use three (or more) physical disks to implement two-way mirror spaces.
o Connect all physical disks in a clustered pool by using SAS.
o Ensure all physical disks pass the failover-cluster validation tests and support persistent reservations.
• Many of the problems you might encounter with Storage Spaces occur because of the use of incompatible hardware or, sometimes, firmware issues. To avoid these issues, consider the following:
o Only use certified SAS-connected JBODs.
o Ensure that you install the most recent drivers and firmware on your disks.
o Avoid mixing and matching disk models within a specific JBOD.
 Retire missing disks automatically. If another disk fails before you’ve rectified a missing-disk
problem, integrity can be compromised. By default, a missing disk is just marked as missing.
By changing the RetireMissingPhysicalDisks policy to Always, you cause virtual disk-repair
operations to initiate automatically, which helps restore the pool’s health quickly.
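For example, a minimal sketch of setting this policy with the Set-StoragePool cmdlet; the pool name Pool1 is a placeholder for this illustration:
Set-StoragePool -FriendlyName Pool1 -RetireMissingPhysicalDisks Always  # Pool1 is an example pool name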
 Remember that you must:
o Remove a failed disk to correct the problem in a virtual disk or storage pool.
o Add a new disk to the pool to replace a failed disk.
 Avoid data loss by adding a replacement physical disk before you remove a failed drive from your storage pool.

Keep spare capacity in your pool for virtual disk repairs.

Plan for multiple disk failures.

Provide fault tolerance at the enclosure level.
Storage pool expansion
A key benefit of using Storage Spaces is its flexibility in adding additional storage. However, it’s
important that you investigate how your storage is being used across the disks that comprise
your pool before you add more storage. This is because the blocks from your virtual disks are
distributed across physical disks based on the layout you choose at deployment.
Let’s examine an example.
In Figure 14, there are five disks in the storage pool, but Disk 1 is larger than the other four. Two
virtual disks, Vdisk1 and Vdisk2, consume space across the disks. Specifically, Vdisk1 uses space
across all disks, while Vdisk2 uses space only from Disk 1 through Disk 3.
Figure 14: A five-disk storage pool
In Figure 15, a sixth disk is added to the storage pool:
Figure 15: A six-disk storage pool
If you try to extend Vdisk1, the operation will fail. The layout of Vdisk1 means you can’t use the
available space on Disk 6, because during creation, Vdisk1 was configured to require five disks.
You’d need to add another four disks to be successful. But if you tried to extend Vdisk2, you’d be
successful right away, because its initial configuration required only three disks. For example,
Vdisk2 might just be a virtual disk configured to use two-way mirroring.
Therefore, before you can add storage, you must determine usage. In other words, determine the
current distribution of blocks across your storage devices. This is known as determining column
usage. You can use the Windows PowerShell Get-VirtualDisk cmdlet to do this.
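For example, a minimal sketch that surfaces each virtual disk’s layout and column count; FriendlyName, ResiliencySettingName, and NumberOfColumns are standard properties of the objects that Get-VirtualDisk returns:
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, NumberOfColumns  # column count reflects how data is striped across physical disks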
You can then consider expanding your storage pool using one of the following methods:

In Server Manager, select File and Storage Services, and then select Storage Pools. Add a
physical disk by selecting the pool’s properties, and then selecting Add Physical Disk.
 In Windows PowerShell, use the Add-PhysicalDisk cmdlet to add a physical disk to the
storage pool. For example:
Add-PhysicalDisk -VirtualDiskFriendlyName UserData -PhysicalDisks (Get-PhysicalDisk -FriendlyName PhysicalDisk3, PhysicalDisk4)
Demonstration: Manage Storage Spaces by using
Windows PowerShell
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Event logs and performance counters
You should monitor your storage solution carefully. In Windows Server, you can use event logs to
assist with this process. Table 12 describes common storage events:
Table 12: Common storage events

Event ID 100
Message: Physical drive %1 failed to read the configuration or returned corrupt data for storage pool %2. As a result, the in-memory configuration might not be the most recent copy of the configuration. Return Code: %3.
Possible cause: A physical drive can fail to read the configuration or return corrupt data for a storage pool because:
o The physical drive might fail requests with device I/O errors.
o The physical drive might contain corrupted storage pool configuration data.
o The physical drive might have insufficient memory resources.

Event ID 102
Message: Majority of the physical drives of storage pool %1 failed a configuration update, which caused the pool to go into a failed state. Return Code: %2.
Possible cause: A write failure might occur when writing a storage-pool configuration to physical drives because:
o Physical drives might fail requests with device I/O errors.
o An insufficient number of physical drives are online and updated with their latest configurations.
o The physical drives might have insufficient memory resources.

Event ID 103
Message: The capacity consumption of the storage pool %1 has exceeded the threshold limit set on the pool. Return Code: %2.
Possible cause: The storage pool’s capacity consumption has exceeded the threshold limit set on the pool.

Event ID 104
Message: The capacity consumption of the storage pool %1 is now below the threshold limit set on the pool. Return Code: %2.
Possible cause: The storage pool’s capacity consumption has returned to a level that’s below the threshold limit set on the pool.

Event ID 200
Message: Windows was unable to read the drive header for physical drive %1. If you know the drive is still usable, then resetting the drive health by using the command line or GUI might clear this failure condition and enable you to reassign the drive to its storage pool. Return Code: %2.
Possible cause: Windows couldn’t read a physical drive’s header.

Event ID 201
Message: Physical drive %1 has invalid meta-data. Resetting the health status by using the command line or GUI might bring the physical drive to the primordial pool. Return Code: %2.
Possible cause: A physical drive’s metadata is corrupt.

Event ID 202
Message: Physical drive %1 has invalid meta-data. Resetting the health status by using the command line or GUI might resolve the issue. Return Code: %2.
Possible cause: A physical drive’s metadata is corrupt.

Event ID 203
Message: An I/O failure has occurred on physical drive %1. Return Code: %2.
Possible cause: A physical drive has experienced an I/O failure.

Event ID 300
Message: Physical drive %1 failed to read the configuration or returned corrupt data for storage space %2. As a result, the in-memory configuration might not be the most recent copy of the configuration. Return Code: %3.
Possible cause: A physical drive can fail to read the configuration or return corrupt data because:
o The physical drive might fail requests with device I/O errors.
o The physical drive might contain corrupted storage-space configuration data.
o The physical drive might have insufficient memory resources.

Event ID 301
Message: All pool drives failed to read the configuration or returned corrupt data for storage space %1. As a result, the storage space won’t attach. Return Code: %2.
Possible cause: All physical drives can fail to read their configuration or return corrupt data for storage spaces because:
o Physical drives might fail requests with device I/O errors.
o Physical drives might contain corrupted storage pool configuration data.
o The physical drives might have insufficient memory resources.

Event ID 302
Message: Majority of the pool drives hosting space meta-data for storage space %1 failed a space meta-data update, which caused the storage pool to go in failed state. Return Code: %2.
Possible cause: A majority of the pool drives hosting a storage space’s metadata can fail a metadata update because:
o Physical drives might fail requests with device I/O errors.
o There’s an insufficient number of physical drives with online storage space metadata.
o The physical drives might have insufficient memory resources.

Event ID 303
Message: Drives hosting data for storage space have failed or are missing. As a result, no copy of data is available. Return Code: %2.
Possible cause: A storage-pool drive has failed or been removed.

Event ID 304
Message: One or more drives hosting data for storage space %1 have failed or are missing. As a result, at least one copy of data isn’t available. However, at least one copy of data is still available. Return Code: %2.
Possible cause: One or more drives hosting a storage space’s data have failed or are missing, which means that at least one copy of data isn’t available. However, at least one copy of the data is still available.

Event ID 306
Message: The attempt to map or allocate more storage for the storage space %1 has failed. This is because there was a write failure involved in updating the storage space metadata. Return Code: %2.
Possible cause: Mapping or allocating more storage failed; more physical drives might be required.

Event ID 307
Message: The attempt to unmap or trim the storage space %1 has failed. Return Code: %2.
Possible cause: Unmapping or trimming of the specified storage space failed.

Event ID 308
Message: The driver initiated a repair attempt for storage space %1. Return Code: %2.
Possible cause: No additional action is necessary; the driver initiated a repair for the storage space, which is normal.
It’s also important to monitor your storage’s performance. There are numerous components
that deal with storage requests in your storage architecture, including:

File cache management.

File system architecture.

Volume management.

Physical storage hardware.

Storage Spaces configuration options.
You can use Performance Monitor and Windows PowerShell to monitor storage-pool performance.
For example, use Windows PowerShell to generate and collect performance data by running the
following PowerShell command:
Measure-StorageSpacesPhysicalDiskPerformance -StorageSpaceFriendlyName ContosoStorageSpace1 -MaxNumberOfSamples 90 -SecondsBetweenSamples 5 -ReplaceExistingResultsFile -ResultsFilePath ContosoStorageSpace1.blg -SpacetoPDMappingPath ContosoPDMap.csv
This cmdlet:

Monitors the performance of all physical disks associated with the storage space named
ContosoStorageSpace1.

Captures performance data for 90 seconds at five-second intervals.

Replaces the results files if they already exist.

Stores the performance log in the file named ContosoStorageSpace1.blg.

Stores the physical disk-mapping information in a file named ContosoPDMap.csv.
You can then use Performance Monitor to review the data you collected in the two files
specified:

ContosoStorageSpace1.blg

ContosoPDMap.csv
Lesson 3: Implement Data Deduplication
Data Deduplication is Windows Server 2022 functionality that optimizes a volume’s free space by
searching for duplicated portions and storing them only once. Implementing Data Deduplication
enables an organization to store more data and use less physical disk space.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe Data Deduplication and its components.

Implement and monitor Data Deduplication.

Describe considerations for backup and restore with Data Deduplication.
What is Data Deduplication?
The Data Deduplication feature enables organizations to reduce redundant data by examining a
volume’s data, searching for duplicated portions and storing duplicate data only once, and then
optionally compressing duplicates for additional data savings. Data Deduplication optimizes
redundancies without compromising data fidelity or integrity.
In Windows Server, Data Deduplication transparently removes duplication without changing access
semantics. When you enable Data Deduplication on a volume, a post-process (target)
deduplication is used to optimize the file data on the volume by performing the following actions:

Processes the files on the volume by using optimization jobs, which are background tasks run
with low priority on the server.

Uses an algorithm to segment all file data on the volume into small, variable-sized chunks that
range from 32 to 128 KB.

Identifies chunks that have one or more duplicates on the volume.

Inserts chunks into a common chunk store.

Replaces all duplicate chunks with a reference (or stub) to a single data-chunk copy in the
chunk store.

Replaces each original file with a reparse point, which contains references to its data chunks.

Compresses chunks and organizes them in container files in the System Volume
Information folder.

Removes the primary data stream of the files.
Data Deduplication doesn’t impact write performance because the data isn’t deduplicated while
the file is being written. Windows Server uses post-process deduplication through scheduled
tasks on the local server, but you can run the process interactively by using Windows PowerShell.
However, there’s a small performance impact when reading deduplicated files.
Data Deduplication can potentially process all the data on a selected volume, except for files
that are less than 32 KB in size and files in excluded folders. Before enabling deduplication,
administrators must evaluate whether a server’s volumes are suitable candidates for
deduplication. We also recommend that administrators back up data regularly.
There are several elements that a volume contains after it’s configured for deduplication
and its data is optimized, including:

Unoptimized files. Includes:
o
Files that don’t meet the selected file-age policy setting.
o
Alternate data streams.
o
Files with extended attributes.
o
Other reparse point files.
o
System state files.
o
Encrypted files.
o
Files smaller than 32 KB.
 Optimized files. Includes files that are stored as reparse points, which contain pointers to a
map of the respective chunks in the chunk store that are needed to restore the files when
they’re requested.

Chunk store. Is the location for the optimized file data.

Additional free space. As a result of the data optimization, the optimized files and chunk store
occupy much less space than they did prior to optimization.
ReFS also supports data deduplication and includes a new store that can contain up to 10 times
more data on the same volume when deduplication is applied. ReFS supports volumes up to 64
terabytes (TB) and deduplicates the first 4 TB of each file. It uses a variable-size chunk store
that includes optional compression to maximize savings rates, while the multithreaded,
post-processing architecture keeps performance impact minimal.
Data Deduplication components
Data Deduplication reduces disk utilization by scanning files, dividing those files into chunks, and
then retaining only one copy of each chunk. After deduplication, files are no longer stored as
independent data streams. Instead, Data Deduplication replaces the files with stubs that point to
data blocks that it stores in a common chunk store. The process of accessing deduplicated data is
completely transparent to users and apps. You might find that Data Deduplication increases overall
disk performance, because multiple files can share one chunk cached in memory, so that chunk is
read from disk less often.
To provide better disk performance, Data Deduplication doesn’t run in real time but rather as a
scheduled task. By default, optimization runs one time per hour as a background task.
The Data Deduplication role service consists of several components, including:

Filter driver. Monitors local or remote I/O and manages the data chunks on the file system by
interacting with the various jobs and includes one filter driver for every volume.

Deduplication service. Manages the following job types:
o
Optimization job. Deduplicates and compresses files according to the volume’s data
deduplication policy. After a file’s initial optimization, if the file is then modified and meets
the optimization threshold that the data deduplication policy specifies, the file is
optimized again.
o
Garbage collection. Cleans up data that isn’t being referenced by processing the volume’s
deleted or modified data. Also creates usable volume-free space by processing deleted or
logically overwritten optimized content. It’s important to note that old data in the chunk
stores doesn’t get deleted right away if an optimized file is deleted or overwritten by new
data. Garbage collection is scheduled to run weekly, but you might consider running
garbage collection only after large deletions have occurred.
o
Integrity Scrubbing. Performs data-integrity verification, such as checksum validation and
metadata consistency checking. There’s also built-in redundancy for critical metadata and
popular data chunks. As data is accessed or deduplication jobs process data, if these
features encounter corruption, they record it in a log file. Scrubbing jobs use these features
to analyze the chunk-store corruption logs, and when possible, to make repairs. Possible
repair operations include using the following three sources of redundant data:
•
Backup copies. Deduplication keeps backup copies of popular chunks, which are those
that are referenced more than 100 times, in an area called the hotspot. When soft
damage, such as bit flips or torn writes, occurs to the working copy, deduplication uses
its redundant copy.
•
Mirror image. If using mirrored Storage Spaces, deduplication can use the redundant
chunk’s mirror image to serve the I/O and fix the corruption.
•
New chunk. When a chunk that’s corrupt is included in a file that’s processed, that bad
chunk is deleted and the corruption is fixed using the new incoming chunk.
Note: Initial indicators of corrupt data in hardware or the file system are typically
reported first by the deduplication subsystem because of the additional validations
built into it.
o Unoptimization. Undoes deduplication on all optimized files on a volume, and you
should only perform this job manually. Common scenarios for using this type of job
include decommissioning a server with volumes enabled for Data Deduplication,
troubleshooting issues with deduplicated data, or migrating data to another system
that doesn’t support Data Deduplication. Before you start this job, use the
Disable-DedupVolume Windows PowerShell cmdlet to disable further deduplication activity on
one or more volumes.
After you disable Data Deduplication, the volume remains in the deduplicated state,
and the existing deduplicated data remains accessible. However, the server stops
running optimization jobs for the volume, and it doesn’t deduplicate the new data.
Afterwards, you would use the unoptimization job to undo the existing deduplicated
data on a volume. At the end of a successful deoptimization job, all the data
deduplication metadata is deleted from the volume.
Note: Be cautious when using the unoptimization job, because all deduplicated
data will return to its original logical file size. Verify that the volume has enough
free space to store the expanded data before you start the job.
Deploy Data Deduplication
Before deploying Data Deduplication, we recommend that you use the Deduplication Evaluation
Tool, DDPEval.exe, to determine the expected savings you’ll achieve if you enable deduplication
for a specific volume. This tool, which evaluates local drives and mapped or unmapped remote
shares, is installed automatically in the \Windows\System32\ directory when you enable the
Data Deduplication service.
When the Data Deduplication service is enabled, it creates default deduplication policy settings,
which are usually sufficient for most environments. However, you can customize them if your
organization has specific requirements, including that:

You need to process files more quickly and you know that incoming data is static or read-only.
If this is the case, you can specify a smaller number of days at which to process files by
modifying the MinimumFileAgeDays setting.

There are directories that you don’t want to deduplicate. Add a directory to the exclusion list.

You have file types that you don’t want to deduplicate. You can add a file type to the
exclusion list.

Off-peak hours on the server differ from the default setting, and you need to modify the
Garbage Collection and Scrubbing schedules. You can use Windows PowerShell to change
the schedules.
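For example, a minimal sketch of customizing the first two of these settings with the Set-DedupVolume cmdlet; the drive letter and folder path are placeholders for this illustration:
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 3 -ExcludeFolder "E:\Temp"  # E: and E:\Temp are example values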
Install and configure Data Deduplication
Deploying the Data Deduplication service includes the following steps:
1. Install Data Deduplication components on the server. To install deduplication components on
the server, use one of the following options:
o
Server Manager. Install Data Deduplication by navigating to the Add Roles and Features
Wizard. Under Server Roles, select File and Storage Services, select the File
Services checkbox, select the Data Deduplication checkbox, and then select Install.
o
Windows PowerShell. Use the following Windows PowerShell command to install Data
Deduplication:
Add-WindowsFeature -Name FS-Data-Deduplication
2. Enable Data Deduplication. Use the following options to enable Data Deduplication on the
server:
o
o
Server Manager. From the Server Manager dashboard:
•
Right-click or access the context menu for a data volume, and then select Configure
Data Deduplication.
•
In the Data deduplication box, select the workload you want to host on the volume. For
example, select General purpose file server for general data files or Virtual Desktop
Infrastructure (VDI) server when configuring storage for running VMs.
•
Enter the minimum number of days that should elapse from the file-creation date
before files are deduplicated.
•
Enter the extensions of any file types that shouldn’t be deduplicated.
•
Select Add to browse to any folders with files that shouldn’t be deduplicated.
•
Select Apply to apply these settings and return to the Server Manager dashboard or
select the Set Deduplication Schedule button to establish a deduplication schedule.
Windows PowerShell. Use the following command to enable deduplication on a volume:
Enable-DedupVolume -Volume VolumeLetter -UsageType StorageType
You should replace VolumeLetter with the volume’s drive letter and StorageType with the
value corresponding to the volume’s expected type of workload. Acceptable values include:
o Default. A general-purpose volume.
o HyperV. A volume for Hyper-V storage.
o Backup. A volume that’s optimized for virtualized backup servers.
You can also use the Windows PowerShell cmdlet Set-DedupVolume to configure more
options, such as:
o The minimum number of days that must pass, from the file-creation date, before files are deduplicated.
o The extensions of any file types that shouldn’t be deduplicated.
o The folders to exclude from deduplication.
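Putting these together, a minimal sketch that enables deduplication on a general-purpose volume and then confirms its settings; the drive letter is a placeholder:
Enable-DedupVolume -Volume "E:" -UsageType Default  # E: is an example drive letter
Get-DedupVolume -Volume "E:"  # returns the volume's deduplication settings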
3. Configure Data Deduplication jobs. You can run Data Deduplication jobs manually, on demand,
or use a schedule. The following are the types of jobs which you can perform on a volume:
o
Optimization. Includes built-in jobs that are scheduled automatically so that optimization
occurs on a periodic basis. Policy settings specify how and when optimization jobs
deduplicate a volume’s data and compress file chunks, and you also can start an
optimization job manually by running the following command:
Start-DedupJob -Volume VolumeLetter -Type Optimization
o
Data scrubbing. Scheduled automatically to analyze the volume on a weekly basis and
produce a summary report in the Windows event log. Start a scrubbing job manually by
running the following command:
Start-DedupJob -Volume VolumeLetter -Type Scrubbing
o
Garbage collection. Scheduled automatically to process data on the volume on a weekly
basis. Garbage collection is a processing-intensive operation, so consider waiting until after
the deletion load reaches a threshold to run this job on demand or schedule the job for
after hours. Start a garbage-collection job manually by running the following command:
Start-DedupJob -Volume VolumeLetter -Type GarbageCollection
o Unoptimization. Runs on an as-needed basis and isn’t scheduled automatically.
However, you can use the following command to trigger an unoptimization job on demand:
Start-DedupJob -Volume VolumeLetter -Type Unoptimization
4. Configure Data Deduplication schedules. When you enable Data Deduplication on a server,
three schedules are enabled by default: optimization is scheduled to run every hour, and
garbage collection and scrubbing are scheduled to run one time per week. You can access
the schedules by using the Windows PowerShell cmdlet Get-DedupSchedule.
Scheduled jobs run on all server volumes, so if there’s a specific volume on which you want
to run a job, you can create a new job. You can create, modify, or delete job schedules from
the Deduplication Settings page in Server Manager, or by using the Windows PowerShell
cmdlets: New-DedupSchedule, Set-DedupSchedule, or Remove-DedupSchedule.
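For example, a hedged sketch that creates an additional weekly garbage-collection schedule with New-DedupSchedule; the schedule name, day, and time window are example values:
New-DedupSchedule -Name "WeekendGC" -Type GarbageCollection -Days Saturday -Start 08:00 -DurationHours 5  # all values shown are examples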
Demonstration: Implement Data Deduplication
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Usage scenarios for Data Deduplication
Savings that organizations can achieve by using Data Deduplication depend on the type of data
they’re storing on servers and volume sizes. We recommend that you use the Deduplication
Evaluation Tool before you enable deduplication to calculate volume savings.
Categories of data that have a common savings ratio include:

User documents. Includes group-content publication or sharing, user home folders, and profile
redirection for accessing offline files. When enabled, Data Deduplication can save 30 to 50
percent of your storage space.

Software deployment shares. Includes software binaries, cab files, symbols files, images, and
updates. When enabled, Data Deduplication can save 70 to 80 percent of your storage space.

Virtualization libraries. Includes storage of VHD files, such as .vhd and .vhdx files, for
provisioning to hypervisors. When enabled, Data Deduplication can save 80 to 95
percent of your storage space.

General file share. Includes a variety of all the previously identified data types. When enabled,
Data Deduplication can save 50 to 60 percent of your storage space.
Based on potential savings and typical resource usage in Windows Server, deployment
candidates for deduplication are ranked as follows:

Ideal candidates for deduplication:
o
General purpose file servers.
o
VM deployments.
o
Virtualization depot or provisioning library.
o
Microsoft SQL Server and Exchange Server backup volumes.
o
Virtualized backup VHDs, such as Microsoft System Center Data Protection Manager.
o
LOB servers.
o
Web servers.
o
VDI VHDs (only personal VDIs).
o
Backup servers.
o
Software deployment shares.
o Scale-Out File Server (SOFS) Cluster Shared Volumes (CSVs).
 Candidates that must be evaluated based on content:
o
Static content providers.
o
High-performance computing (HPC).
o
Microsoft Hyper-V hosts.
 Candidates that aren’t ideal for deduplication:
o Windows Server Update Services (WSUS).
o SQL Server and Exchange Server database volumes.
Note: Files that often change and are accessed frequently by users or applications aren’t
good candidates for deduplication because of the potential for constant data changes. Data
deduplication might not be able to process the files efficiently enough. Servers that are
recommended for deduplication don’t change files often, thereby allowing time for data
deduplication to process the files.
Deduplication requires reading, processing, and writing large amounts of data, which consumes
server resources. Therefore, a server that’s constantly at maximum resource capacity might not be
an ideal candidate for deduplication.
Data Deduplication interoperability
In Windows Server, you should consider the following related technologies that also support Data
Deduplication:

Windows BranchCache. Supports Data Deduplication such that if a BranchCache-enabled
server communicates over a WAN with a remote file server that’s enabled for Data
Deduplication, all deduplicated files are already indexed and hashed, so requests for
data from a branch office are quickly computed.

Failover Clusters. Support Data Deduplication such that each node in a cluster must be
running the Data Deduplication feature so that when a cluster is formed, the Deduplication
schedule information is configured in it. As a result, if another node takes over a deduplicated
volume, the scheduled jobs will be applied on the next scheduled interval by the new node.

FSRM quotas. Support Data Deduplication such that administrators can create a soft quota
on a volume root that’s enabled for deduplication. When File Server Resource Manager (FSRM)
encounters a deduplicated file, it’ll identify the file’s logical size for quota calculations.
Consequently, quota usage (including any quota thresholds) doesn’t change when
deduplication processes a file. If you’re using deduplication, all FSRM quota functionality
will work as you expect. This includes volume-root soft quotas and quotas on subfolders.
Note: A soft quota doesn’t enforce the quota limit but generates a notification when the
data on the volume reaches the threshold. If you configure a hard quota, users can’t save
files once the quota limit is reached. A hard quota shouldn’t be enabled on a volume root
folder that’s enabled for deduplication, because the volume’s actual free space and the
quota-restricted space won’t be the same.

DFS Replication. Supports Data Deduplication such that administrators can optimize files on
the replica instance by using deduplication if the replica is enabled for Data Deduplication.
Monitor and maintain Data Deduplication
After you deploy Data Deduplication, you should continue to monitor the systems you enabled for
Data Deduplication to ensure optimized performance.
Monitor and report Data Deduplication
At some point, you’re likely to wonder how big your deduplicated volumes should be. This depends on
several factors, including your specific workloads and hardware specifications. To a great degree,
this means how much data changes and how frequently, which will impact the disk storage
subsystem’s throughput. Consider the following when attempting to estimate volume size:

Deduplication optimization must be able to keep up with the daily data churn.

The total amount of churn relates directly to volume size.

The speed of deduplication optimization depends on the disk storage subsystem’s throughput
rates.
Therefore, to determine the maximum size for your deduplicated volume, you must be able to
accurately estimate the data churn size and the speed of optimization processing on your volumes.
To monitor deduplication and to report on its health, consider using the following options:

Windows PowerShell cmdlets. After you enable Data Deduplication, use the following Windows
PowerShell cmdlets:
o
o
Get-DedupStatus. Supplies deduplication status for volumes that have data
deduplication metadata, including:
•
Deduplication rate.
•
Number/sizes of optimized files.
•
Last run-time of the deduplication jobs.
•
Amount of space saved on the volume.
Get-DedupVolume. Returns the deduplication status for volumes that have data
deduplication metadata. This includes:
•
Deduplication rate.
•
Number/sizes of optimized files.
•
Deduplication settings such as minimum file age, minimum file size, excluded
files/folders, compression-excluded file types, and the chunk redundancy threshold.
o
Get-DedupMetadata. Returns the deduplicated data store’s status for volumes that have
data deduplication metadata.
o
Get-DedupJob. Returns the deduplication status and information for currently running or
queued deduplication jobs.

Event Viewer logs. In Event Viewer, navigate to Applications and Services Logs, select
Microsoft, select Windows, and then select Deduplication.

Performance Monitor data. Run generic server performance counters, such as CPU and
memory, and typical disk counters to monitor throughput rates of jobs that are currently
running.

File Explorer. Use File Explorer to verify deduplication for individual files.
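For example, a minimal sketch that surfaces key savings figures for one volume with Get-DedupStatus; the drive letter is a placeholder, and the properties shown are returned by the cmdlet:
Get-DedupStatus -Volume "E:" | Select-Object Volume, SavedSpace, OptimizedFilesCount  # E: is an example volume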
Maintain Data Deduplication
For ongoing deduplication maintenance, you can use several PowerShell cmdlets, including:

Update-DedupStatus. Scans volumes to compute new Data Deduplication information for
updating the metadata.

Start-DedupJob. Use to launch ad-hoc deduplication jobs.

Measure-DedupFileMetadata. Use to measure the potential disk space that you can reclaim on a volume.

Expand-DedupFile. Expands an optimized file into its original location.
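For instance, a hedged sketch of expanding a single optimized file back to its full size with Expand-DedupFile; the path is a placeholder, and the volume must have enough free space for the expanded file:
Expand-DedupFile -Path "E:\Data\Report.docx"  # example path only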
Backup and restore considerations with Data
Deduplication
Organizations that implement Data Deduplication benefit from greater disk space savings and also
from faster backup and restore operations because space on a volume that needs to be backed
up is reduced, which means that there’s less data to back up. Furthermore, the backup data is
reduced because the total size of the optimized files, non-optimized files, and data deduplication
chunk-store files are much smaller than the volume’s logical size.
Backup operations
Note: Before implementing backup software in your organization, verify with your vendors
whether their backup software supports Data Deduplication.
With deduplication in Windows Server, you can back up and restore individual files and full
volumes and create optimized file-level backups by using the VSS writer. However, it’s
important to note that deduplication in Windows Server doesn’t support backing up or restoring
only the reparse points or only the chunk store.
A backup application can perform an incrementally optimized backup, which means it:

Backs up only the files that were created, modified, or deleted since your last backup.

Backs up the changed chunk-store container files.

Performs an incremental backup at the subfile level.
Note: New chunks are appended to the current chunk-store container. When the container’s
size reaches approximately 1 GB, that container file is sealed, and a new container file is
created.
Restore operations
Restore operations can also benefit from Data Deduplication. Any file-level, full-volume restore
operations can benefit because they’re essentially a reverse of the backup procedure, and less
data means quicker operations. The full volume-restore process occurs in the following order:
1. Restoration of container files and all Data Deduplication metadata occurs first.
2. The complete set of Data Deduplication reparse points are restored.
3. All nondeduplicated files are restored.
Data Deduplication occurs at the file level, so if you trigger a restore process, a block-level restore from an optimized backup is automatically an optimized restore.
Lab 4: Implement Storage Spaces
Please refer to our online lab to supplement your learning experience with exercises.
Lab 5: Implement Data Deduplication
Please refer to our online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions.
1. Describe three usage scenarios of Storage Spaces.
2. List three scenarios for Data Deduplication.
3. List the components of the Data Deduplication service.
Note: To find the answers, refer to the Knowledge check slides in the accompanying
Microsoft PowerPoint presentation.
Module 5: Install and configure
Hyper-V and virtual machines
The concept of virtualization has evolved to include many aspects within the network environment.
What started off primarily as virtual machines (VMs) has expanded to include virtual networking,
virtual applications, and containers to comprise what’s referred to as a software-defined
infrastructure. As a server administrator, you must determine the server and service workloads
that might run effectively in a virtual environment and which workloads should remain in a
traditional, physical environment.
In this module, you’ll learn about key features of the Hyper-V platform and how to configure its key
components. Also, you’ll learn how to create, configure, and manage VMs.
By completing this module, you’ll achieve the knowledge and skills to:

Describe the Hyper-V platform.

Install Hyper-V.

Configure storage on Hyper-V host servers.

Configure networking on Hyper-V host servers.

Configure Hyper-V VMs.

Manage Hyper-V VMs.
Lesson 1: Overview of Hyper-V
Hyper-V was first introduced in Windows Server 2008, and with each subsequent Windows Server
release, Hyper-V has been enhanced with new features. In this lesson, you’ll learn how you can use
Hyper-V to implement virtualization, the Hyper-V key components, and how to manage Hyper-V.
Finally, you’ll learn about Windows Server containers, a new virtualization technology.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe Hyper-V.

Manage Hyper-V with Hyper-V Manager.

Describe Windows Server containers and Docker in Hyper-V.
What is Hyper-V?
Hyper-V is a server role available in Windows Server x64 operating system (OS) versions. It
leverages the underlying hardware’s capabilities to host multiple operating systems running
simultaneously but independently of each other.
When you install the Hyper-V server role, it implements a software layer referred to as the
hypervisor, which is responsible for controlling access to the physical hardware. Such access is
based on the concept of partition isolation, where each individual partition hosts an OS. Hyper-V
has one designated partition known as the parent (or root) partition. The parent partition runs the
Windows Server OS and hosts the virtualization management stack. The parent partition usually
runs on a physical server.
The Hyper-V root partition provides the ability to create child partitions, which can run any OS
supported by the Hyper-V virtualization platform. These partitions are referred to as virtual
machines (VMs). The maximum number of child partitions is mostly dependent on the capacity
of the underlying physical resources, although Hyper-V in Windows Server 2022 does impose
some scalability limits:

The maximum number of running VMs is set to 1,024.

The maximum number of virtual processors is set to 2,048.

The maximum amount of memory is set to 48 terabytes (TB).
Individual VMs operate just like their physical counterparts, and the OS typically is unaware of the
abstraction layer that the hypervisor provides. Partition isolation facilitates sharing and dynamic
allocation of compute, storage, and networking resources to accommodate changing usage
patterns of virtualized workloads.
Scenarios for using Hyper-V
Hyper-V is used to support various scenarios from simple VMs to complex software-defined
infrastructures. You can use Hyper-V to:

Consolidate your server infrastructure.

Provide a virtual development or test environment.

Establish a virtual desktop infrastructure (VDI).

Implement a private cloud infrastructure.
General best practices that you should consider when provisioning Windows Server to function as
a Hyper-V host include:

Provision the host with adequate hardware.

Deploy VMs on separate disks, solid state drives (SSDs), or Cluster Shared Volumes (CSVs) if
using shared storage.

Don’t collocate other server roles.

Manage Hyper-V remotely.

Run Hyper-V by using a Server Core configuration.

Run the Best Practices Analyzer and resource metering.

Use generation 2 VMs if the guest OS supports them.
Subsequent sections of this module will provide more detail about these recommendations.
Manage Hyper-V with Hyper-V Manager
You can use Hyper-V Manager as a graphical user interface (GUI) to manage both local and remote
Hyper-V hosts. It’s available when you install the Hyper-V Management Tools, which is included in a
complete Hyper-V server role installation or which you can install as a tools-only installation.
Hyper-V Manager supports several general features, including:

Previous version support. When using Hyper-V Manager on Windows Server 2022 or Windows
11, you can still manage hosts installed with previous operating systems such as Windows
Server 2019, Windows Server 2016, or Windows Server 2012 R2.

Support for WS-Management protocol. Hyper-V Manager supports connections to Hyper-V hosts
over the Web Services Management Protocol (WS-Management Protocol), which allows Hyper-V
Manager to communicate by using the Kerberos protocol, NTLM, or the Credential Security Support
Provider (CredSSP). By using CredSSP, you remove the need for Active Directory Domain
Services (AD DS) delegation. This makes it easier to enable remote administration because
WS-Management Protocol communicates over ports 80 or 443, which are the default open
ports.

Alternate credential support. Communicating over the WS-Management Protocol enables you
to use different credentials in Hyper-V Manager and save those credentials for ease of
management. However, alternative credentials only work with Windows 10 and newer and
Windows Server 2016 and newer hosts. Older servers installed with Hyper-V don’t support
the WS-Management Protocol for Hyper-V Manager communication.
Hyper-V Manager is the most common interface for managing VMs in Hyper-V, but there are other
tools that provide similar features for specific management scenarios, including:

Windows PowerShell. Provides PowerShell cmdlets that you can use for scripting or command-line administrative scenarios.

PowerShell Direct. Allows you to use Windows PowerShell inside a VM, regardless of the
network configuration or remote-management settings on either the Hyper-V host machine or
the VM.

Windows Admin Center. Is a browser-based application that remotely manages Windows
Servers, clusters, and Windows 10 and newer PCs.

System Center Virtual Machine Manager (SCVMM). Is part of the System Center suite, which
you can use to configure, manage, and transform traditional datacenters. It also helps provide
a unified management experience across on-premises, service providers, and the Azure cloud.
Windows Server containers and Docker in Hyper-V
As previously discussed, when you deploy Hyper-V, each VM has its own OS instance that’s entirely
independent from operating systems in other VMs, as each VM runs its own kernel. As a result, an
issue in one VM’s OS doesn’t cause issues in other VMs. This provides high levels of stability for
the VMs. However, it also uses many resources because memory and processor resources are
allocated to each individual OS that’s running in each VM.
Windows Server containers are a feature in Windows Server that allows you to run multiple
applications independently within a single OS instance, with multiple containers sharing the
operating system’s kernel. This configuration is referred to as OS virtualization, and similar to how
a VM presents hardware resources to an OS, a container is presented with a virtual OS kernel.
A container is used for packaging an application with all its dependencies and abstracting it from
the host OS in which it will run. A container is isolated from the host OS and from other containers,
and the isolated containers provide a virtual runtime, thereby improving the security and reliability
of the apps that run within them.
The benefits of using containers include:

Ability to run anywhere. Containers can run on various platforms such as Linux, Windows, and
Mac operating systems. They can be hosted on a local workstation, on servers in on-premises
datacenters, or provisioned in the cloud.

Isolation. To an application, a container appears to be a complete OS. The CPU, memory,
storage, and network resources are virtualized within the container isolated from the host
platform and other applications.

Increased efficiency. Containers can be quickly deployed, updated, and scaled to support a
more agile development, test, and production life cycle.

A consistent development environment. Developers have a consistent and predictable
development environment that supports various development languages such as Java,
.NET, Python, and Node. Developers know that no matter where the application is deployed,
the container will ensure that the application runs as intended.
Both VMs and containers are virtualization technologies that provide isolated and portable
computing environments for applications and services. Containers build upon the host operating
system’s kernel and contain an isolated user-mode process for the packaged app, which helps
make containers very lightweight and quick to launch. However, VMs simulate an entire computer,
including the virtualized hardware, OS, user mode, and its own kernel mode.
You should use a VM when you:

Need to manage several operating systems.

Need to run an app that requires all the resources and services of an entire OS, such as a GUI.

Need an environment that preserves changes and is persistent.

Require complete isolation and security.
You should use a container when you:

Need a lightweight application package that quickly starts.

Need to deploy multiple instances of a single app.

Need to run an app or process that’s nonpersistent on an on-demand basis.

Need to deploy an app that can run on any underlying infrastructure.
Docker is the management software for containers, and you can use it to retrieve containers from,
and store containers in, a repository. In certain scenarios, containers can be layered together to
provide an entire application. For example, there can be a container for the OS, a container for the
web-server software, and another container for the web-based app. In such a case, Docker can
retrieve all containers required for the app from a repository and deploy them.
Supporting the Docker container is the Docker Engine, which is the core of the Docker platform.
The Docker Engine is a lightweight runtime environment that runs on Linux, macOS, or Windows-based operating systems.
You can use another component, called the Docker client, as a command-line interface (CLI) to
integrate with the engine and run commands for building and managing the Docker containers
provisioned on the host computer.
Docker containers are based upon open standards that allow containers to run on all major Linux
distributions and Microsoft operating systems with support for every infrastructure. Additionally,
because they’re not tied to any specific infrastructure, Docker containers can run on any
computer, infrastructure, and cloud.
Lesson 2: Install Hyper-V
Before you can implement Hyper-V, you must ensure that your servers meet the prerequisites for
installing Hyper-V. This is very important, as you can’t install the Hyper-V server role if they don’t. In
some cases, you might want to implement nested virtualization where a VM running on a Hyper-V
host can also be configured as a Hyper-V host.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe prerequisites and requirements for installing Hyper-V.

Install the Hyper-V role.

Describe the nested virtualization feature.
Prerequisites and requirements for installing Hyper-V
The ability to install Hyper-V depends on hardware support and sufficient capacity of computing
resources. At a minimum, a Hyper-V host should have the following hardware components:

A 64-bit processor with Second-Level Address Translation (SLAT).

A processor with VM Monitor Mode extensions.

A minimum of 4 gigabytes (GB) of memory.

Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) enabled.

Hardware-enforced, Data Execution Prevention (DEP)-enabled (Intel XD bit, AMD NX bit).
Note: Windows Server contains the Systeminfo.exe command-line tool, which includes a
check that automatically determines whether the local hardware meets the Hyper-V
installation prerequisites.
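For example, you might run the tool from a PowerShell prompt and filter its output; this is a sketch, and the exact report text can vary by Windows version:
Systeminfo.exe | Select-String "Hyper-V"  # surfaces the Hyper-V Requirements lines of the report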
As part of your Hyper-V deployment planning, you should also evaluate the resource requirements
for the virtualized workloads you intend to deploy to a given host. Besides capacity and scalability,
you should consider criteria such as resiliency and access to underlying hardware devices, and
specifically:

The number of physical processor cores. Make sure that you can allocate sufficient processing
resources to each VM.

The amount of physical memory. Ensure that there’s enough physical memory to support the
total number of VMs that you intend to run. The minimum amounts are specific to the OS
you’re using and the workloads you’re supporting. In situations where usage patterns differ
across VMs, you can utilize Hyper-V support for dynamic memory allocation.

The amount of physical storage. Ensure that there’s enough storage to accommodate the disk
space you’ll allocate to the virtual hard disks (VHDs) attached to your VMs. Additionally, you
should consider throughput and the number of input/output (I/O) operations per second (IOPS)
required to support concurrent access from multiple VMs.

Network throughput. Ensure that there’s enough available network bandwidth to accommodate
the traffic that VMs generate and receive.

Support for discrete device assignment. This includes graphics processing units (GPUs) and
non-volatile memory express (NVMe).
To install the Hyper-V server role, you can use Server Manager or the Install-WindowsFeature
cmdlet in Windows PowerShell.
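For example, a minimal sketch of the Windows PowerShell option; the -Restart switch reboots the server to complete the installation:
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart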
Demonstration: Install the Hyper-V role
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Nested virtualization overview
The term nested virtualization refers to when the Hyper-V server role is installed within a VM’s OS,
which enables you to provision “nested” VMs within that VM. While this is rarely suitable in
production environments, it’s quite useful for development and testing.
When implementing nested virtualization, there are additional requirements beyond the
prerequisites for traditional Hyper-V server role installations. However, physical processor
requirements are vendor dependent. For example:

When using an Intel processor with VT-x and EPT technology, the Hyper-V host must be running
Windows Server 2016 or newer, and the VM configuration must be version 8.0 or newer.

When using AMD EPYC or AMD Ryzen processors or newer, the Hyper-V host must be Windows
Server 2022, and the VM configuration must be version 10.0 or newer.
Note: You’ll learn about the VM configuration versions later in this module.
Before you configure nested virtualization in a VM, you need to enable virtualization extensions for
its virtual processor. To accomplish this, ensure that the VM is stopped, and then from the Hyper-V
host, run the following Windows PowerShell command (where <vmname> is the placeholder
designating the name of the target VM):
Set-VMProcessor -VMName <vmname> -ExposeVirtualizationExtensions $true
You also need to consider the number of virtual central processing units (vCPUs) assigned to the
VM, the amount of memory allocated to it, and its network configuration. While 4 GB of RAM
should suffice to install the Hyper-V server role, the optimal amount will depend on the planned
number of nested VMs and their memory demands.
Note: Dynamic memory and runtime memory resize aren’t available on VMs running Hyper-V.
From the networking standpoint, establishing connectivity between nested VMs and external
networks typically involves enabling the MAC Address Spoofing feature on the network adapter
of the VM configured as the Hyper-V host. You can perform this task directly from the Hyper-V
Manager console or by using the Set-VMNetworkAdapter PowerShell cmdlet, as the sketch after
this paragraph shows. This feature isn’t necessary to allow connectivity between nested VMs.
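A minimal sketch, run from the physical Hyper-V host; <vmname> is the same placeholder used earlier:
Set-VMNetworkAdapter -VMName <vmname> -MacAddressSpoofing On  # <vmname> designates the VM acting as the nested Hyper-V host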
Alternatively, you can provide connectivity between nested VMs and external networks by
implementing network address translation (NAT) in the VM serving the role of the Hyper-V
host. This is a common approach in environments where access to the Hyper-V host isn’t
available, such as Azure.
After you have enabled virtualization extensions and satisfied all other prerequisites, the rest of
the setup process is the same as when you configure Hyper-V on a physical host.
Lesson 3: Configure storage on Hyper-V
host servers
Similar to a physical computer that needs a hard disk for storage, VMs also require storage.
Instead of physical disks, VMs use VHDs, which can be in .vhd or the newer .vhdx format. There
are also other types of VHDs such as fixed-size and dynamically expanding. You need to know
when it’s appropriate to use the various formats and types of VHDs. You also need to understand
the options for storing VHDs so that you can select an option that meets your requirements for
performance and high availability.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe storage options in Hyper-V.

Describe considerations for VHD formats and types.

Describe Fibre Channel support in Hyper-V.

Choose where to store VHDs.

Explain how to store VMs on Server Message Block (SMB) 3.0 shares.

Manage storage in Hyper-V.
Storage options in Hyper-V
A VHD is a special file format that represents a traditional hard disk drive. Inside a VHD, you can
configure partitions, files, and folders, just as you do with physical hard disks. VMs use VHDs for
their storage.
You can create VHDs by using:

The Hyper-V Manager console.

The Disk Management console.

The Diskpart command-line tool.

The New-VHD Windows PowerShell cmdlet.
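For example, a minimal sketch that creates a dynamically expanding disk in the .vhdx format with New-VHD; the path and size are placeholders:
New-VHD -Path "C:\VMs\Data01.vhdx" -SizeBytes 100GB -Dynamic  # path and size are example values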
VHDs can be created as .vhd or .vhdx files.
.vhd
The .vhd format provides the necessary functionality for VM storage, but it lacks several
performance-optimization and resiliency features that exist in the other two formats. Additionally,
its maximum disk size is limited to 2 TB. All Windows Server versions support this format.
.vhdx
The .vhdx format was introduced in Windows Server 2012, and it was enhanced in Windows Server
2016 and newer versions. It provides the following advantages compared to the .vhd format:

Has a maximum disk size of 64 TB.

Offers built-in protection that helps avoid data corruption resulting from an unexpected power
outage.

Larger block sizes on dynamic and differential disks result in performance improvements.

More efficient data representation leads to smaller file sizes along with a mechanism that
facilitates reclaiming unused disk space.
Based on these benefits, the .vhdx format is recommended when implementing VM disk storage
on Windows Server 2016 or newer.
Note: Microsoft supports conversion between .vhd and .vhdx formats. Converting formats
results in a new disk with content copied from the source disk. Before you run a conversion,
ensure that you have sufficient disk space to complete the conversion.
Windows Server 2016 and later also introduced the .vhds format, which is optimized for
sharing disks among multiple VMs. This format is commonly used to provide shared storage
for VMs operating as nodes in a failover cluster.
VHD types
Windows Server supports multiple VHD types as well. Each has its own set of strengths and
optimal-use cases:

Fixed size. Fixed-size VHDs preallocate the underlying storage equal to the full size of the disk
during disk creation. The primary benefit of this approach is that fragmentation is minimized,
which in turn enhances performance. Another benefit is a minimized CPU overhead associated
with reads and writes, because the need for metadata lookups, which are required when using
other disk types, is eliminated.

Dynamically expanding. Dynamically expanding VHDs allocate the underlying storage as
required, which optimizes its usage. The .vhdx disk format also has the ability to dynamically
shrink if you remove data and stop the VM. However, writing data to a dynamically expanding
disk requires metadata updates and subsequent lookups, which increases CPU utilization.

Differencing. Differencing VHDs rely on a parent disk for any existing read-only content. Writes
are allocated to a dynamically expanding disk. Subsequent reads are serviced from the same
disk, which introduces a CPU overhead required to look up the mapping of data blocks.
However, in scenarios where the multiple VMs rely on the same parent disk, this approach can
lead to significant space savings.

Pass through. Pass-through VHDs map directly to a physical disk or an Internet Small Computer
System Interface (iSCSI) or Fibre Channel logical unit number (LUN) rather than a virtual disk
file. While this tends to improve the impact of storage I/O on CPU usage, it also introduces
several limitations in VM migration scenarios.
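As a sketch of how these types map to the New-VHD cmdlet, the following commands create a fixed-size disk and a differencing disk; all paths and sizes are illustrative:
# Preallocate the full 60 GB during creation
New-VHD -Path "D:\VHDs\Fixed01.vhdx" -SizeBytes 60GB -Fixed
# Create a differencing disk that reads unchanged content from its parent
New-VHD -Path "D:\VHDs\Child01.vhdx" -ParentPath "D:\VHDs\Parent.vhdx" -Differencing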
Considerations for VHD formats and types
Consider the following factors when choosing virtual disk formats and types:
- When running Hyper-V on Windows Server 2016 or later, we recommend using VHDs in the .vhdx format.
- Dynamically expanding VHDs in the .vhdx format offer almost the same level of performance as those in the .vhd format, and are supported for production workloads.
- Dynamically expanding VHDs report free space availability based on the maximum size set during their creation, not the actual amount of underlying storage available. Therefore, it’s important to monitor the availability of physical storage when using dynamically expanding VHDs.
- While it’s possible to link multiple differencing disks, the resulting performance decreases as the number of linked disks increases.
- Any modification to a parent VHD automatically invalidates all corresponding differencing disks.
Fibre Channel support in Hyper-V
When using the Hyper-V platform in enterprise environments, there might be a requirement to
enable access to Fibre Channel storage from VMs deployed on Hyper-V.
Hyper-V Virtual Fibre Channel is a virtual adapter that, when added to a VM, provides direct access
to Fibre Channel and Fibre Channel over Ethernet (FCoE) storage LUNs. Direct access means that I/O operations bypass the root partition, which minimizes their impact on CPU utilization.
This is particularly beneficial when dealing with large data drives and drives shared across multiple
VMs in guest clustering scenarios.
Implementing Virtual Fibre Channel requires:
- One or more Fibre Channel host bus adapters (HBAs) attached to the Hyper-V host.
- A driver for the physical HBA that supports Virtual Fibre Channel.
- A VM guest OS that supports Virtual Fibre Channel.
Virtual Fibre Channel adapters support port virtualization by making the HBA ports accessible from the VM’s guest OS. This allows the guest OS to access the Fibre Channel storage area network (SAN) by using designated World Wide Names (WWNs).
To maximize VM throughput on hosts with multiple HBAs, consider configuring multiple virtual
HBAs (up to the supported limit of four per VM).
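A minimal configuration sketch, assuming the host has a suitable HBA; the SAN and VM names are illustrative:
# Create a virtual SAN bound to the host's Fibre Channel HBA ports
New-VMSan -Name "ProductionSAN" -HostBusAdapter (Get-InitiatorPort)
# Attach a virtual Fibre Channel adapter to the VM (the VM must be turned off)
Add-VMFibreChannelAdapter -VMName "LON-SVR1" -SanName "ProductionSAN"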
Where to store VHDs?
Choosing the optimal storage mechanism for a VM’s virtual disk files is essential to its
performance, stability, and resiliency. In general, you can use locally attached disks, a SAN,
or network-attached storage (NAS), including SMB 3 file shares.
Consider the following factors when planning the location of your VHD files:
- Highly performant connectivity. Determine the baseline bandwidth and latency characteristics between the Hyper-V host and the physical storage under the expected load conditions. Correlate the results with the virtualized workloads and their throughput and latency requirements.
- Highly performant storage. Identify the performance characteristics of the underlying storage, including throughput, IOPS, and latency. Consider utilizing such technologies as SSDs, NVMe storage, and caching.
- Redundant connectivity. Ensure that connections to the underlying storage are highly available. This is particularly critical when using remote storage. Evaluate the feasibility of using such connectivity technologies as Multipath I/O (MPIO) and SMB Multichannel.
- Redundant storage. Ensure that volumes hosting virtual disk files are fault tolerant. Consider using the high availability provided by software-defined storage technologies such as Storage Spaces, Storage Spaces Direct, Cluster Shared Volumes (CSV), and Scale-Out File Server.
- Capacity and scaling. Account for growth projections and expansion capabilities for future storage needs. Monitor storage usage and set up alerting to notify you when available disk space drops below what you consider a safe threshold.
Store VMs on SMB 3.0 shares
Hyper-V supports storing VM files (including virtual disks, configuration files, and checkpoints) on SMB 3.0 (or newer) shares. Such capability is available in both standalone and clustered file server scenarios. The latter offers an economically viable, resilient, and performant alternative to iSCSI or Fibre Channel SAN devices.
SMB is a protocol used by Windows Server–based Software Defined Data Center (SDDC) solutions
including Storage Spaces Direct and Storage Replica. SMB version 3.0 was introduced in Windows
Server 2012 and has been incrementally enhanced in subsequent OS releases.
To implement VMs with their disks residing on an existing SMB 3 share, provide the target share
path when specifying the location of the VM disk files.
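For example, a minimal sketch of creating a VM directly on a share, assuming the illustrative share path \\LON-FS1\VMs and VM name:
# Both the VM configuration (-Path) and its new disk (-NewVHDPath) reside on the SMB share
New-VM -Name "LON-SVR2" -MemoryStartupBytes 2GB -Path "\\LON-FS1\VMs" `
-NewVHDPath "\\LON-FS1\VMs\LON-SVR2\LON-SVR2.vhdx" -NewVHDSizeBytes 60GB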
Note: Ensure that the available bandwidth of the connection to the file share is 1 gigabit per
second (Gbps) or higher.
To enhance resiliency and performance of SMB file shares in failover clustering scenarios,
combine the benefits of the Scale-Out File Server cluster role, SMB Multichannel, and SMB Direct.
Scale-Out File Server offers highly available and load-balanced access to clustered file shares,
with all cluster nodes simultaneously processing read and write requests. SMB Multichannel
automatically identifies redundant paths between the Hyper-V cluster nodes and clustered file
servers, and then routes the storage traffic across them in the optimal manner. SMB Direct further
boosts performance by leveraging capabilities built into Remote Direct Memory Access (RDMA)
devices.
Windows Server also supports Storage Quality of Service (QoS) to control bandwidth usage
between Hyper-V and Scale-Out File Server clusters.
Demonstration: Manage storage in Hyper-V
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lesson 4: Configure networking on Hyper-V
host servers
For the VMs to operate in production environments, it’s critical that they have a connection to
your network. Hyper-V supports a wide variety of network configurations, represented with virtual
switches. Each type of network is appropriate for specific types of scenarios. For example, an
external network provides access to the physical network, but private networks are used to isolate
hosts in a test environment. There are also new features for Hyper-V networking, such as switch-embedded teaming.
By completing this lesson, you’ll achieve the knowledge and skills to:
- Describe types of Hyper-V virtual switches.
- Configure Hyper-V networks.
- Describe best practices for configuring Hyper-V virtual networks.
- Describe advanced networking features in Windows Server Hyper-V.
Types of Hyper-V virtual switches
A Hyper-V virtual switch is a software-based, layer-2 Ethernet network switch whose functionality
becomes available when you install the Hyper-V server role. Similar to its physical counterparts,
the Hyper-V virtual switch’s purpose is to provide the ability to establish connectivity between
different networking endpoints. In the case of a Hyper-V switch, these endpoints can include the
local Hyper-V host, its VMs, and physical networks to which the Hyper-V host is connected.
Hyper-V supports three types of virtual switches, which Table 13 lists:
Table 13: Virtual switches
- External. External virtual switches provide connectivity between the VMs on the Hyper-V host, between the VMs and the Hyper-V host itself, and between the VMs and a physical network. Connectivity relies on a wired or wireless network adapter attached to the Hyper-V host. The adapter can be shared with the host or used exclusively by the VMs connected to the switch.
- Internal. Internal virtual switches provide connectivity between the VMs on the Hyper-V host, and between the VMs and the Hyper-V host itself. Connectivity doesn’t rely on a physical network adapter.
- Private. Private virtual switches provide connectivity between the VMs on the Hyper-V host. Like internal virtual switches, connectivity doesn’t rely on a physical network adapter.
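For reference, each switch type can be created with the New-VMSwitch cmdlet; the switch and adapter names below are illustrative:
# External: bound to a physical adapter and shared with the management OS
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true
# Internal: connectivity for the host and VMs only
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal
# Private: connectivity between VMs only
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private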
When configuring virtual switches, you can also assign a virtual local area network (VLAN) ID to
associate with the management OS. When used with an external switch, you can extend existing
VLANs on an external network to include VMs connected to the same switch.
Note: You can use an internal switch to implement NAT for VMs on a Hyper-V host. NAT maps
an external IP address assigned to the physical network adapter of the host to a set of
internal IP addresses assigned to VMs. As a result, NAT provides VMs with access to external
network resources without exposing them directly via an external IP address.
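A minimal sketch of such a NAT configuration, assuming the illustrative switch name NATSwitch and the private range 172.16.0.0/24:
New-VMSwitch -Name "NATSwitch" -SwitchType Internal
# Assign the gateway address to the host's virtual adapter for the new switch
New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"
# Map the internal range to the host's external address
New-NetNat -Name "VMNAT" -InternalIPInterfaceAddressPrefix 172.16.0.0/24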
Virtual switches also support extensions that further enhance their capabilities, including:
- Network Driver Interface Specification (NDIS) Capture. This extension enables you to capture network traffic that traverses the virtual switch.
- Windows Filtering Platform. This extension enables you to filter traffic that traverses the virtual switch.
Demonstration: Configure Hyper-V networks
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Best practices for configuring Hyper-V virtual networks
As a best practice, when configuring Hyper-V virtual networks, you need to consider performance, resiliency, scalability, and manageability. Specifically, you should consider:
- NIC teaming. Ensure that each Hyper-V host has multiple physical network adapters and uses Switch Embedded Teaming (SET) to implement a highly available and highly performant configuration. This minimizes the potential impact if an individual network adapter fails. Similarly, configure redundant physical network paths between the consumers of virtualized services and your virtualized infrastructure. If you decide to implement NIC teaming within VMs, use load balancing and failover (LBFO).
- Bandwidth management. Use QoS to control the minimum and maximum bandwidth allocation for Hyper-V VM network interfaces (see the sketch after this list). This helps to minimize the impact of changes in network usage patterns across VMs on the same Hyper-V host.
- Performance optimization. Verify that you benefit from built-in performance enhancement technologies such as Dynamic Virtual Machine Multi Queue (VMMQ). In production scenarios that require high throughput and low latency, implement RDMA and SMB Direct. Also, it is recommended that you use network adapters that support Virtual Machine Queue (VMQ).
- Network virtualization. Evaluate the benefits of Windows Server Software Defined Networking (SDN) and determine whether they justify switching from the traditional VLAN-based approach. SDN tends to be more complex to configure than VLANs. However, it offers superior scalability and enables you to implement network isolation while minimizing dependencies on the physical network infrastructure. You also have the option of integrating SDN with VLANs.
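As a sketch of the bandwidth-management practice, the following command reserves and caps bandwidth for a VM’s network adapter; the VM name and values (in bits per second) are illustrative:
# Requires a virtual switch whose minimum bandwidth mode supports absolute reservations
Set-VMNetworkAdapter -VMName "LON-SVR1" -MinimumBandwidthAbsolute 100000000 -MaximumBandwidth 1000000000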
Advanced networking features in Windows Server Hyper-V
Windows Server offers a wide range of features that further enhance the core functionality built into the Hyper-V networking model. Most of these features increase networking performance by minimizing the utilization of server processing resources and by maximizing the reliability and resiliency of network connectivity. They also tend to minimize the administrative overhead required to implement and maintain them. These Windows Server features include QoS, Dynamic VMMQ, RDMA, and SET.
QoS
QoS is a set of networking and storage technologies that provide the ability to control the flow
of network traffic based on its characteristics, resulting in optimized system functionality and
workload performance. QoS also facilitates implementing provisions that allow you to satisfy
performance requirements of workloads that rely on shared infrastructure. To accomplish
this, QoS:
- Monitors and measures the utilization of shared resources.
- Detects changing resource utilization.
- Prioritizes or throttles designated types of workloads, depending on a configuration you specify.
For example, to address temporary network congestion, QoS would prioritize traffic for latency-sensitive services by throttling latency-insensitive ones.
Dynamic VMMQ
Dynamic VMMQ (also known as d.VMMQ) is a feature of the Hyper-V network stack. It dynamically
manages the processing of VM network traffic in response to changes in network conditions and
utilization of system resources. Dynamic VMMQ optimizes the Hyper-V host efficiency and
accommodates seamless processing of burst workloads.
When network throughput is low, Dynamic VMMQ coalesces traffic received on a VM network
adapter to utilize as few processors as possible. This is called queue packing, and it’s beneficial
from the compute standpoint because there’s an overhead associated with managing packet
distribution across multiple processors. To proactively prepare for sudden bursts of network traffic,
Dynamic VMMQ also preallocates resources to idle workloads, which is called queue parking.
Dynamic VMMQ is enabled by default; it doesn’t require any explicit setup beyond installing a
supporting driver. It will autotune the existing workload to maintain optimal throughput for
each VM.
RDMA support for virtual switches
RDMA is a networking technology that provides high-throughput, low-latency communication while minimizing CPU usage. RDMA accomplishes this by leveraging the capabilities of physical network adapters to perform direct data transfers to and from a computer’s memory without involving the CPU.
The original implementation of RDMA in Windows Server 2012 supported RDMA only when using physical adapters without a virtualization layer. Windows Server 2016 extended this support to Hyper-V external switches associated with RDMA-capable adapters. You can also use SET to combine multiple RDMA adapters into a team for bandwidth aggregation and failover protection. Starting with Windows Server 2019, you can configure RDMA in Hyper-V VMs.
Note: Windows Server 2022 introduces another performance optimization technology
referred to as Receive Segment Coalescing (RSC) built into Hyper-V switches. Its role is to
coalesce incoming network packets and process them together as a single segment. This
increases efficiency of traffic delivery for external, internal, and private switches.
SET
Most production deployments require a degree of high availability and performance provisions.
These provisions apply to every infrastructure component, including network connectivity. To
account for such requirements, server hardware typically includes multiple physical network
adapters. To deliver the expected functionality, such adapters are then configured as network
interface card (NIC) teams through software or firmware to provide failover and load-balancing
functionality.
Microsoft introduced built-in NIC teaming in Windows Server 2012 in the form of the load balancing and failover (LBFO) OS component. LBFO enables the configuration of up to 32 network adapters as a team, with automatic failover and bandwidth aggregation. Adapters can be of different makes and models and support different network speeds.
LBFO-based teaming is dynamic. This means that it continuously monitors network data flows
across all team members and redistributes them if needed to maximize the load-balancing
behavior.
Starting with Windows Server 2016, Microsoft also provides an alternative teaming solution in
the form of SET. SET integrates the NIC teaming functionality directly into the Hyper-V virtual
switch. It also enables grouping between one and eight physical Ethernet network interfaces
into a single team.
SET architecture doesn’t expose individual team interfaces. Instead, the Hyper-V switch ports
provide automatic load balancing and fault tolerance in the event of a network interface failure.
SET requires all network interfaces that are members of the same team to have a matching
manufacturer, model, firmware, and driver. However, it offers better stability and performance
than LBFO. It also addresses several limitations of LBFO that stem from its incompatibility with
more recently developed networking technologies such as RDMA and Dynamic VMMQ.
You must create a SET team at the same time that you create the Hyper-V virtual switch. You
can accomplish this by using the New-VMSwitch PowerShell cmdlet with the
-EnableEmbeddedTeaming parameter, as in the following example:
New-VMSwitch -Name TeamedvSwitch -NetAdapterName "NIC 1","NIC 2" -EnableEmbeddedTeaming $true
Lesson 5: Configure Hyper-V VMs
After you’ve installed the Hyper-V host and configured storage and networks for VMs, you can begin to create and configure VMs. When you move VMs from older Hyper-V hosts to Windows Server 2022, you need to be aware of VM configuration versions and how to update them. You must also be aware of the differences between Generation 1 and Generation 2 VMs. Finally, you need to be familiar with hot-adding hardware features and with ways to help protect VMs.
By completing this lesson, you’ll achieve the knowledge and skills to:
- Describe VM configuration versions.
- Describe VM generation versions.
- Create a VM.
- Describe the Hot Adding feature in Hyper-V.
- Describe shielded VMs.
- Describe VM settings.
- Describe best practices for configuring VMs.
What are VM configuration versions?
The configuration version of a Hyper-V VM determines its capabilities. It also affects the range of
options you can use to configure its settings, saved state, and checkpoints. The initial value of the
configuration version is set during VM creation. By default, it depends on the OS of the underlying
Hyper-V host.
Prior to Windows Server 2016 Hyper-V, upgrading the OS would automatically result in upgrading
the configuration version of all VMs on that host. However, starting with Windows Server 2016
(and with all subsequent operating systems), you must explicitly invoke a configuration-version
upgrade. This behavior provides more administrative control over the upgrade process and
facilitates a failback if issues arise.
Table 14 lists the VM configuration version numbers and the corresponding Hyper-V host OS versions that support them:
Table 14: VM configuration versions
- Windows Server 2022: 10.0, 9.3, 9.2, 9.1, 9.0, 8.3, 8.2, 8.1, 8.0
- Windows Server 2019: 9.0, 8.3, 8.2, 8.1, 8.0, 7.1, 7.0, 6.2, 5.0
- Windows Server 2016: 8.0, 7.1, 7.0, 6.2, 5.0
- Windows Server 2012 R2: 5.0
Identify the VM configuration version
To identify the configuration version of VMs on a Hyper-V host, run the following PowerShell
command from the host OS:
Get-VM * | Format-Table Name, Version
Update a single VM
To update the configuration version of a VM, run the following PowerShell commands from the
host OS, where <vmname> is the placeholder that designates the target VM’s name:
Stop-VM -Name <vmname>
Update-VMVersion -Name <vmname> -Confirm:$false
Start-VM -Name <vmname>
Note: The VM must be in the stopped state for the update to succeed.
Updating a VM configuration version automatically sets it to the highest value supported by the
Hyper-V host OS. For example, if you update the configuration version of a VM on a Hyper-V host
running Windows Server 2022, then the configuration version is updated to version 10.0.
Update all VMs on all cluster nodes
To update the configuration version of all VMs on a Hyper-V cluster, run the following PowerShell
command from any of the cluster nodes:
Get-VM -ComputerName (Get-ClusterNode) | Stop-VM
Get-VM -ComputerName (Get-ClusterNode) | Update-VMVersion -Confirm:$false
Get-VM -ComputerName (Get-ClusterNode) | Start-VM
VM generation versions
Windows Server 2022 Hyper-V supports two generations of VMs: Generation 1, and Generation 2.
The generation of a VM determines the virtual hardware and functionality available to the VM.
Generation 2 VMs use a different virtualized hardware model. As a result, they no longer support a
number of legacy devices such as COM ports, the emulated floppy disk drive, and IDE controllers.
Generation 2 VMs also use Unified Extensible Firmware Interface (UEFI) firmware, unlike
Generation 1 VMs, which rely on BIOS.
You must choose the generation of a VM during its provisioning. You can deploy and run a mix of
Generation 1 and Generation 2 VMs on the same Hyper-V host.
Generation 2 VMs offer several advantages over their Generation 1 counterparts, including:
- Secure Boot, leveraging UEFI firmware support.
- Boot from a VHD connected to a virtual Small Computer System Interface (SCSI) controller.
- Boot from a virtual DVD connected to a virtual SCSI controller.
- Pre-Boot Execution Environment (PXE) boot by using a synthetic Hyper-V network adapter.
- A larger boot volume of up to 64 TB when using the .vhdx file format.
- Slightly shorter boot and installation times.
Because of these enhancements, you should use Generation 2 VMs whenever the guest OS supports them. Also, because Generation 2 VMs rely on UEFI firmware rather than BIOS, they don’t support 32-bit operating systems.
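For example, a minimal sketch of creating a Generation 2 VM with PowerShell; the name, path, size, and switch are illustrative:
# The generation is fixed at creation time and can't be changed later
New-VM -Name "LON-SVR3" -Generation 2 -MemoryStartupBytes 2GB `
-NewVHDPath "D:\VMs\LON-SVR3.vhdx" -NewVHDSizeBytes 60GB -SwitchName "ExternalSwitch"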
Demonstration: Create a VM
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
VM settings
In Hyper-V Manager, VM settings are grouped into two main sections: Hardware and Management.
The files that store hardware and management information use two formats: .vmcx and .vmrs. The .vmcx format stores the VM configuration, and the .vmrs format stores runtime state data. This separation helps decrease the chance of data corruption during a storage failure.
VMs use simulated hardware, and Hyper-V uses this virtual hardware to mediate access to actual
hardware. Depending on the scenario, you might not need to use all available simulated hardware.
Generation 1 VMs have the following hardware by default:
- BIOS. Virtual hardware that simulates a computer’s BIOS. You can configure a VM to switch Num Lock on or off. You can also choose the startup order for a VM’s virtual hardware and start a VM from a DVD drive, an IDE device, a legacy network adapter, or a floppy disk.
- Memory. You can allocate memory resources to a VM. An individual VM can allocate as much as 1 TB of memory. You can also configure the Dynamic Memory feature to allow for dynamic memory allocation based on resource requirements.
- Processor. You can allocate processor resources to a VM. You can allocate up to 64 virtual processors to a single VM.
- IDE controller. A VM can support only two IDE controllers and, by default, allocates two IDE controllers to a VM: IDE controller 0 and IDE controller 1. Each IDE controller can support two devices. You can connect VHDs or virtual DVD drives to an IDE controller. The boot device must be connected to an IDE controller if starting from a hard disk drive or DVD-ROM. IDE controllers are the only way to connect VHDs and DVD-ROMs to VMs whose operating systems don’t support integration services.
- SCSI controller. You can use SCSI controllers on VMs you deploy with operating systems that support integration services. SCSI controllers allow you to support up to 256 disks by using four controllers with a maximum of 64 connected disks each. You can add and remove virtual SCSI disks while a VM is running.
- Network adapter. Hyper-V–specific network adapters represent virtualized network adapters. You can use them only with guest operating systems that support integration services.
- COM port. A COM port enables connections to a simulated serial port on a VM.
- Diskette drive. You can map a .vfd floppy disk file to a virtual floppy drive.
Generation 2 VMs have the following hardware by default:
- Firmware. UEFI provides all the features of the BIOS in Generation 1 VMs. It also supports Secure Boot, which is enabled by default.
- Memory. Same as Generation 1 VMs.
- Processor. Same as Generation 1 VMs.
- SCSI controller. Generation 2 VMs can use a SCSI controller for a boot device.
- Network adapter. Generation 2 VMs support hot add or removal of virtual network adapters.
You can add the following hardware to a VM by editing its properties and then selecting Add Hardware:
- SCSI controller. You can add up to four virtual SCSI controllers. Each controller supports up to 64 disks.
- Network adapter. A single VM can have a maximum of eight Hyper-V–specific network adapters.
- Fibre Channel adapter. This adapter allows a VM to connect directly to a Fibre Channel storage area network (SAN). For this adapter, the Hyper-V host should have a Fibre Channel host bus adapter (HBA) with a Windows Server driver that supports Virtual Fibre Channel.
Integration services
Hyper-V technology includes the ability for VMs to interact with their host by using software
components referred to as integration services. Integration services enhance Hyper-V management capabilities, and some of them are critical for VMs to operate in a stable manner.
Commonly used integration services include:
- Hyper-V Heartbeat Service. Allows the host to verify that a VM is running.
- Hyper-V Guest Shutdown Service. Allows initiating shutdown of the VM OS from the host.
- Hyper-V Time Synchronization Service. Synchronizes the VM clock with the host clock.
- Hyper-V Guest Service Interface. Facilitates copying files from the host to a VM.
- Hyper-V PowerShell Direct Service. Makes it possible to initiate a PowerShell session from the host to a VM without network connectivity between the two.
You can selectively enable or disable each individual integration service, although typically you
benefit from having them enabled. In Windows-based guest operating systems, integration
services are implemented as Windows services.
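For example, you can review and selectively enable integration services from the host by using PowerShell; the VM name is illustrative:
# List the integration services of a VM and their current state
Get-VMIntegrationService -VMName "LON-SVR1"
# Enable the file-copy service, which is disabled by default
Enable-VMIntegrationService -VMName "LON-SVR1" -Name "Guest Service Interface"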
Smart Paging
Windows Server Hyper-V supports Smart Paging. This technology helps address situations in which there isn’t enough physical memory available to start a VM. (The amount of memory required to start a VM is commonly higher than the amount required to keep it running.) If this occurs, Smart Paging supplements the temporary memory shortage by using a local disk of the Hyper-V host. Smart Paging leverages the traditional disk paging mechanism.
Smart Paging is meant to provide a workaround to address transient memory availability issues. It
isn’t meant to be a solution for insufficient physical memory when running virtualized workloads.
Using Smart Paging extensively is bound to negatively impact VM performance, because read and
write operations targeting a disk are considerably slower than equivalent operations using physical
memory.
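You can control where the Smart Paging file is placed. A minimal sketch, assuming an illustrative VM name and path:
# Point the Smart Paging file at a fast local volume (the VM should be off when you change this)
Set-VM -Name "LON-SVR1" -SmartPagingFilePath "D:\SmartPaging"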
Resource metering
Resource metering provides a way to track the following Hyper-V VM metrics:
- Average CPU utilization.
- Average, minimum, and maximum memory utilization.
- Maximum amount of allocated disk space.
- Incoming and outgoing network traffic for a network adapter.
Tracking these metrics facilitates a billing model based on resource use, which allocates operational and infrastructure costs according to virtualized resource consumption rather than a flat fee per VM. Capturing and reviewing these metrics over time can also help you discover usage trends and assist with capacity planning.
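A minimal sketch of enabling and reading resource metering; the VM name is illustrative:
# Start collecting metrics for the VM
Enable-VMResourceMetering -VMName "LON-SVR1"
# Report the metrics gathered since metering was enabled or last reset
Measure-VM -VMName "LON-SVR1"
# Clear the counters to begin a new measurement interval
Reset-VMResourceMetering -VMName "LON-SVR1"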
Discrete device assignment
Discrete device assignment provides VMs with direct access to PCIe devices attached to the
Hyper-V host. Most commonly, this functionality targets two device classes: NVMe storage and
GPU-accelerated graphics adapters.
The Hot Adding feature in Hyper-V
There’s a growing number of tasks that you can perform online without affecting the availability of
Hyper-V VMs. Starting with Windows Server 2016, these tasks include the ability to add memory
and network adapters.
Hot-add memory allows you to adjust the amount of memory assigned to a running VM. This accommodates scenarios in which the use of dynamic memory is suboptimal.
Dynamic memory allows you to define the minimum and maximum amount of memory that a VM
can use. The actual value depends on the current memory demand from workloads running within
the VM, and changes dynamically as this demand fluctuates.
Some workloads, such as Microsoft SQL Server or Microsoft Exchange Server, tend to preallocate
all available memory for their use, which typically defeats the purpose of using dynamic memory.
For these types of workloads, you might consider using hot add memory instead.
The hot-add network adapter feature allows you to change the number of network adapters attached to a running VM. It’s limited to Generation 2 VMs, unlike the hot-add memory feature, which is available in both generations of VMs.
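As a sketch, both hot-add operations can be performed from the host while the VM is running; the VM name, memory size, and switch name are illustrative:
# Adjust the memory assigned to a running VM that uses static memory
Set-VMMemory -VMName "LON-SVR1" -StartupBytes 8GB
# Hot-add a network adapter (Generation 2 VMs only)
Add-VMNetworkAdapter -VMName "LON-SVR1" -SwitchName "ExternalSwitch"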
Shielded VMs
In a common Hyper-V deployment, Hyper-V administrators have full access to all VMs. In some
cases, there might be application administrators that have access only to some VMs, but the
administrators for the Hyper-V hosts have access to the entire system. While this might be
convenient from an administrative viewpoint, it’s a security risk, as the contents of VMs could
be accessed by an unauthorized Hyper-V administrator or someone who gains access to the
Hyper-V host.
Shielded VMs is a feature that helps you enhance security for VMs. A shielded VM is BitLocker-encrypted to help protect the data in case the virtual hard drive is accessed directly. The Host Guardian Service controls the keys for decrypting the virtual hard drive.
To create a shielded VM, you need to create a Generation 2 VM. Also, it must include the virtual
trusted platform module (TPM). The virtual TPM is software-based and doesn’t require a hardware
TPM to be present in the server.
When you encrypt a VM, the data inside it is fully protected when the VM shuts down. If someone
copies a VHD and takes it offsite, it can’t be accessed. Hyper-V administrators can still perform
maintenance on the Hyper-V hosts, but they won’t be able to access the VM’s data.
To implement shielded VMs, you implement a guarded fabric, which requires a Host Guardian
Service. The Host Guardian Service runs on a Windows Server cluster and controls access to the
keys that allow the shielded VMs to be started. A shielded VM can be started only on authorized
hosts.
A shielding data file, also known as a provisioning data file or PDK file, is an encrypted file that a tenant or VM owner creates to help protect important VM configuration information, such as the administrator password, Remote Desktop Protocol (RDP) and other identity-related certificates, and domain-join credentials. A fabric administrator uses the shielding data file when creating a shielded VM but is unable to review or use the information contained in the file.
There are two attestation modes that the Host Guardian Service can use to authorize hosts:
- Admin-trusted attestation. Computer accounts for trusted Hyper-V hosts are placed in an AD DS security group. This is simpler to configure but provides a lower level of security.
- TPM-trusted attestation. Trusted Hyper-V hosts are approved based on their TPM identity. This provides a higher level of security but is more complex to configure. Hosts must have TPM 2.0 and UEFI 2.3.1 with Secure Boot enabled.
Note: You can change the attestation mode. An initial deployment can use admin-trusted
attestation, and then you can introduce TPM-trusted attestation when all hosts have a TPM.
We recommend TPM-trusted attestation because it offers stronger assurances, but it requires that
your Hyper-V hosts have TPM 2.0. If you currently don’t have TPM 2.0 or any TPM, you can use host
key attestation. If you decide to move to TPM-trusted attestation when you acquire new hardware,
you can switch the attestation mode on the Host Guardian Service with little or no interruption to
your fabric.
Learn more: To learn more about shielded VMs, refer to Guarded fabric and shielded VMs
overview.
Best practices for configuring VMs
When configuring Hyper-V VMs, keep the following best practices in mind:
- Configure the memory allocation model according to VM workloads. Use either dynamic memory or hot-add memory functionality for server-side applications such as SQL Server or Exchange Server. In either case, set limits that account for the minimum memory requirements while minimizing the potential impact on other VMs running on the same host.
- Avoid using differencing disks for production workloads. Although differencing disks do reduce disk space usage, they typically provide suboptimal performance because multiple VMs share, and simultaneously access, the same parent VHD file.
- Use multiple Hyper-V–specific network adapters connected to different external virtual switches. Configure VMs to use multiple virtual network adapters that are connected to host network adapters, which in turn are connected to separate physical switches. This ensures that network connectivity is retained if a network adapter or a switch fails.
- Store VM files on their own volumes if you aren’t using shared storage. This minimizes the chance of one VM’s VHD growth affecting other VMs on the same server.
Lesson 6: Manage Hyper-V VMs
After you create VMs, you need to understand how to manage them. Managing VMs includes
identifying VM states so that you can know their status. You also need to understand how you can
use checkpoints to capture the state of a VM at a point in time, for later recovery. To back up and
migrate VMs, you can export and import VMs. Finally, you can use PowerShell Direct to manage the
OS in a VM when there’s no network connectivity to the VM.
By completing this lesson, you’ll achieve the knowledge and skills to:
- Manage the VM state.
- Manage checkpoints.
- Create checkpoints.
- Describe how to import and export VMs.
- Describe PowerShell Direct.
- Use PowerShell Direct.
Manage the VM state
It’s important to understand how the state of a VM impacts the resources that it’s using so that
you ensure your Hyper-V host has sufficient resources to support the VMs that reside on it.
Throughout its lifetime, a VM can be placed in several different states, which affects its resource
usage and availability.
VM states include:
- Off. In this state, the VM doesn’t use any memory or processing resources.
- Starting. In this state, the resources required to place the VM in the running state are being allocated to it.
- Running. In this state, the OS running in the VM actively uses its resources.
- Paused. In this state, the VM doesn’t consume any processing capacity, but it does retain the memory that it’s been allocated.
- Saved. In this state, the VM doesn’t consume any memory or processing resources. The memory content that existed prior to completing the save operation is saved as a local file on the Hyper-V host and is restored into memory when the VM is restarted.
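For reference, you can drive these state transitions from PowerShell on the host; the VM name is illustrative:
Start-VM -Name "LON-SVR1"
# Running to Paused, and back
Suspend-VM -Name "LON-SVR1"
Resume-VM -Name "LON-SVR1"
# Running to Saved, and Running to Off (Stop-VM attempts a guest shutdown)
Save-VM -Name "LON-SVR1"
Stop-VM -Name "LON-SVR1"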
Manage checkpoints
One of the significant advantages of virtualization is the ability to save the current state of VMs
almost instantly. On the Hyper-V platform, this capability is referred to as checkpoints.
Note: While originally this capability was referred to as snapshots, the terminology was
changed starting with Windows Server 2012 R2. You might still encounter the term snapshot
when dealing with checkpoints, such as when running PowerShell checkpoint-related
cmdlets such as Export-VMSnapshot.
When you create a checkpoint, Hyper-V creates a differencing disk in the form of an .avhd (or .avhdx) file. The disk stores changes from the previous, most recent checkpoint or, in the case of an initial checkpoint, from the parent VHD.
Windows Server 2016 introduced two types of checkpoints, standard and production, and Windows Server 2022 continues to use both:
- Standard checkpoints. These checkpoints capture the state of a VM’s disks and memory at the point in time when the checkpoint was initiated. Using such checkpoints to restore the content of the VM might result in data consistency issues; stateful distributed workloads that have their own internal change-tracking mechanisms are particularly vulnerable.
- Production checkpoints. These rely on Volume Shadow Copy Service (VSS) on Windows and
File System Freeze on Linux. These software components attempt to quiesce all workload
activity prior to creating a checkpoint, which makes such checkpoints suitable for a reliable
restore. To restore a production checkpoint, the VM must be turned off. This checkpoint type is
used by default.
You can create a checkpoint from the Hyper-V Manager console or the Actions pane of Virtual
Machine Connection. Hyper-V supports up to 50 checkpoints per VM. When creating a checkpoint,
be sure to determine whether the target VM has any external dependencies that might lead to data integrity issues when the checkpoint is restored. If such dependencies exist, consider creating checkpoints for that VM and its dependencies simultaneously.
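A minimal sketch of common checkpoint operations from the host; the VM and checkpoint names are illustrative:
# Confirm that production checkpoints (the default type) are used
Set-VM -Name "LON-SVR1" -CheckpointType Production
# Create a checkpoint, and revert to it later if needed
Checkpoint-VM -Name "LON-SVR1" -SnapshotName "Before update"
Restore-VMSnapshot -VMName "LON-SVR1" -Name "Before update" -Confirm:$false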
The most common checkpoint management tasks involve applying, renaming, deleting, and
exporting checkpoints.
Applying a checkpoint reverts the VM’s state to the point at which the checkpoint was created. Any existing checkpoints are preserved. You also have the option of creating a checkpoint that captures the current state of the VM before you revert to an earlier checkpoint.
Renaming a checkpoint allows you to apply a meaningful naming convention that describes the
corresponding VM state. Checkpoint names are limited to 100 characters.
If you decide to delete a checkpoint, its content is either discarded or merged into the previous
checkpoint or parent VHD. For example, if you delete the most recent checkpoint, its content is
discarded. However, if you delete the previous checkpoint, the content of the corresponding
differencing VHD is merged with its parent to preserve the integrity of the most recent checkpoint.
Exporting a checkpoint produces a fully functional export, which you can move and import to
another location. This represents a viable backup method, allowing you to restore a VM from a
checkpoint.
Keep in mind that checkpoints aren’t meant to replace backups. Because checkpoints use the differencing disk format, their availability is contingent on having access to the parent disk. If the parent disk is lost or its data gets corrupted, differencing disks don’t provide any value.
One way to facilitate the ability to recover a VM from a checkpoint is to export that checkpoint.
Hyper-V then creates VHDs representing the state of the VM at the time when the checkpoint was
generated. Another option involves exporting the VM, including all its checkpoints.
Demonstration: Create checkpoints
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Import, export, and move VMs
Hyper-V import and export tasks enable you to transfer VMs between Hyper-V hosts and create
image-based VM backups.
An export of a VM captures all its VHD files, configuration files, and any existing checkpoint files.
The VM being exported can be running or stopped.
Starting with Windows Server 2016, you can perform a VM import from an exported VM, and from
copies of VM configuration files, VHD files, and checkpoints. This is helpful in recovery scenarios in
which an OS volume failed but other VM files remained intact.
When using the Import Virtual Machine Wizard, you also have the option to fix incompatibilities
that might exist between the source and target Hyper-V hosts, such as different virtual switches or
processors.
When performing an import, you need to choose one of the following options:
- Register the VM in-place (use the existing unique ID). This option recreates a VM without changing the location of its files or its ID.
- Restore the VM (use the existing unique ID). This option copies the VM files either to their original file system path or to another location you specify, and reuses the original ID. This option is equivalent to restoring a VM from backup.
- Copy the VM (create a new unique ID). This option is similar to the restore, but the imported machine gets assigned a new, automatically generated ID.
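As a sketch, the following commands export a VM and then import the copy with a new ID; the paths are illustrative, and <GUID> stands for the configuration file name generated at export time:
Export-VM -Name "LON-SVR1" -Path "D:\Exports"
# Import as a copy and generate a new unique ID
Import-VM -Path "D:\Exports\LON-SVR1\Virtual Machines\<GUID>.vmcx" -Copy -GenerateNewId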
Live migration of a VM is the process of moving the VM from one Hyper-V host to another while the VM is still running. This doesn’t affect users, because the VM’s state is maintained during a live migration, as are the network connections to applications that are in use.
Before Windows Server 2012, the live migration of a VM from one Hyper-V host to another required
shared storage and failover clustering. When the live migration was performed, only configuration
information was moved between the Hyper-V hosts. Starting with Windows Server 2012, you could
perform live migration without failover clustering or shared storage. If the VM is stored on an SMB
share, only the VM configuration data is moved. If the VM is stored locally on a Hyper-V host, all the
VM data is copied to the new Hyper-V host. Moving a VM on local storage takes significantly longer
than it would with shared storage.
PowerShell Direct overview
PowerShell supports creating remote sessions that run on a remote server accessible over a
network. It also allows running commands and scripts remotely from the local PowerShell
console. This functionality, referred to as PowerShell Remoting, relies on the Windows Remote
Management (WinRM) service running on the source and target computer along with a network
connection between them.
Starting with Windows Server 2016, you have the option to establish a session from a Hyper-V host
to any of its VMs by using PowerShell Direct to run PowerShell commands and scripts remotely.
This functionality also relies on the WinRM service. But unlike PowerShell Remoting, it doesn’t
require network connectivity.
PowerShell Direct requires that:
- The host and guest operating systems are running Windows Server 2016 or later.
- You’re running PowerShell as an administrator.
- You provide valid credentials to access the target VM.
- The VM configuration version is 8.0 or later.
To start a PowerShell Direct session on a VM, from the Hyper-V host, run the following command
where <vmname> is the placeholder designating the name of the target VM:
Enter-PSSession -VMName <vmname>
To invoke one or more commands on a VM by using PowerShell Direct, from the Hyper-V host, run the following command:
Invoke-Command -VMName <vmname> -ScriptBlock {<PowerShell commands>}
Demonstration: Use PowerShell Direct
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lab 6: Install and configure Hyper-V
Please refer to the online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions.
1. What is the minimum amount of memory required to install the Hyper-V role?
a. 1 GB
b. 4 GB
c. 8 GB
d. 16 GB
2. Which PowerShell cmdlet should you run from the Hyper-V hosts to enable nested
virtualization?
a. Set-VMProcessor
b. Set-VM
c. Update-VMVersion
d. Invoke-Command
3. Which feature should you enable to allow nested VMs to communicate with an external
network via the Hyper-V physical network adapter?
a. Discrete device assignment
b. Dynamic VMMQ
c. MAC address spoofing
d. SET
4. Which virtual disk format should you use to implement disk sharing among VMs?
a. .avhd
b. .vhd
c. .vhdx
d. .vhds
5. Which virtual disk type relies on a parent disk to store read-only content?
a. Fixed-size
b. Dynamically expanding
c. Differencing
d. Pass-through
6. Which feature does SMB Direct rely on to maximize throughput and minimize latency of
virtualized workloads?
a. Discrete device assignment
b. Hot Adding
c. RDMA
d. Smart Paging
7. What is the maximum number of physical network adapters that SET can use?
a. 4
b. 8
c. 16
d. 32
Note: To find the answers, refer to the Knowledge check slides in the PowerPoint
presentation.
Module 6: Deploy and manage
Windows Server and Hyper-V
containers
The concept of virtualization has evolved to include many aspects within the network environment.
What started off primarily as virtual machines (VMs) has expanded to include virtual networking,
virtual applications, and containers to comprise what’s referred to as a software-defined
infrastructure. This module introduces you to the concept of using and managing containers, which you use to virtualize and package application code and processes to provide a protected and isolated environment. It also provides an overview of Docker.
By completing this module, you’ll achieve the knowledge and skills to:
- Describe containers in Windows Server.
- Deploy containers.
- Explain how to install, configure, and manage containers using Docker.
Lesson 1: Overview of containers in
Windows Server
Windows Server 2022 supports the development, packaging, and deployment of apps and their
dependencies in Windows containers. You can package, provision, and run applications across
diverse environments on-premises or in the cloud using container technology. Windows containers
provide a lightweight and isolated virtualization environment at the operating system (OS) level to
make apps easier to develop, deploy, and manage.
By completing this lesson, you’ll achieve the knowledge and skills to:
- Describe Windows Server containers.
- Describe Hyper-V containers.
- Describe usage scenarios for containers.
- Describe installation requirements for containers.
What are containers?
Traditionally, a software application is developed to run only on a supported processor, hardware,
and OS platform. Software applications typically can’t move from one computing platform to
another without extensive recoding to support the intended platform. With many diverse
computing systems, a more efficient software-development and management platform was
needed to support portability between multiple computing environments.
A container is used for packaging an application with all its dependencies and abstracting it from
the host OS in which it’s to run. Not only is a container isolated from the host OS, but it’s also
isolated from other containers. Isolated containers provide a virtual runtime, improving the security
and reliability of the apps that run within them.
The benefits of using containers include the following:
- Ability to run anywhere. Containers can run on various platforms such as Linux, Windows, and Mac operating systems. You can host them on a local workstation, on servers in on-premises datacenters, or provisioned in the cloud.
- Isolation. To an application, a container appears to be a complete OS. The CPU, memory, storage, and network resources are virtualized within the container and isolated from the host platform and other applications.
- Increased efficiency. You can quickly deploy, update, and scale containers to support a more agile development, test, and production life cycle.
- A consistent development environment. Developers have a consistent and predictable development environment that supports various development languages such as Java, .NET, Python, and Node.js. Developers know that no matter where the application is deployed, the container will ensure that the application runs as intended.
How do containers work?
The processor in a standard Windows computer has two different modes: a user mode and
a kernel mode. Core OS components and most device drivers run in kernel mode, whereas
applications run in user mode.
When you install container technology on a computer, each container creates an isolated,
lightweight silo for running an app on the host OS. A container builds upon and shares most
of the host operating system’s kernel to gain access to the file system and registry.
Each container has its own copy of the user mode system files, isolated from other containers and from the host’s own user mode environment. A container base image, also referred to as a template, provides this isolated user mode; it consists of the user mode system files needed to support a packaged app. Container base images provide a foundational layer of OS services used by the containerized app that the host’s kernel mode layer doesn’t provide (or restricts).
VMs versus containers
Both VMs and containers are virtualization technologies that provide isolated and portable
computing environments for applications and services.
As previously described, containers build upon the host operating system’s kernel and contain an
isolated user mode process for the packaged app. This helps to make containers very lightweight
and quick to launch.
Figure 16 and Figure 17 display the architectural differences between containers and VMs:
Figure 16: Container architecture
Figure 17: VM architecture
VMs simulate an entire computer, including the virtualized hardware, OS, user mode, and its own
kernel mode. A VM is quite agile and provides tremendous support for applications; however, VMs
tend to be large and can take up a lot of resources from the host machine.
Table 15 summarizes the similarities and differences between containers and VMs:
Table 15: Containers vs. VMs
- Isolation. VM: provides complete isolation from the host OS and other VMs. Container: provides lightweight isolation from the host and other containers.
- OS. VM: runs a complete OS, including the kernel. Container: runs only the user mode portion of an OS.
- Guest compatibility. VM: can run any supported OS inside the VM. Container: if running in process isolation mode, must run the same type of kernel (OS) as the host; Hyper-V isolation mode provides more flexibility for running non-Windows containers on Windows hosts.
- Deployment. VM: deployed by using Hyper-V Manager or other VM management tools. Container: deployed and managed by using Docker; you can deploy multiple containers by using an orchestrator such as Azure Kubernetes Service.
- Persistent storage. VM: uses virtual hard disk files or a Server Message Block (SMB) share. Container: uses Azure disks for local storage and Azure Files (SMB shares) for storage shared by multiple containers.
- Load balancing. VM: uses a Windows failover cluster to move VMs as needed. Container: uses an orchestrator to start and stop containers automatically.
- Networking. VM: uses virtual network adapters. Container: creates a default network address translation (NAT) network, which uses an internal vSwitch and the WinNAT Windows component.
Use a VM when you:
- Need to manage several operating systems.
- Need to run an app that requires all the resources and services of an entire OS, such as a graphical user interface (GUI).
- Need an environment that preserves changes and is persistent.
- Require complete isolation and security.
Use a container when you:
- Need a lightweight application package that starts quickly.
- Need to deploy multiple instances of a single app.
- Need to run an app or process that’s nonpersistent, on an on-demand basis.
- Need to deploy an app that can run on any underlying infrastructure.
Note: It’s quite common to provision containers within a highly optimized VM to provide
enhanced isolation and security. The next topic describes isolation modes, which provide
an option to provision container runtime isolation.
Isolation modes
Windows containers can run in one of two distinct isolation modes. Both modes support identical processes for creating, managing, and running containers. However, the modes differ in the degree of isolation and security between the container, other containers, and the host OS.
Windows containers support the following isolation modes:
- Process isolation. Considered the traditional isolation mode for Windows containers, process isolation allows multiple container instances to run concurrently on a host. When running in this mode, containers share the same kernel and host OS. Each provisioned container features its own user mode to allow Windows and app processes to run in isolation from other containers. When you configure Windows containers to use process isolation, containers can run multiple apps in isolated states on the same computer, but they don’t provide security-enhanced isolation.
- Hyper-V isolation. With Hyper-V isolation, each container runs inside a highly optimized VM. The advantage of this mode is that each container effectively gets its own kernel, providing an enhanced level of stability and security. The VM provides an additional layer of hardware-level isolation between each container and the host computer. When deployed, a container using Hyper-V isolation mode starts in seconds, much faster than a VM with a complete Windows OS.
Note: Windows containers running on Windows Server default to using process isolation.
Windows containers running on Windows 11 Pro and Enterprise default to running in
Hyper-V isolation mode.
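For example, when Docker manages the containers, you can choose the isolation mode per container at run time from the host’s command line; the base image tag shown is illustrative:
# Run a container with Hyper-V isolation instead of the host's default mode
docker run --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c echo hello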
Container definitions
You should be familiar with the following terms that relate to containers:
- Container host. The physical computer or VM on which the Windows containers feature is installed.
- Sandbox. Consists of all changes made to a container, including file system changes, software installations, and registry changes.
- Container image. Enables you to capture the changes made to a container in its sandbox. For example, if you install an app in a container and capture its image, you can create new containers based on that new image. Each will have the app installed and include any related changes.
- Container OS image. Provides the OS environment and is immutable. This image is the first layer of potentially several image layers that make up a container.
- Container repository. Stores the container image and its dependencies when you create a container image.
Overview of containers in Windows Server
Windows Server provides support for two types of containers:
- Windows Server containers. Provide app isolation through process and namespace isolation technology. These containers share the OS kernel with the host and all containers that are running on the host. Although this provides for quick startup, it doesn’t ensure complete isolation of each container.
- Hyper-V containers. Extend the isolation of Windows Server containers by running each container in a VM. These containers don’t share the OS kernel with the host, but if you run more than one container in a VM, those containers share the VM’s kernel.
Containers appear like a complete OS to apps running within them. So, in some respects,
containers are similar to VMs because they run an OS, support file systems, and can be accessed
across the network. Having said that, the technology and concepts of containers are very different
from VMs.
Overview of Windows Server containers
A physical computer or VM has a single user mode that runs on top of a single kernel mode. Windows Server allows for multiple user modes, which enables you to run multiple isolated apps. When using Hyper-V virtualization, you can have multiple VMs, each of which has its own kernel mode and user mode. Each app can run in its own user mode on its own VM. Containers enable you to provide several user modes per kernel mode.
With Windows Server containers, your server deploys with the Windows Server OS, which has a kernel mode and a user mode. In user mode, the OS manages the container host (the computer that
hosts containers). A special stripped-down version of Windows Server is used to create a
container.
Note: This stripped-down version is stored in a container repository as a container OS image.
The Windows container features only a user mode, which enables Windows processes and app
processes to run in the container. These processes are isolated from the user mode of any other
containers.
Important: This is the key difference between Windows containers and Hyper-V containers
because a Hyper-V VM runs a guest OS with both a user and a kernel mode.
Although virtualizing the user mode of the OS allows Windows Server containers to run multiple apps in an isolated state on the same computer, they don't offer security-enhanced isolation.
Overview of Hyper-V containers
VMs provide an isolated environment for running apps and services and a full guest OS with both
kernel and user modes. On the physical host running the Hyper-V role, the management OS is
known as the parent partition, which is responsible for managing the host. Each VM, or child
partition, runs an OS with both user and kernel modes.
Hyper-V containers are also deployed as child partitions. However, the guest OS in a Hyper-V container isn't the entire Windows OS; rather, it's a pared-down, optimized version of Windows Server. Security-enhanced isolation is provided between:
• The Hyper-V container.
• Other Hyper-V containers on the host.
• The hypervisor.
• The host's parent partition.
Hyper-V containers:
• Use the base container image that's defined for the app.
• Automatically create a Hyper-V VM by using that base image.
• Start up in seconds, far faster than a VM with a full Windows OS.
• Feature an isolated kernel mode, a user mode for core system processes, and a container user mode.
After a Windows container is started and is running inside a Hyper-V VM:
• The app is provided with kernel isolation and separation from the host's patch and version level.
• You can choose the level of isolation you require during deployment by choosing either a Windows container or a Hyper-V container.
Note: With multiple Hyper-V containers, you can use a common base image that doesn’t
need manual management of VMs; the VMs create and delete themselves automatically.
Usage scenarios
You can use either Windows Server containers or Hyper-V containers for numerous practical
applications in your organization. Although there are many similarities between Windows Server
containers and Hyper-V containers, there are some key differences. Whether you choose Windows
Server or Hyper-V containers depends on your specific needs.
Windows Server containers
Windows Server containers are preferred in scenarios where:
• The OS trusts the apps it hosts.
• All the apps trust each other.
To put it another way, the host OS and apps occupy the same trust boundary, which often is true with respect to:
• Multiple-container apps.
• Apps that compose a shared service of a larger app.
• Apps from the same organization.
Ensure that the apps you host in Windows Server containers are stateless.
Important: Stateless apps don’t store any state data in their containers.
It’s also important to remember that containers don’t provide a GUI, so some apps aren’t suited to
this environment. So, stateless web apps with no GUI make ideal candidates for using Windows
container technologies in Windows Server.
You can use containers to package and deliver distributed apps quickly. For example, you might
have a line-of-business (LOB) app that requires multiple deployments, perhaps weekly or even
daily. Windows Server containers are an ideal way to deploy these apps because you can create
packages by using a layered approach to building a deployable app.
With Windows Server containers, your developers can spend more time developing apps while requiring fewer resources. This is because, compared to Hyper-V containers, Windows Server containers:
• Start quicker.
• Run faster.
• Support a greater density.
Hyper-V containers
Hyper-V containers:
• Have their own copy of the Windows OS kernel.
• Have memory assigned directly to them.
Note: This is a key requirement of strong isolation.
Similar to VMs, you would use Hyper-V containers in scenarios that require CPU, memory, and
input/output (I/O) isolation. The host OS shares a small, constrained interface with the container
for communication and sharing of host resources. This sharing, while limited, means that Hyper-V
containers:

Start up more slowly than Windows Server containers.

Provide the isolation you require to enable untrusted apps to run on the same host.
The trust boundary in Hyper-V containers provides security-enhanced isolation between:
• The Hyper-V containers on the host.
• The hypervisor.
• The host's other processes.
For this reason, Hyper-V containers are the preferred virtualization model in multitenant
environments.
Installation requirements for containers
When planning for containers in Windows Server, you should be familiar with the requirements for
Windows Server and understand the supported scenarios for both Windows Server containers and
Hyper-V containers in Windows Server.
Windows container host requirements
When you're planning your deployment, the Windows container host has the following requirements:
• The Windows container role is available on:
  o Windows Server 2016 and newer.
  o Windows 10 and newer.
• If you deploy Hyper-V containers, you must first install the Hyper-V server role.
• Windows Server container hosts must have the Windows OS installed on drive C.
Tip: This restriction doesn’t apply if you deploy only Hyper-V containers.
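As a minimal sketch, the following Windows PowerShell commands install the components this topic describes; both changes require a restart:
# Install the Containers feature (required for all Windows containers)
Install-WindowsFeature -Name Containers
# Install the Hyper-V role (required only if you deploy Hyper-V containers)
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools
Restart-Computer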
Virtualized container host requirements
If you deploy a Windows container host on a Hyper-V VM that's hosting Hyper-V containers, you must enable nested virtualization, which has the following requirements:
• At least 4 gigabytes (GB) of memory available for the virtualized Hyper-V host.
• On the host system:
  o Windows Server 2016 or newer.
  o Windows 10 or newer.
• On the container host VM:
  o Windows Server 2016 or newer.
  o Windows 10 or newer.
• A processor with Intel VT-x and Extended Page Tables (EPT) technology.
• A Hyper-V VM with configuration version 8.0 or newer.
• At least two virtual processors for the container host VM.
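For example, the following sketch, run on the physical Hyper-V host, verifies the VM configuration version and then exposes the virtualization extensions with two virtual processors. The VM name ContainerHost is a placeholder, and the VM must be turned off when you change the processor settings:
# Confirm the VM configuration version is 8.0 or newer
Get-VM -Name ContainerHost | Select-Object Name, Version
# Expose hardware virtualization extensions and assign two virtual processors
Set-VMProcessor -VMName ContainerHost -Count 2 -ExposeVirtualizationExtensions $true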
Lesson 2: Prepare for containers
deployment
To support your organization’s app requirements, you should understand the fundamentals of how
to enable and configure Windows Server to support containers.
By completing this lesson, you'll achieve the knowledge and skills to:
• Prepare Windows Server containers for deployment.
• Prepare Hyper-V containers for deployment.
• Deploy package providers.
Prepare Windows Server containers
When you plan to use containers in Windows Server, you must first deploy a container host.
Remember that you can:
• Deploy containers on a physical host computer or within a VM.
• Use Windows Server with or without Desktop Experience.
Use the following high-level procedure to prepare Windows Server for containers:
1. Install the container feature. This step enables the use of Windows Server and Hyper-V
containers.
2. Create a virtual switch. All containers connect to a virtual switch for network communications.
The switch type can be Private, Internal, External, or NAT.
3. Configure NAT. If you want to use a virtual switch configured with NAT, you must configure the
NAT settings.
4. Configure media access control (MAC) address spoofing. If your container host is virtualized,
you must enable MAC address spoofing.
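A minimal sketch of steps 2 through 4 in Windows PowerShell, assuming a NAT network of 172.16.0.0/24 and a virtualized container host VM named ContainerHost (both placeholders):
# Step 2: Create a virtual switch for container networking
New-VMSwitch -Name ContainerNAT -SwitchType Internal
# Step 3: Configure NAT for the switch's address range
New-NetNat -Name ContainerNAT -InternalIPInterfaceAddressPrefix 172.16.0.0/24
# Step 4: Enable MAC address spoofing (run on the physical host)
Set-VMNetworkAdapter -VMName ContainerHost -MacAddressSpoofing On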
Prepare Hyper-V containers
As with Windows containers, when you plan to use containers in Windows Server, you must first
deploy a container host, and then use the following high-level steps to prepare your Windows
Server host for Hyper-V containers:
1. Install the container feature.
2. Enable the Hyper-V role.
3. Enable nested virtualization.
4. Configure virtual processors.
5. Create a virtual switch.
6. Configure NAT.
7. Configure MAC address spoofing.
Deploy package providers
When you deploy containers, you start with a base image, such as a Windows Server Core image.
Base images aren’t included in Windows Server, so you must use a package provider to retrieve
and manage the base images for your container deployments. All package providers are installed
on demand.
Tip: The exception is PowerShellGet.
After you’ve installed and imported a provider, you can search, install, and perform an inventory on
that provider’s software packages. Each provider includes specific Windows PowerShell cmdlets.
The following are common packages:
• PowerShellGet. Installs Windows PowerShell modules and scripts from the online gallery.
• ContainerImage. Use to discover, download, and install Windows container OS images.
• DockerMsftProvider. Use to discover, install, and update Docker images.
• DockerProvider. Use for Docker Enterprise Edition for Windows Server.
• NuGet. Use for C# packages.
• WSAProvider. Use to discover, install, and inventory Windows Server App (WSA) packages.
• MyAlbum. Use to discover photos in a remote file repository and install them locally.
After you locate the package providers you want, you must install them on your computer. You can use the Install-PackageProvider Windows PowerShell cmdlet to install package providers that are available in package sources registered with PowerShellGet. After you install the package provider, it enables additional Windows PowerShell cmdlets or application programming interfaces (APIs).
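For example, the following sketch uses the DockerMsftProvider package provider listed earlier to install Docker; the -Force switch suppresses confirmation prompts:
# Discover matching package providers
Find-PackageProvider -Name Docker*
# Install the package provider
Install-PackageProvider -Name DockerMsftProvider -Force
# Use the provider to install the Docker package
Install-Package -Name docker -ProviderName DockerMsftProvider -Force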
Lesson 3: Install, configure, and manage
containers
Docker provides functionality for apps in both the hybrid cloud and on-premises locations for
Windows and Linux environments. In this lesson, you’ll learn how to install, configure, and manage
containers by using Docker.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe Docker, its components, and support for Docker in Windows Server 2022.
• Describe usage scenarios for Docker.
• Describe Docker management.
• Describe Docker Hub.
• Describe Docker in Azure.
What is Docker?
Docker is a collection of open-source tools, solutions, and cloud-based services that provide a
common model for packaging or containerizing app code into a standardized unit for software
development. Called a Docker container, this standardized unit is software that’s wrapped in a
complete file system that includes everything it needs to run: code, runtime, system tools, and libraries.
At the heart of Docker is the Docker engine, which is a runtime environment that runs on Windows,
macOS, or Linux operating systems. The Docker client provides a command-line interface for
management.
Docker containers are standards-based, which enables them to run on all major Linux distributions
and Microsoft operating systems. In fact, Docker containers can run on any computer, on any infrastructure, and in the cloud.
With Docker, you can create, remove, and manage containers. You can also browse the Docker
Hub to access and download prebuilt images. In most organizations, the most common
management tasks that use Docker include:
• Automating the creation of container images using Dockerfile on a Windows OS.
• Managing containers by using Docker.
Docker support in Windows Server
Windows Server provides a built-in, native Docker daemon for Windows Server hosts, so you can
use Docker containers, tools, and workflows in production Windows environments.
Docker Enterprise Edition for Windows Server
Docker Enterprise Edition (EE) provides a Containers as a Service (CaaS) platform for IT that
manages and helps secure varied applications across disparate operating systems and
infrastructures, both in the cloud and on-premises. Docker EE works with both Linux and Windows
operating systems and with various cloud-service environments. Docker EE is simple to install and
enables you to create an environment that’s both native and optimized for your platform.
Docker EE has the following features:
• A certified infrastructure that delivers an integrated environment for:
  o Enterprise Linux.
  o Amazon Web Services.
  o Windows Server.
  o Microsoft Azure.
• Certified Containers.
• Certified Plug-ins.
Docker EE for Windows Server provides several features and benefits, including that it:
• Is available free of charge.
• Enables lightweight containers that start up fast and optimize use of system resources.
• Provides container isolation that helps eliminate conflicts between dissimilar versions of Internet Information Services (IIS) and .NET, enabling them to coexist on a single system.
• Utilizes new base container images such as Windows Server Core.
• Provides a consistent user experience because it has the same commands as Docker for Linux.
• Adds isolation properties with Hyper-V containers selected at runtime.
Docker components
It's important you're familiar with basic Docker terminology, including the following:
• Image. A stateless collection of root-file system changes in the form of layered file systems that are stacked on one another.
• Container. A runtime instance of an image, consisting of:
  o The image.
  o A standard set of instructions.
  o The operating environment.
• Dockerfile. A text file that contains the commands you need to build a Docker image (see the sketch after this list).
• Build. The process of building Docker images from a Dockerfile and any other files in the directory where the image is being built.
• Docker toolbox. A collection of platform tools including:
  o Docker Engine. Use for building and running Docker containers.
  o Docker Compose. Enables you to define a multiple-container app together with any dependencies so that you can run it with a single command.
  o Docker Machine. Enables you to provision Docker hosts by installing the Docker Engine on a computer in your datacenter or on a cloud provider.
  o Docker client. Includes a command shell that's preconfigured as a Docker command-line environment.
  o Docker Registry. Forms the basis for the Docker Hub and Docker Trusted Registry.
  o Docker Swarm. Enables you to combine multiple Docker Engines into a single virtual Docker Engine.
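As a minimal sketch, the following hypothetical Dockerfile builds an IIS image on top of the Windows Server Core base image; the image names and tag are illustrative:
# Dockerfile: each instruction adds a layer to the image
FROM mcr.microsoft.com/windows/servercore:ltsc2019
RUN powershell -Command Install-WindowsFeature Web-Server
CMD ["cmd"]
You then build the image from the directory that contains the Dockerfile:
docker build -t iis-site .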
In addition to the components and tools in the Docker toolbox, the following Docker solutions are also important to understand:
• Docker Hub. A cloud-hosted service for registering and sharing your Docker images.
• Docker Trusted Registry. Enables you to store and manage images in on-premises environments or in a virtual private cloud.
• Universal Control Plane. Enables you to manage Docker apps.
• Docker Cloud. Enables you to deploy and manage your Docker apps.
• Docker Datacenter. An integrated platform for deploying CaaS in on-premises environments and in a virtual private cloud.
Usage scenarios
When you begin to use containers, you’ll discover that you’ll be working with many containers for
each app. Tracking and managing these containers requires organization. By using Container
Orchestration, you can begin to bring order to your containers. Additionally, advanced
administration tools can help make container management easier.
Container Orchestration
Container Orchestration enables you to define how to coordinate the containers used when you
deploy a packaged app that utilizes multiple containers. By using Docker tools at a command line
interface, you can manage individual containers fairly easily. But when you’re working with a large
number of containers, that approach becomes less feasible; this is where orchestration is helpful.
DevOps
The Docker platform provides your developers with tools and services that they can use to:
• Build and share images through a central repository of images.
• Collaborate on developing containerized apps by using version control.
• Manage infrastructure for apps.
Docker can help your developers build, test, deploy, and run distributed apps and services. With Docker for Windows, your developers can now use Docker tools for Microsoft Visual Studio when:
• They add Docker assets for Debug and Release configurations to their projects.
• They add a Windows PowerShell script to their project to coordinate the build and composition of containers.
Microservices
Microservices provide an approach to app development in which each part of an app deploys as a fully self-contained component. When you construct an app with microservices, each subsystem is a microservice.
Microservices scale well. For example, in your test and development environment, perhaps on a
single computer, microservices might each have a single instance. However, when you run the app
in your production environment, based on resource demands, each microservice scales out to
multiple instances spanning a cluster of servers.
Using Docker containers in this scenario provides the following benefits:
• Each microservice can quickly scale out to meet increased load.
• The namespace and resource isolation of containers also helps prevent one microservice instance from interfering with others.
• The Docker packaging format and APIs unlock the Docker ecosystem for the microservice developer and app operator.
Demonstration: Deploy Docker Enterprise Edition and use
Docker to pull an image
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Overview of management with Docker
You can use Docker to support a container environment. After you install Docker, use the following commands to manage your containers:
• docker images. This lists the available images on your container host. You might recall that you use container images as a base for new containers.
• docker run. This creates a container by using a container image. For example, the following command creates a container named IIS that uses the default process isolation mode and is based on the Windows Server Core container image:
docker run --name IIS -it windowsservercore
• docker commit. This commits the changes you made to a container to a new container image. The commit operation doesn't include data contained in volumes mounted within the container. Note that the container is paused by default while the new container image is created.
• docker stop. This stops a running container.
• docker rm. This removes an existing container.
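The following short worked sequence ties these commands together, reusing the IIS container from the earlier example; the new image name is a placeholder:
# List the images available on the host
docker images
# Create and start a container (process isolation by default on Windows Server)
docker run --name IIS -it windowsservercore
# After making changes inside the container and exiting, capture them as a new image
docker commit IIS windowsservercore-iis
# Stop and remove the original container
docker stop IIS
docker rm IIS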
Download container base images
After you install Docker Engine, the next step is to pull a base image, which is used to provide a
foundational layer of OS services for your container. You can then create and run a container
based on the base image.
A container base image includes:
• The user mode OS files needed to support the provisioned application.
• Any runtime files or dependencies required by the application.
• Any other miscellaneous configuration files the app needs to provision and run properly.
Microsoft provides the following base images as a starting point to build your own container image:
• Windows Server Core. An image that contains a subset of the Windows Server application programming interfaces (APIs), such as the full .NET Framework. It also includes most server roles.
• Windows. Contains the complete set of Windows APIs and system services; however, it doesn't contain server roles.
• Windows Internet of Things (IoT) Core. A version of Windows used by hardware manufacturers for small IoT devices that run ARM or x86/x64 processors.
Note: The Windows host OS version must match the container OS version. To run a container
based on a newer Windows build, you must ensure that an equivalent OS version is installed
on the host. If your host server contains a newer OS version, you can use Hyper-V isolation
mode to run an older version of Windows containers. To determine the version of Windows
installed, run the ver command from the command prompt.
The Windows container base images are discoverable through the Docker Hub and are
downloaded from the Microsoft Container Registry (MCR). You can use the Docker pull command
to download a specific base image. When you enter the pull command, you specify the version
that matches the version of the host machine.
If you want to pull a Windows Server 2019 LTSC Server Core image, run the following command:
docker pull mcr.microsoft.com/windows/servercore:ltsc2019
After you download the base images needed for your containers, you can verify the locally available
images and display metadata information by entering the following command:
docker image ls
Overview of Docker Hub
The Docker Hub is a web-based online library service with which you can:
• Register, store, and manage your own Docker images in an online repository and share them with others.
• Access over 100,000 container images from software vendors, open-source projects, and other community members.
• Download the latest versions of the Docker Desktop.
Docker Hub provides the following major features and functions:
• Image repositories. Docker Hub contains images stored in repositories, within which you can work with image libraries to build your containers. Repositories contain:
  o Images.
  o Metadata about those images.
  o Layers.
• Organizations and teams. Enables you to create organizations or work groups where you can collaborate.
• Automated builds. Help automate the build and update of images from GitHub or Bitbucket, directly on Docker Hub.
• Webhooks. Attach to your repositories and enable you to trigger an event or action when an image or updated image successfully pushes to the repository.
• GitHub and Bitbucket integration. Enables you to add the Docker Hub and your Docker images to your current workflows.
Docker with Azure
Docker automates the deployment of your app as a portable, self-sufficient container that can run nearly anywhere, including in cloud platform as a service (PaaS) solutions. Because Docker containers are smaller than VMs, more of them can run on a single host. Additionally:
• They start up more quickly.
• They're considerably more portable.
These characteristics make Docker apps ideal for a PaaS offering, such as Azure, with which you have the flexibility to deploy Docker using one of several scenarios. You can:
• Use the Docker Machine Azure driver to deploy Docker hosts within Azure (see the sketch after this list). Docker uses Linux containers rather than VMs to isolate app data and computing on shared resources. A common scenario for this approach is when you need to prototype an app quickly.
• Use the Azure Docker VM extension for template deployments. Enables you to integrate with Azure Resource Manager template deployments and includes all the related benefits, such as role-based access, diagnostics, and post-deployment configuration.
Tip: The Azure Docker VM extension installs and configures the Docker client, the Docker daemon, and Docker Compose in your Linux VM.
• Deploy an Azure Container Service cluster. Provides rapid deployment of container clustering and orchestration solutions. By using the Azure Container Service, you can deploy clusters with Azure Resource Manager templates or the Azure portal.
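As a sketch of the first scenario, a Docker Machine command might look like the following; the subscription ID and host name are placeholders, and the exact driver flags depend on your Docker Machine version:
docker-machine create --driver azure --azure-subscription-id <subscription-id> dockerhost01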
Demonstration: Deploy containers by using Docker
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lab 7: Install and configure containers
Please refer to our online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions.
1. You download a container base image. When you attempt to create and run a container using
the base image, you get an error message that relates to incompatibility with the host
machine. What should you do?
2. How does a Hyper-V container differ from a Windows container?
3. When configuring Windows Server containers, what Windows PowerShell cmdlet do you use to
create a container and what is the equivalent Docker command?
Note: To find the answers, refer to the Knowledge check slides in the accompanying
Microsoft PowerPoint presentation.
Module 7: Overview of high
availability and disaster recovery
This module provides an overview of high availability and high availability with failover clustering in
Windows Server. It further explains how to plan high availability and disaster recovery solutions
with Hyper-V virtual machines (VMs). Additionally, this module explains how to back up and restore
the Windows Server operating system (OS) and data by using Windows Server Backup.
After completing this module, you should be able to:
• Describe levels of availability.
• Plan for high availability and disaster recovery solutions with Hyper-V VMs.
• Describe Network Load Balancing (NLB).
• Back up and restore data by using Windows Server Backup.
• Describe high availability with failover clustering in Windows Server.
Lesson 1: Define levels of availability
By implementing high-availability solutions, you can help to ensure that your computing-infrastructure systems can survive the failure of one server or even multiple servers. If an app
must be highly available, you must consider more than just the app’s components. Its supporting
infrastructure and services must be highly available, as well. In this lesson, you’ll learn about
different high-availability solutions and how to plan for them.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe high availability, continuous availability, and business continuity.
• Create a disaster recovery plan.
• Describe highly available networking.
• Describe highly available storage.
• Describe highly available compute or hardware functions.
What is high availability?
High availability is an approach that seeks to ensure IT systems and apps remain operational
despite the failure of one or more components. An important goal is to design a solution that removes all single points of failure. To reach this goal, IT staff must examine various elements
of their underlying IT infrastructure and determine whether they contain single points of failure,
and whether and how they can mitigate those failure points.
The following list describes components and elements of an IT solution that you might investigate during planning:
• Datacenter infrastructure. Your datacenter houses your servers and related infrastructure, so it's important that the supply of power is reliable and, where necessary, has a suitable backup supply. Additionally, cooling must be sufficient and reliable to help ensure servers don't go offline due to excessive heat.
• Server hardware. Server components, especially those with moving parts, must have redundancy, and you should also select error-correction code (ECC) memory to help resolve minor memory errors.
• Storage. Consider implementing fault-tolerant disk configurations, such as Redundant Array of Independent Disks (RAID). Ensure that no single disk failure can cause an app failure.
• Network infrastructure. In your local area network (LAN), select redundant components, such as switches, so that the network remains highly available. Where necessary, add additional network interface cards (NICs) to the servers. If your organization supports multiple locations over a wide area network (WAN), redundancy is often the responsibility of your service provider. However, ensure this is true.
• Internet connectivity. You might need to subscribe to multiple internet service providers to help guarantee availability of services delivered over the internet. Many internet-facing routers support a capability that enables switching between providers.
• Network services. Ensure that infrastructure services, such as Active Directory, Domain Name System (DNS) name resolution, and Dynamic Host Configuration Protocol (DHCP), are sufficiently highly available. Generally, this is fairly straightforward because all these services can be provided by multiple servers simultaneously.
What is continuous availability?
Continuous availability is a design goal that seeks to provide continuous availability of apps and
services not only during failover scenarios but also during planned downtime. To achieve this goal,
you must collect data from the following sources:
• Business-impact analysis. Identifies your organization's critical business processes and the loss or damage arising from their failure or disruption.
• Risk analysis. Identifies risks and the likelihood of them occurring. Also identifies single points of failure.
There are several strategies you can use to help you implement continuous availability, and each
varies based on the apps you’re assessing. Continuous availability seeks to ensure access to data
and services during server maintenance, so you must involve specific administration staff for each
app in your assessment, such as the Exchange Server or SQL Server administration teams.
What is business continuity?
Business continuity seeks to ensure that your business can continue to operate in the event of
failures in your IT infrastructure and related services. There are several important requirements
you should consider when planning for business continuity, including the:
• Service level agreements (SLAs) for your IT infrastructure, including both software and hardware.
• Contact information and technical skills of all available IT administrators.
• Provisioning of a secondary site from which mission-critical apps and data can be accessed.
• Possible workaround solutions for problems you might encounter.
• Maximum duration of an outage for your mission-critical apps.
It’s important to involve not only IT staff in these discussions but also business managers, who can
provide information about the impact on the business of unavailability, while IT staff can seek
remediations for those periods of unavailability.
To plan your strategies for implementing business continuity, you should collect data from:
• Business-impact analysis.
• Risk analysis.
You can implement one or more of the following technologies to help achieve business continuity:
• NLB.
• Failover clustering on physical servers or VMs.
• Application-aware high availability.
• Conventional data backups.
• Online backups.
• VM backups.
Create a disaster-recovery plan
Although much of this lesson identifies ways in which you can seek to avoid failures, ultimately,
you must prepare a disaster-recovery plan. This provides guidance in scenarios when failures have
occurred.
Develop a recovery plan
There are several factors to consider when you develop a recovery plan, including:
• What data should you recover? Typically, you'll want to recover everything. However, you might choose to perform only a partial recovery to ensure you meet business-continuity objectives. You might then perform a full recovery later. This approach is beneficial when you have large volumes of data, not all of which is mission critical.
• Where should you locate recovered data? Recovery is simplest if you have replacement hardware, such as storage disks or a spare server chassis. However, with many if not most workloads virtualized these days, it's not always necessary to wait for replacement hardware. For example, you could shift the workload to an alternate Hyper-V host.
• When should you perform the recovery? Ideally, this would occur immediately. However, that might not always be practical, especially for small branch offices. If a server fails in a branch office, and it will take 48 hours to ship a replacement, you might need to seek an alternative remedy during that period.
Test and review your recovery plan
It’s important that you test your recovery plan. This helps to identify problems with your plan
without impacting your business. You can then seek to design around those problems to help
ensure a more efficient recovery when a genuine failure occurs.
Your IT infrastructure changes constantly. For example, new apps might be introduced. Therefore,
you should review your recovery plan when changes occur to ensure it’s still valid.
Define an SLA
Your organization’s SLA should describe the responsibilities of your IT department or your IT
service provider in relation to your organization's business-critical IT solutions and data. Focus on:
• Availability
• Performance
• Protection
Additionally, your SLA might define how quickly a provider must restore your services after a failure, and it should include the following elements:
• Hours of operation. Defines when the data and services should be available to your users and should define how much planned downtime is permitted.
• Service availability. Defines, as a percentage of time, the expected uptime of your apps and services. For example, 99.9 percent service availability permits downtime of no more than 0.1 percent per year, which works out to roughly 8.8 hours (0.001 x 8,760 hours in a year).
• Recovery point objective (RPO). Defines a limit on how much data you can afford to lose because of a failure. If your organization sets an RPO of six hours, you must perform a backup every six hours. This ensures that you can recover the data that your RPO defines. If your RPO is measured in minutes, and you have a large amount of data, you might need to be creative in planning your recovery. A realistic RPO must balance your desired recovery time with your network infrastructure's realities.
• Recovery time objective (RTO). Defines the time it takes to recover from failure. This varies based on the type of failure. For example, a hard-disk failure in a server has a different RTO than a motherboard failure in a critical server.
• Retention objectives. Measures the length of time to store backed-up data. You might need to recover data from last month quickly. However, you might need to retain several years of data for compliance reasons. Archived data might be slower to recover due to how it's stored, and your SLA should recognize this.
• System performance. Defines acceptable throughput in your SLA for critical apps. If the systems supporting a critical app are working, but slowly, they might not meet your users' needs. You should consider this factor and include it in your SLA.
Highly available networking
Because your apps are delivered across your network infrastructure, it’s vital that your network is
highly available. Planning for network high availability should include:

Network adapters. Implement multiple network adapters. This not only provides high
availability but also improvements in performance throughput, depending on the configuration.

Multipath input/output (MPIO) software. When implemented with multiple host adapters, this
provides alternate paths to your storage devices. This provides the highest level of redundancy
and availability.

LANs. Ensure that none of your network switches, routers, and wireless access points is a
single point of failure.

WANs. Ensure multiple paths exist between your offices over your WAN connections.

Internet connectivity. Consider selecting multiple service providers to help protect against
failures in services provided by one provider.
Highly available storage
Because apps store their data on storage devices, it’s important that you plan mitigations for
storage failure. You can choose between several technologies, including:
• RAID. Provides fault tolerance by using additional disks so that storage remains available when one, or perhaps more, disks fail. RAID uses two options for enabling fault tolerance:
  o Disk mirroring. Copies all data written to one disk to another disk.
  o Parity information. Uses a collection of disks and writes data across all disks in the collection, and then calculates parity information, which is also written across the disks. The parity information can be used to determine missing information if a disk fails in the collection.
• Direct-attached storage (DAS). Almost all servers provide some built-in storage, or DAS. You should consider using RAID technology to provide high availability for DAS.
• Network-attached storage (NAS). NAS is accessible from a storage appliance connected to the network. You can configure storage in NAS devices to implement high availability with RAID arrays.
• Storage area network (SAN). Storage implemented over a high-performance network. You configure SAN storage in highly available RAID arrays.
• Cloud storage services. Provides organizations with storage that's already configured with high availability delivered by the cloud provider.
Highly available compute or hardware functions
Windows Server has numerous high availability solutions for different types of apps, including:
• Failover Clustering. Enables a collection of independent servers to work together to increase the availability of apps and core services.
• NLB. Enables a collection of independent servers to distribute client requests between the servers where NLB is running.
• RAID. Is built into Windows Server and provides support for both RAID 1 and RAID 5.
Lesson 2: Plan high availability and disaster
recovery with Hyper-V VMs
Hyper-V VMs are one of the most common and most critical workloads that run on Windows
Server. Consequently, it’s important to think about high availability for VMs that run on Hyper-V.
The Hyper-V role supports failover clusters, but there are also other ways to achieve a highly
available virtual environment.
Hyper-V Replica creates and then continuously updates exact copies of VMs on different
virtualization fabrics, usually in a geographically remote location, called a secondary or disaster
site. It does so without impacting any running VMs and over slower links, usually a WAN.
Replicated VMs in a secondary site can’t be turned on during replication. In a normal situation,
VMs in the primary location are running, with Hyper-V replicas constantly replicating their updates
to the secondary site. At any given time, VMs in the secondary site are almost an identical copy
(the replication delay means they’re not truly identical). Hyper-V Replica uses transactional
replication, which ensures that updates aren’t lost during replication. If a disaster occurs at the
primary location, you can trigger failover and start running VMs at the secondary site.
In this lesson, you’ll learn about techniques and technologies for highly available VMs. You’ll also
learn about Hyper-V Replica.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe high availability considerations with Hyper-V VMs.
• Describe live migration and Storage Migration.
• Describe, plan, and implement Hyper-V Replica.
High availability considerations with Hyper-V VMs
To establish high availability to VMs, you can choose between several options. Because failover
clustering supports the Hyper-V role, you can implement VMs as a clustered role. That scenario is
called host clustering. Also, you can implement failover clustering inside VMs just as you would do
with physical hosts. That scenario is called guest clustering.
Host clustering
As its name implies, with host clustering you configure a failover cluster with Hyper-V host servers
as cluster nodes. In this scenario, you configure the VM as a highly available resource in a failover
cluster. Failover protection is then achieved at the host-server (Hyper-V server) level. As a result,
VMs and applications that are running within the VMs don’t have to be cluster aware. A VM, with all
apps, becomes highly available, but you don’t implement any high availability technology inside
VMs. The benefit of this approach is that you don’t have to worry if a critical app running inside a
VM is supported by failover clustering or not.
For example, a print server role is a non-cluster aware application. In the case of a cluster node
failure, which in this case is the Hyper-V host, the secondary host node takes control and restarts
or resumes the VM as quickly as possible. You can also move the VM from one node in the cluster to another in a controlled manner. For example, you could move the VM from one node to another while patching the host's management OS.
In a host clustering deployment, cluster nodes are usually connected to a shared storage where
VM files are located. Only one node (Hyper-V host) controls a VM, but other nodes in a cluster can
take over ownership and control very quickly in the case of failure. A VM in such a cluster usually
experiences minimal to zero downtime.
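For example, on a Hyper-V failover cluster, you can typically make an existing VM highly available by using the Failover Clustering Windows PowerShell module; the VM name is a placeholder:
# Configure an existing VM as a highly available clustered role
Add-ClusterVirtualMachineRole -VMName LOB-VM01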
Guest clustering
Guest failover clustering is implemented between VMs running on a single host or on different hosts. This scenario is configured similarly to physical-server failover clustering, except that the cluster nodes
are VMs. In this scenario, after you create two or more VMs, you enable failover clustering and
configure these VMs as cluster nodes. After that, you configure the required server role as a
clustered role. When deploying guest clustering, you can locate the VMs that are part of a cluster
on a single Hyper-V host. This configuration can be quick and cost-effective in a test or staging
environment, but you need to be aware that in such a scenario the Hyper-V host becomes a single
point of failure. Even if you deploy failover clustering between two or more VMs, if the Hyper-V host
where VMs run fails, then all VMs will also fail.
Because of this, for production environments, you should provide an additional layer of protection
for applications or services that need to be highly available. You can achieve this by deploying the
VMs on separate failover clustering-enabled Hyper-V host computers. When you implement failover
clustering at both the host and VM levels, the resource can restart regardless of whether the node
that fails is a VM or a host. It’s considered an optimal high-availability configuration for VMs
running mission-critical applications in a production environment.
You should consider several factors when you implement guest clustering:
• The application or service must be failover cluster aware. This includes any of the Windows Server roles that are cluster-aware, and any applications, such as clustered Microsoft SQL Server and Microsoft Exchange Server.
• Hyper-V VMs can use Fibre Channel-based connections to shared storage. Alternatively, you can implement Internet Small Computer System Interface (iSCSI) connections from the VMs to the shared storage. You can also use the shared virtual hard disk (VHD) feature to provide shared storage for VMs.
To enable protection at the network layer, you should deploy multiple network adapters on the host computers and the VMs. You should also dedicate a private network between the hosts, in addition to the network connection that the client computers use.
Overview of live migration
Windows Server 2022 Hyper-V allows you to move VMs between physical Hyper-V nodes without
the need to shut down the VMs. This process is called live migration, and it can be performed in a
cluster or non-cluster environment. When used within a failover cluster, live migration enables you
to move running VMs from one failover cluster node to another node. If used without a cluster, live
migration performs as a Storage Migration and is called shared-nothing live migration. With live
migration, users who are connected to the VM shouldn’t experience any server outages.
Note: When using the term live migration, we presume that failover clustering on a Hyper-V
host level is implemented. Although live migration can be performed without having a
failover cluster environment, in cases without storage, it’s actually Storage Migration, as this
lesson describes later.
One more example for VM migration without clustering is when you configure a VM so that
it’s stored on an SMB file share. In this deployment, you can run a live migration on the
running VM between non-clustered servers running Hyper-V, while the VM’s storage remains
on the central SMB share.
You can manage and initiate a Hyper-V migration by using Hyper-V settings in Hyper-V Manager.
However, you can’t use Hyper-V Manager to perform a live migration of a clustered (highly
available) VM. You should use the Failover Cluster Manager console, Virtual Machine Manager,
or Windows PowerShell.
You can also choose the authentication protocol that hosts will use for this process. The default
selection is Credential Security Support Provider (CredSSP), but you can also use Kerberos
authentication. When choosing which authentication protocol to use, consider the following:

CredSSP is the default configuration and it's easy to configure; however, it's less secure than Kerberos authentication. To live-migrate VMs with CredSSP, you must sign in to the source server through a local console session, a Remote Desktop session, or a remote Windows PowerShell session.

Kerberos authentication is more secure, but it requires manual selection and constrained
delegation configuration for each Hyper-V host. On the other hand, it doesn’t require signing
into the Hyper-V host server to perform live migration.
It’s important that you understand what happens in the background when a VM is moved from one
Hyper-V host to another. Although it might seem simple when you perform it, there are a lot of
things happening to maintain VM stability and data consistency, and to avoid downtime.
The live migration process includes the following steps:
1. Migration setup. When you start the live migration of the VM, the source Hyper-V cluster node creates
a Transmission Control Protocol (TCP) connection to the target Hyper-V node (the one that
should accept a VM being moved). This connection is used to transfer the VM configuration
data to the target Hyper-V host. Live migration creates a temporary VM on the target Hyper-V
host and allocates memory to the destination VM. The migration preparation also checks to
determine whether a VM can be migrated.
2. Guest-memory transfer. The guest memory is transferred iteratively to the target host while the
VM is still running on the source host. Hyper-V on the source physical host monitors the pages
in the working set. As the system modifies memory pages, it tracks and marks them as being
modified. During this phase, the migrating VM continues to run. Hyper-V iterates the memory
copy process several times, and each time, a smaller number of modified pages are copied to
the destination physical computer. A final memory-copy process copies the remaining modified
memory pages to the destination physical host. Copying stops as soon as the number of dirty
pages is less than a certain threshold, or after 10 iterations are complete.
3. State transfer. To migrate the VM to the target host, Hyper-V stops the source VM, transfers the
state of the VM, including the remaining dirty memory pages, to the target host, and then
restores the VM on the target Hyper-V host. The VM must be paused during the final state
transfer, but this usually takes very little time.
4. Cleanup. The cleanup stage finishes the migration by tearing down the VM on the source host,
terminating the worker threads, and signaling the completion of the migration.
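As a sketch for non-clustered Hyper-V hosts, you can typically enable and perform live migration with the Hyper-V Windows PowerShell module; the host and VM names are placeholders:
# On both hosts: allow incoming and outgoing live migrations
Enable-VMMigration
# Optionally switch from the default CredSSP to Kerberos authentication
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
# Move a running VM to the destination host
Move-VM -Name LOB-VM01 -DestinationHost HV-HOST02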
Live migration requirements
As you learned earlier, there are several ways to implement VM migration, both with and without
a clustered environment. However, there are some common requirements for all types of live
migrations, along with requirements for VMs deployed in a cluster and outside cluster.
Common requirements include:
• Hosts should support hardware virtualization.
• You should use processors from the same manufacturer in each VM host.
• Each Hyper-V host should belong either to the same Active Directory domain or to domains that trust each other.
• You should configure VMs to use VHDs or virtual Fibre Channel disks (no physical disks).
• You should use an isolated network, physically or through another networking technology such as virtual local area networks (VLANs), which is recommended for live migration network traffic.
If you deploy a VM as a clustered role, you need to ensure that the cluster is using Cluster Shared Volume (CSV) storage and that all Hyper-V hosts have similar hardware and software configurations.
If you're using file share storage for VMs, you should make sure that all VM files are stored on an SMB share that's been configured to grant access to the computer accounts of all servers that are running Hyper-V.
If you decide to use Hyper-V hosts outside (or without) a clustered environment, you’ll have
different requirements, including:
• A user account with the necessary permissions to perform migration-related steps. This means that you must have membership in the local Hyper-V Administrators group or the Administrators group on both the source and destination Hyper-V hosts unless you're configuring constrained delegation. Constrained delegation requires membership in the Domain Administrators group.
• Source and destination Hyper-V hosts that either belong to the same Active Directory domain or to domains that trust each other.
• The Hyper-V management tools installed on a Windows Server or Windows 10 or newer computer, unless the tools are installed on the source or destination server from which you'll run them.
Demonstration: Configure live migration (optional)
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Provide high availability with Storage Migration
There are many cases in which an administrator might want to move the VM files to another
location. For example, if the disk where a VM hard disk resides runs out of space, you must move
the VM to another drive or volume. Moving a VM to another host is a common procedure.
In older versions of Windows Server, moving a VM resulted in downtime because the VM had to be
turned off, and you had to perform export and import operations for that specific machine. Export
operations can be time-consuming, depending on the size of the VM hard disks.
VM and Storage Migration enables you to move a VM to another location on the same host or on
another host computer without turning off the VM.
It’s important to understand that the Storage Migration process is more similar to copy, paste, and
delete rather than moving. To copy a VHD, an administrator starts live Storage Migration by using
the Hyper-V console or Windows PowerShell, and completes the Storage Migration Wizard, or
specifies parameters in Windows PowerShell. A new VHD is created on the destination location,
and the copy process starts. During the migration process, the VM is fully functional, although you
might notice a temporary decrease in performance. This is because all changes that occur during
copying are written to both the source and destination locations. Read operations are performed
only from the source location.
When the disk copy process is complete, Hyper-V switches VMs to run on the destination VHD. In
addition, if the VM is moved to another host, the computer configuration is copied, and the VM is
associated with another host. If a failure were to occur on the destination side, you always have a fail-back option to run the VM from the source location. After the VM is migrated to and associated with a
new location successfully, the process deletes the source VHD/VHDX files and VM configuration.
The required time to move a VM depends on the source and destination location, the speed of
hard disks or storage, and the size of the VHDs. The moving process is accelerated if the source and destination locations are on storage that supports Offloaded Data Transfer (ODX).
When you move a VM's VHDs/VHDXs and configuration files to another location, a wizard presents three available options:
• Move all the VM's data to a single location. You specify one single destination location for all VM items, such as the disk files, configuration, checkpoints, and smart paging.
• Move the VM's data to different locations. You specify individual locations for each VM item.
• Move only the VM's VHD. You move only the VHD file.
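A minimal Windows PowerShell sketch of the first option; the VM name and destination path are placeholders:
# Move all of a running VM's files to a single new location
Move-VMStorage -VMName LOB-VM01 -DestinationStoragePath D:\VMs\LOB-VM01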
Demonstration: Configure Storage Migration (optional)
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Overview of Hyper-V Replica
Today, most Windows Server instances are deployed as VMs that run on a virtualization fabric. In most cases, the virtualization fabric, usually a Hyper-V failover cluster, is limited to one location. When you use
features such as Live Migration, you can move VMs between locations, but only while both
locations and the link between them are available. If disaster occurs, such as a fire, a location
and everything running on it might become unavailable. The only solution to avoid complete data
loss and extensive outage is to make a plan to ensure that all important services, including VMs,
replicate between the locations. Hyper-V Replica can help you with that, as it’s a business
continuity and disaster recovery (BCDR) solution, which replicates VMs between locations. If a
disaster happens, you perform the failover and continue running VMs on the secondary location.
Hyper-V Replica replicates VMs to a secondary Hyper-V host and optionally to a tertiary host. While
VMs on the primary Hyper-V host are running, those on the secondary Hyper-V host must remain
turned off while Hyper-V replication happens to ensure that changes occur only on the primary
location, where they’re written to the log file, compressed, and periodically replicated and applied
to the secondary VM. The compressed replication is efficient and happens asynchronously, without affecting the running primary VM. You can configure the replication interval to 30 seconds, 5 minutes, or 15 minutes. Figure 18 depicts the replication process between sites:
Figure 18: Hyper-V Replica diagram
Hyper-V Replica is a storage- and workload-agnostic solution that can replicate VMs and virtual
disks from, and to, any supported storage, regardless of whether it’s DAS, NAS, or a SAN. For
example, you can store your primary VM on failover cluster SAN storage, and then store your
replica on a locally attached solid-state drive (SSD). Hyper-V Replica doesn’t care about, and isn’t
aware of, workloads that are running in the VM it’s replicating.
Note: Hyper-V Replica can replicate only virtual disks. Pass-through disks aren’t supported,
and Hyper-V Replica can’t replicate them.
Hyper-V Replica doesn’t perform automatic failover if disaster or failure occurs. In this case,
replication stops, but VMs at the secondary location remain turned off. When necessary, you
can perform failover manually, which stops the replication and ensures that you can start the
secondary location's VMs. If the VMs are replicating to Azure, you can automate the process by using Azure Site Recovery.
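As a sketch, assuming Kerberos authentication and placeholder host and VM names:
# On the secondary host: allow it to receive replication traffic
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos
# On the primary host: enable replication of a VM to the secondary host
Enable-VMReplication -VMName LOB-VM01 -ReplicaServerName HV-HOST02 -ReplicaServerPort 80 -AuthenticationType Kerberos
# In a disaster: perform a manual failover on the secondary host
Start-VMFailover -VMName LOB-VM01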
Prerequisites for Hyper-V Replica
The Hyper-V server role includes Hyper-V Replica, which doesn’t have many prerequisites.
Depending on your infrastructure, you should consider that:

Hyper-V Replica requires Windows Server. Hyper-V hypervisor is included in Windows Client
and Windows Server, but advanced features, such as Live Migration and Hyper-V Replica, are
available only on Windows Server. Hyper-V Replica is available in all Windows Server editions
(Standard and Datacenter), in Server Core, and in Desktop Experience.
210
20740 Installation, Storage, and Compute with Windows Server

You can establish Hyper-V Replica between any two Hyper-V hosts. Those hosts can be in the
same Active Directory domain, different domains in the same forest, or in different forests. If
hosts are in the same forest, you can use either Kerberos or certificate-based authentication.
If the hosts are in nontrusting domains, you must use certificate-based authentication. By
default, Hyper-V can replicate VMs between any two Hyper-V hosts. However, if necessary, you
can configure the secondary Hyper-V host to accept replication only from specific hosts.

Hyper-V hosts must have sufficient resources. This is always true, but Hyper-V Replica
needs additional storage for log files on the primary Hyper-V host and replicated VMs on
the secondary host. Replicated VMs are turned off, so you don’t need additional compute
resources on the secondary host while replication occurs. However, when you perform a
failover and want to start replicated VMs, you’ll need additional compute power on the
secondary host.
Important: For replicated VMs, all changes that happen in the VM are first written to a log
on the primary Hyper-V host. After the log replicates, it’s automatically deleted. However, if
network connectivity is interrupted and the log can't replicate, the log can consume all available
space on the primary host. Therefore, it's important to always monitor the available disk
space on the primary Hyper-V host.

Hyper-V replication requires network connectivity. Replication always occurs between two Hyper-V
hosts, which must be able to reach each other over the network. Hyper-V Replica can replicate
across almost any network connection, even one with short interruptions. Hyper-V replication is
asynchronous and can compress changes before the replication. If network connectivity is
interrupted, Hyper-V Replica stops replication and then continues, without any data loss, after
connectivity is restored.

Windows Defender Firewall must allow incoming Hyper-V Replica traffic. When you install the
Hyper-V role, Windows Defender Firewall rules for Hyper-V replication are automatically added,
but they're turned off. If you want to use Hyper-V Replica, you must turn on the Hyper-V Replica
listener inbound rules: HTTP if you're using Kerberos authentication, and HTTPS if you're using
certificate-based authentication. You can turn on the rule(s) in the Windows Defender Firewall
with Advanced Security console or from PowerShell, as the sketch after this list shows.
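A minimal PowerShell sketch of enabling the listener rules follows. The display names match the rules described above, but verify them on your system, because they can vary between Windows versions:

# Kerberos (HTTP) authentication:
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTP Listener (TCP-In)'

# Certificate-based (HTTPS) authentication:
Enable-NetFirewallRule -DisplayName 'Hyper-V Replica HTTPS Listener (TCP-In)'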
211
20740 Installation, Storage, and Compute with Windows Server
Plan for Hyper-V Replica
You can implement Hyper-V Replica between two virtualization fabrics, regardless of whether you
implement the virtualization fabric as a standalone Hyper-V host or a Hyper-V cluster with multiple
Hyper-V nodes. Apart from storage and compute capacity, which is transparent to the replica, the
only difference is that, in a failover cluster, you must add the Hyper-V Replica Broker
cluster role.
Hyper-V Replica scenarios include:

Both virtualization fabrics are standalone Hyper-V hosts. We don’t recommend this scenario in
a production environment because the virtualization fabric should provide high availability.
Replicating VMs between standalone Hyper-V hosts has the fewest requirements and is the
easiest to implement.

The primary virtualization fabric is a failover cluster, and the secondary is a standalone Hyper-V
host. This is a common implementation. The primary fabric runs production VMs, which
are highly available because they run on a failover cluster. While the VMs are running, their copies
are constantly updated on the secondary fabric, a standalone Hyper-V host. This fabric is less
powerful and doesn't provide high availability. However, the VMs aren't running there. Rather,
they're just stored there, essentially as an offsite backup. If disaster happens at the primary site,
you can fail over and start critical VMs on the secondary fabric or wait and then replicate the
VMs to the new hardware in the recovered primary location.

The primary and secondary virtualization fabrics are both Hyper-V failover clusters. This scenario
is similar to the previous one, but in this case, the secondary virtualization fabric is highly
available and has considerable storage and compute power. In this scenario, if the primary
location fails, you can fail over and run VMs on the secondary location. This can last for days,
and after the primary location is recovered, you can reverse the replication and start running
VMs on the primary location again.

The primary virtualization fabric is a standalone Hyper-V host, and the secondary is a failover
cluster. This scenario is rare because the primary location doesn’t provide high availability.
However, sometimes, such an implementation makes sense, such as if the primary location is
a small branch office and the secondary location is the main office to which VMs are replicated
from many branch offices. Another example is a small company replicating its VMs to
its internet service provider (ISP) and treating this as its offsite backup.
Figure 19 depicts these scenarios:
Figure 19: Scenarios in which you can implement Hyper-V Replica functionality
Hyper-V replication settings
There are two levels at which you must configure Hyper-V replication. First, you must allow and
configure the replication at the Hyper-V host level, on the server to which you want to replicate
VMs. Second, you must configure the replication on each VM on the primary (or source) Hyper-V
host. You can replicate multiple VMs to the same or to a different Hyper-V host, but each VM
replicates independently.
You can configure replication on a VM by using Windows PowerShell (the Enable-VMReplication
cmdlet) or the Enable Replication Wizard in Hyper-V Manager. To use the wizard, right-click a VM
in Hyper-V Manager (or open its context menu), select Enable Replication, and then configure the
following settings (a PowerShell sketch follows this list):

Replica Server. This is the name or fully qualified domain name (FQDN) of the Hyper-V host that
will host the secondary copy of the VM. You can't enter an IP address, and the name you enter
must be resolvable. If the Hyper-V host that you specify doesn't allow incoming
Hyper-V replication, you can update the configuration in this step. If the Replica server is a
node in a failover cluster, enter the name or FQDN of the connection point for the Hyper-V
Replica Broker.

Connection Parameters. If the Replica server can be contacted, the authentication type and
replica server port are already provided and can’t be modified. If the Replica server is
inaccessible, you can configure these values manually. However, you should be aware that
you won't be able to enable the replication if you can't create a connection to the Replica
server. On this wizard page, you can also specify that Hyper-V can compress replication data
before transmitting it over a network, which can considerably decrease replication traffic.

Replication VHDs. When you enable replication for a VM, by default, replication includes all
VHDs of that VM. If some of the VHDs aren’t necessary in the secondary location, you can
exclude them from replication. We recommend that the VM have a dedicated VHD for storing
the page file, which you exclude from the replication. If you exclude the VHD that contains the
OS, you won't be able to start the VM at the secondary location, but copies of the other VHDs will be stored on the
Replica server.

Replication Frequency. You configure how often changes in the VM replicate to the Replica
server. You can select one of three values: 30 seconds, 5 minutes, or 15 minutes. A shorter
interval means that changes replicate more often, which results in more network traffic but
also in a smaller state delay between the primary and secondary VM. If a disaster occurs at the
primary site, a shorter replication interval means less data loss, because fewer changes remain
unreplicated to the secondary site. After the replication is established, you can use View
Replication Health in Hyper-V Manager to monitor the delay between the primary and
secondary VMs.

Additional recovery points. By default, the replica VM maintains only the latest recovery
point. Changes replicate from the primary VM and then are applied at the replica based on
replication frequency. After changes are applied, the previous state is no longer available. For
example, let's imagine that the VM includes a file named File1, which is also in the replica. At
some point, you mistakenly delete File1 in the primary VM. Information about the deletion
quickly replicates, and File1 is deleted in the replica. If you configure the creation of additional
hourly recovery points at the replica, you could revert the replica to one of the earlier states,
when File1 wasn’t yet deleted, and then restore the file.

Initial replication method and initial schedule. Before Hyper-V Replica can replicate changes,
an exact copy of the VM must exist at the Replica server. Initial replication ensures this, and
you can perform it immediately over the network. However, this can cause many gigabytes of
network traffic, because VMs have large virtual disks. You can instead export the VM and then
import it at the Replica server, or use an already restored copy of the VM at the Replica server.
You can also schedule the initial replication to happen over the network, but outside business
hours, when network utilization is lower.

Extended replication. The option to extend replication is available only on the replica VM. This
enables you to replicate the secondary VM to a third location. If you use this option, be aware
that the primary VM still replicates only to the secondary one, but the secondary VM then
replicates to the tertiary VM. You can extend replication by using the same Enable Replication
Wizard. The only difference is that the replication frequency of 30 seconds isn’t available, and
you can’t use VSS for creating additional recovery points.
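The following is a minimal PowerShell sketch of the same configuration; the VM name and Replica server are placeholders, and the parameter values mirror the wizard settings described above:

$params = @{
    VMName                  = 'VM1'
    ReplicaServerName       = 'HV2.contoso.com'   # or the Hyper-V Replica Broker FQDN
    ReplicaServerPort       = 80                  # 80 for Kerberos (HTTP), 443 for certificates (HTTPS)
    AuthenticationType      = 'Kerberos'
    CompressionEnabled      = $true
    ReplicationFrequencySec = 300                 # 30, 300, or 900 seconds
    RecoveryHistory         = 4                   # maintain four additional recovery points
}
Enable-VMReplication @params

# Perform the initial replication over the network:
Start-VMInitialReplication -VMName 'VM1'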
Figure 20 depicts the Enable Replication Wizard’s summary page:
Figure 20: Enabling Hyper-V replication for a VM
Additional settings on the replicated VM
After you enable replication on the VM, there are several additional settings:

Failover TCP/IP. When you initiate failover, the secondary VM is started in the secondary site,
which could be connected to a different network and use a different IP address space. If the
OS in the VM is configured to obtain IP settings dynamically, it’s assumed that it’ll obtain
correct TCP/IP settings from the DHCP server in the secondary site. However, if it’s configured
with static TCP/IP settings, you can configure which TCP/IP settings are used during failover, as the sketch after this list shows.

Test Failover. This setting is available only on the replica and defines which virtual switch the
replica is connected to during a test failover. The setting is used only during a test failover. By
default, the VM isn't connected to any virtual switch.

Replication. These settings can be edited on the primary VM and are read-only on the replica.
You can review or update settings that were configured when you initially set up the replication.
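A hedged sketch of configuring the Failover TCP/IP setting with the Set-VMNetworkAdapterFailoverConfiguration cmdlet follows; the addresses are placeholders for your secondary site:

Set-VMNetworkAdapterFailoverConfiguration -VMName 'VM1' `
    -IPv4Address 10.20.0.15 -IPv4SubnetMask 255.255.255.0 `
    -IPv4DefaultGateway 10.20.0.1 -IPv4PreferredDNSServer 10.20.0.2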
Figure 21 depicts these settings:
Figure 21: Additional settings on the replicated VM
Implement Hyper-V Replica
Hyper-V Replica is part of the Hyper-V role on Windows Server, so you don’t need to install any
additional features. You can implement Hyper-V Replica on a standalone Hyper-V host and in a
failover cluster. Hyper-V Replica doesn't have any dependency on Active Directory, and on a
standalone Hyper-V host, it doesn't need any additional role. If you want Hyper-V Replica to
replicate VMs to or from a failover cluster, it depends on the Hyper-V Replica
Broker cluster role, which ensures that replication traffic reaches the cluster node that currently
hosts the replicated VM.
Configure Hyper-V Replica on a Hyper-V host
This is a one-time configuration that you must perform on the Hyper-V host to which you want to
replicate the VM (the replica Hyper-V host, in the secondary location). This configuration isn't
required on the Hyper-V host from which you want to replicate the VM (the primary Hyper-V host,
in the primary location).
Note: When you perform a failover, the secondary Hyper-V host becomes the primary. In this
situation, you probably want to reverse the replication. It's therefore common to configure both
Hyper-V hosts, primary and secondary, to allow incoming Hyper-V replication.
On the secondary Hyper-V host, you must enable the computer as a Replica server, which can be
done in the Hyper-V Manager or by using Windows PowerShell. You also must configure it to allow
incoming Hyper-V Replica traffic in the Windows Defender Firewall. When you enable a computer
as a Replica server, you configure the authentication type and whether the Replica server can
receive replicas from any Hyper-V host or only from specific hosts.
You can enable incoming Hyper-V Replica traffic in the Windows Defender Firewall with Advanced
Security console. Both inbound rules, Hyper-V Replica HTTP Listener (TCP in) and Hyper-V Replica
HTTPS Listener (TCP in), are created when the Hyper-V role is installed. However, they're disabled
by default. Depending on which authentication Hyper-V Replica is using, Kerberos or
certificate-based, you should enable either the HTTP or the HTTPS listener.
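A minimal PowerShell sketch of the same host-level configuration follows; the storage path is a placeholder:

Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replicas'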
Figure 22 depicts the notification you receive after enabling Hyper-V Replica on a Hyper-V host:
Figure 22: Notification to allow inbound Hyper-V Replica replication traffic on the replica
Enable replication on a VM
After you have at least one Replica server configured, you can start enabling replication on
individual VMs. You can enable replication in Hyper-V Manager, by using the Enable Replication
Wizard, or by using Windows PowerShell. In the wizard, you define replication parameters, which
you can later review and modify in a VM’s settings. Those settings are read-only on the replica, but
you can edit them on the primary VM.
Note: You can’t enable Hyper-V Replica by using Windows Admin Center.
As part of the replication configuration, you need to define how the initial replication will occur. If
there's sufficient network bandwidth available, for example, on a 100-gigabit per second (Gbps)
LAN, you can perform the initial replication over the network. Otherwise, you can schedule the
replication to occur during lower network utilization or export the VM and manually import it at the
Replica server. While initial replication over a network happens automatically, importing a VM
requires manual interaction.
Monitor Hyper-V replication
After you enable VM replication, changes in the primary VM are written in two places: to its virtual
disks and to a log file. The log file periodically replicates to the Replica server, where it's applied to
the replica VM's VHDs. Replication Health monitors the replication process and displays important
events in addition to the replication and sync state of the Hyper-V host (a PowerShell equivalent
follows the list below).
Replication Health includes the following information:

Replication State. Indicates whether replication is enabled for a VM.

Replication Mode. Indicates whether you're monitoring replication health on a primary or
replica VM.

Current Primary and Current Replica Server. Identifies which Hyper-V host is running the
primary VM and to which Hyper-V host the VM replicates.

Replication Health. Shows the replication health status, which can be Normal, Warning, or Critical.

Replication statistics. Displays replication statistics since the VM replication started or since
you reset the statistics. Statistics include data such as maximum and average replication
sizes, average replication latency, number of errors encountered, and the number of
successful replication cycles.

Pending replication. Displays how much data still needs to replicate and when the replica was
last synced with the primary VM.
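You can retrieve the same information from PowerShell. A short sketch; the VM name is a placeholder:

# Review replication health and statistics for a VM:
Measure-VMReplication -VMName 'VM1' | Format-List *

# Reset the statistics counters after you review them:
Reset-VMReplicationStatistics -VMName 'VM1'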
Figure 23 depicts these Replication Health settings:
Figure 23: Replication health for the replicating VM
Hyper-V Replica failover options
After initial synchronization finishes, Hyper-V Replica supports three failover options that
you can use in different scenarios (the corresponding PowerShell cmdlets appear after this list):

Test failover. This is the only option that’s nondisruptive and doesn’t result in downtime.
As its name implies, it’s intended to test failover. You can start test failover on the replica,
which creates a new checkpoint, and there’s no interruption in Hyper-V replication. From this
checkpoint, a new VM is automatically created with the same name as the replica, but with
“- Test” appended at the end. The replica itself can't be started, but this new VM can be, and it's
exactly the same as the replica at the moment you triggered the test failover. The test VM
isn't connected to the network by default, to avoid potential conflicts; you can configure
this by using the Test Failover setting on the network adapter of the replica VM. After you finish
testing, you can stop the test failover, which automatically deletes the test VM. If you run a test
failover on a failover cluster, you'll have to remove the test VM manually. Figure 24 depicts the
Test Failover dialog box:
Figure 24: Test Failover creates an additional VM on the Replica server

Planned failover. You can start a planned failover on a primary VM, which must be turned off, to
move it to the replica site, for example, before site maintenance or before an expected disaster
occurs. Because this is a planned event, no data loss should occur, but the VM is unavailable for a
specific period during the failover. During the planned failover, the primary Hyper-V host sends
all VM changes that haven't yet replicated to the Replica server. The VM then fails over to the
Replica server, where it's automatically started. After the planned failover, the VM is running
on the Replica server and doesn't replicate its changes. If you want to set up replication again,
reverse the replication. Figure 25 depicts the Planned Failover dialog box:
Figure 25: Planned failover replicates all updates before the failover occurs

Failover. In the event of a disaster, or when something unexpected happens at the primary
location, you can perform a failover. You can start a failover on the replica, but only if the
primary VM is either unavailable or turned off. A failover is an unplanned event, which has
downtime and can result in data loss, because there could be VM changes in the primary site
that haven't yet replicated. Figure 26 depicts the Failover dialog box:
Figure 26: Failover is unplanned and causes loss of updates that haven't replicated
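Each failover type also maps to PowerShell cmdlets. The following is a hedged sketch; the VM name is a placeholder, and each command must run on the host indicated in the comment:

# Test failover (run on the replica host); creates and starts the temporary '- Test' VM:
Start-VMFailover -VMName 'VM1' -AsTest
Stop-VMFailover -VMName 'VM1'               # ends the test and removes the test VM

# Planned failover:
Start-VMFailover -VMName 'VM1' -Prepare     # run on the primary host; the VM must be off
Start-VMFailover -VMName 'VM1'              # run on the replica host
Complete-VMFailover -VMName 'VM1'           # run on the replica host; commits the failover
Set-VMReplication -VMName 'VM1' -Reverse    # optionally reverse the replication direction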
Configuration options for replication
Besides performing various types of failovers, you can configure several additional replication
options, which include the following settings (a PowerShell sketch follows this list):

Pause Replication. Stops replication for the selected VMs.

Resume Replication. Resumes replication for the selected VMs. This option is only available if
VM replication is paused.

Remove Recovery Points. Available only during a failover. This option deletes all recovery points
(checkpoints) of the replica.

Remove Replication. Stops replication and removes the replication relationship. The replica VM
isn’t deleted, but changes are no longer replicated.
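The same options are available as cmdlets; a brief sketch with a placeholder VM name:

Suspend-VMReplication -VMName 'VM1'    # pause replication
Resume-VMReplication -VMName 'VM1'     # resume paused replication
Remove-VMReplication -VMName 'VM1'     # remove the relationship; the replica VM isn't deleted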
Lesson 3: Network Load Balancing overview
Network Load Balancing (NLB) is a high-availability technology for stateless applications that don’t
require shared storage and aren’t supported with failover clustering. In this lesson, you’ll learn how
to deploy and use NLB with Windows Server.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe NLB.

Describe deployment requirements and configuration options for NLB.
What is Network Load Balancing?
You can use NLB with both physical server computers and virtualized server workloads. When you
deploy NLB in VMs, it works in the same manner that it works with physical hosts. It distributes IP
traffic to multiple instances of a web-based service, such as a web server that’s running on a host
within the NLB cluster. NLB transparently distributes client requests among the hosts, and it
enables the clients to access the cluster by using a virtual host name or a virtual IP address. From
the client computer’s perspective, the cluster is a single server that answers these client requests.
As enterprise traffic increases, you can add another server to the cluster.
Therefore, NLB is an appropriate solution for resources that don’t have to accommodate exclusive
read or write requests. Examples of NLB-appropriate applications include web-based front ends to
database applications or Exchange Server Client Access Servers.
Note: You can’t use NLB to make VMs highly available. You can only use it within VMs for
applications running on VMs, similar to guest clustering.
How does NLB work?
When a client computer establishes a connection to an app running on a node in an NLB cluster, it
addresses the NLB cluster address rather than the node address in the cluster.
Note: The NLB cluster address is a virtual address that hosts in the NLB cluster share.
NLB directs traffic in the following manner:

All hosts receive the incoming traffic. Although all hosts in the NLB cluster receive the incoming
traffic, only one node accepts that traffic. All other nodes in the NLB cluster drop the traffic. An
NLB process determines which node accepts the traffic.

A process in the NLB cluster determines the accepting node. This process is dependent on the
cluster configuration, including defined port rules and affinity settings. By reviewing these rules
and settings, you can determine whether a specific node accepts traffic, or if any node can
accept traffic.
NLB can also direct traffic to nodes based on current node use. For example, it directs new traffic
to the least-used nodes. This helps balance traffic and optimize throughput to and from your
clustered app.
Deployment requirements for NLB
To set up and configure an NLB cluster, your infrastructure must satisfy the following
requirements. Try to ensure that:

Cluster nodes are all deployed in the same subnet. If the latency between nodes exceeds 250
milliseconds, you’re unlikely to achieve optimal performance.

You configure all network adapters in your cluster as either all unicast or all multicast. You
can’t configure an NLB cluster with a mixture of both unicast and multicast adapters.

You use only the TCP/IP protocol. Don’t add any other protocols to an adapter that’s part of an
NLB cluster.
Tip: NLB supports both IPv4 and IPv6.

The IP addresses of servers in an NLB cluster are static. In other words, avoid DHCP-assigned
addresses for cluster nodes.

All server computers:
o Run the same edition of Windows Server.
o Have similar hardware specifications.
Configuration options for NLB
By configuring NLB clusters, you can define how hosts in the cluster respond to incoming network
traffic. For example, NLB directs traffic depending on:

The port and protocol of the traffic.

Whether the client has an existing network session with a host in the cluster.
You can configure these settings with port rules and affinity; a PowerShell sketch at the end of this topic shows both.
Port rules
By implementing port rules, you control the flow of traffic in the NLB cluster. For example, you
could define a port rule that directs TCP port 443 traffic to all nodes in the cluster, while directing
TCP port 25 traffic to a specific node in the cluster. You filter traffic in this way by
using one of two filtering modes:

Multiple hosts. When selected, all NLB nodes respond according to the weight assigned to
each node. Multiple host filtering increases availability and scalability because you can
increase capacity by adding nodes, and the cluster continues to function in the event of
node failure.
Note: Node weight is calculated automatically, based on the performance characteristics
of the host.

Single host. In this mode, the NLB cluster directs traffic to the node that’s assigned the highest
priority. Although single host rules increase availability, they don’t increase scalability.

Disable this port range. When selected, all packets for the defined port range are dropped.
Affinity
Affinity defines how the NLB cluster distributes requests from a particular client.
Tip: Affinity settings only apply to the multiple hosts filtering mode.
The following affinity settings are available:

None. When selected, any cluster node responds to any client request. This mode is
appropriate for stateless apps.

Single. When selected, a single node manages all requests from a specific client. This mode
is useful for stateful applications.

Network. When selected, a single node responds to all requests from a class C network. This
mode is useful for stateful apps where the client accesses the NLB cluster through load-balanced
proxy servers.
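A minimal sketch of creating an NLB cluster and a port rule with the NetworkLoadBalancingClusters PowerShell module; the names, interface, and addresses are placeholders:

Import-Module NetworkLoadBalancingClusters

# Create the cluster on this host, then add a second node:
New-NlbCluster -InterfaceName 'Ethernet' -ClusterName 'web.contoso.com' `
    -ClusterPrimaryIP 10.0.0.50 -OperationMode Multicast
Get-NlbCluster | Add-NlbClusterNode -NewNodeName 'WEB2' -NewNodeInterface 'Ethernet'

# Direct TCP port 443 to all nodes, keeping each client on a single node:
Get-NlbCluster | Add-NlbClusterPortRule -StartPort 443 -EndPort 443 `
    -Protocol TCP -Affinity Single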
Lesson 4: Back up and restore with
Windows Server Backup
Windows Server Backup is a Windows Server app that utilizes VSS for creating a backup. It can
create a backup of a whole volume, folder, or individual files, and a backup of the Windows Server
system state or a running VM on a Hyper-V host.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe Windows Server Backup.

Implement backup and restore by using Windows Server Backup.
Overview of Windows Server Backup
If you need to create a backup copy of a single file, you can simply copy the file to an alternate
location, such as a network share or removable media. However, you might want to create a
backup copy of a folder that contains many files or of a volume that has several terabytes (TBs) of
data, where many files are constantly updated, and some files are exclusively opened most of the
time. While a file is exclusively opened, other processes can’t access or copy the file. Therefore,
you can’t create a consistent backup simply by copying the volume content. It would take a long
time, files would be modified during the copy operation, and some files would potentially be
skipped. Figure 27 depicts why copying files differs from performing a backup:
Figure 27: Copying a large number of files to an alternate location doesn't replace a backup
Volume Shadow Copy Service
To address such challenges and to create a consistent backup, the Windows OS includes VSS,
which creates a consistent and quick snapshot of an entire volume. You then can copy or back
up a snapshot, while the OS and apps can access and modify data on the volume without affecting
snapshot data. A snapshot is read-only and includes all the data that was on the volume when
the snapshot was taken. This could include Windows system files, apps and their data, and
opened files.
Backup and restore operations require coordination between the backup app, the line-of-business
apps that are being backed up, and the storage-management hardware and software. VSS
performs coordination between the following components:

VSS requestor. This is a backup app that requests the creation of a volume snapshot. After the
snapshot is created, it ensures that the content of the snapshot is copied to the backup
destination. Windows Server Backup, Azure Backup agent, and other third-party Windows
backup apps are examples of VSS requestors.

VSS writer. A VSS writer is an app-specific component that communicates with VSS and with
the app, such as SQL Server or Hyper-V. It can read the app configuration and knows where
the app data is written. A VSS writer ensures that all app data and state is flushed from
memory to the disk before a volume snapshot is created.

VSS provider. This component creates a snapshot and maintains the shadow-copy area while
it’s needed. In most cases, you can obtain a VSS provider from your storage vendor. If no other
VSS provider is available, Windows uses a copy-on-write VSS provider, which is included in
the OS.
Figure 28 depicts the Volume Shadow Copy Service infrastructure:
Figure 28: Volume Shadow Copy Service infrastructure in Windows
When a VSS requester, such as Windows Server Backup, wants to create a backup, it calls the
VSS service, which notifies VSS writers, such as a Hyper-V VSS writer, that a snapshot will be
created shortly and that it should copy all uncommitted changes from memory to a disk. Each VSS
writer is app-specific and ensures that all app changes that are currently only in memory are
written to data files on the disk. By doing this, it ensures that a complete and consistent app state
at that point in time is written on the disk and can be included in the snapshot. The app is then
prevented from writing changes to the disk for a few moments, during which the VSS provider
creates the snapshot. After the snapshot is created, the app can continue reading and writing to
the disk, while the VSS requestor creates a consistent copy of the snapshot, called a backup.
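You can quickly verify which VSS writers are registered on a server with the built-in vssadmin tool:

vssadmin list writers    # registered VSS writers and their state
vssadmin list shadows    # existing shadow copies on the system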
Windows Server Backup
Windows Server Backup is a Windows Server feature that utilizes VSS for creating a backup. It’s a
VSS requester that you can manage by using the Windows Server Backup console, wbadmin.exe
command-line tool, or the WindowsServerBackup PowerShell module.
With Windows Server Backup, you can perform a backup and restore of the following components:

Full server backup. This includes all server data, installed apps, and system state. It requires
the most space, but it can be used to restore an entire server onto new hardware, if
necessary.

Whole volume or individual folders and files. You can create a backup copy of the entire
volume or you can be more specific and select only individual folders or files.

System state. System-state backup includes OS, startup, and registry files, and other
role-specific files. If you restore the system state, the OS has the same name, domain membership,
and roles installed as when you backed up the system state.

Bare-metal recovery. Includes OS files and data and all data on other critical volumes. For
example, a bare-metal recovery always includes system state and system volume (C).

Individual VMs and the host component. On the Hyper-V host, Windows Server Backup can also
back up individual VMs while they're running and without any downtime for them. If the OS in the
VM doesn't include VSS, the VM is paused while a volume snapshot is created.
Figure 29 depicts the Windows Server Backup dialog box:
Figure 29: Items that can be backed up by Windows Server Backup
Important: Windows Server Backup doesn’t use backup agents, and it’s a single-server
backup solution. If you need to back up multiple servers, you need to install the Windows
Server Backup feature on every server. Because of this, in larger environments, we
recommend that you use System Center Data Protection Manager, Azure Backup Server,
or a third-party enterprise backup solution.
Implement backup and restore
Windows Server Backup is a general-purpose backup program that can back up data and
workloads on Windows Server. It’s a single-server solution that doesn’t use agents and can
back up only the server on which the Windows Server Backup feature is installed. In larger
environments, you'll probably use a different backup solution, but the basic principle of how the
backup app works is the same, because all backup solutions on Windows Server use the Volume
Shadow Copy Service for performing backups.
By using the Windows Server Backup console, you can run a backup once or repeatedly by
creating a backup schedule (a PowerShell sketch follows this list). When configuring a backup, you
need to define the following settings:

Backup Options. Specify if you want to use the same backup settings as are defined for the
scheduled backup or configure different options. The option to use the same settings is
available only if you’ve already configured a scheduled backup. If you want to use the same
settings, you must confirm them and then run the backup. If you select different options, you
need to configure them.

Select Backup Configuration. Select whether you want to perform a full server backup or a
custom backup configuration. For the latter, you can add or remove individual components to
back up (bare-metal recovery, system state, and individual volumes, folders, and files). You can
also configure advanced configuration, such as what should be excluded from the backup and
VSS settings.

Specify Backup Time. Define if you want to perform a backup once per day or multiple times
per day and at which times the backup should occur. Backups can occur as often as every
30 minutes.

Specify Destination Drive. Define where the backup should be stored. You can store it on a
local drive or on a remote shared folder. If you select a remote shared folder, you must specify
whether the file permissions should be inherited from the parent folder.
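A minimal one-time backup sketch with the WindowsServerBackup PowerShell module; the volume letters are placeholders:

Import-Module WindowsServerBackup

$policy = New-WBPolicy
Add-WBVolume -Policy $policy -Volume (Get-WBVolume -VolumePath 'D:')   # data volume
Add-WBSystemState -Policy $policy                                      # system state
Add-WBBareMetalRecovery -Policy $policy                                # bare-metal recovery

$target = New-WBBackupTarget -VolumePath 'E:'                          # destination drive
Add-WBBackupTarget -Policy $policy -Target $target

Start-WBBackup -Policy $policy                                         # run the backup once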
For every Windows Server, you should follow the basic backup principle and include in the
backup its configuration and all important data. You can omit data that you consider unimportant
from the backup. However, for everything else, you should ensure that the data is backed up and
that you can restore it when you need it.
Depending on the Windows Server workload that you need to back up, the procedures and options
might vary slightly. Some of the most common workloads that you should consider include file and
web servers, domain controllers (DC), Microsoft Exchange Server, and Hyper-V hosts.
Back up file and web servers
You should always define and document the information you deem important for a Windows Server
and where it’s stored. In many cases, this will include server name, domain membership, and
configuration, such as which folders are shared and what permissions are set on the shares. For a
web server, the hostname typically isn't important, but how the web server and each individual
website on it are configured is critical. This configuration information is stored in the server.config
and web.config files. You should also regularly back up data files, which are files stored in shared
folders on file servers and website content on web servers.
Another important consideration is when and how often backups should occur. The answer
depends on the amount and frequency of changes being made. If web-server content rarely
changes, you don't need to back up that content often. Conversely, you probably
want to back up file servers daily, because users frequently modify files on a file server.
Back up DCs
On your network, you likely have multiple DCs, which means that you already have multiple copies
of your Active Directory Domain Services (AD DS) database. However, it’s important to remember
that this doesn't mean you don't need to perform regular backups. When Active Directory is
modified on one DC, the update, including any accidental deletion or unwanted change, is almost
immediately replicated to the other DCs and applied to their copies of the AD DS database, so
replication alone can't protect you from mistakes.
Although you can enable the Active Directory Recycle Bin (AD Recycle Bin), this feature only
enables you to restore deleted objects. You can’t restore other changes from the AD Recycle Bin.
Therefore, backing up the AD DS role should be an important part of any backup-and-recovery
strategy. You don’t need to back up the AD DS database on every DC, but you should regularly
perform the backup on at least one DC.
Note: To back up only files required to recover AD DS, you can perform a system-state
backup or a critical-volume (C:) backup on a DC.
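For example, a one-time system-state backup from the command line (the target volume is a placeholder):

wbadmin start systemstatebackup -backupTarget:E: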
When you back up AD DS, consider your backup schedule. It’s important to remember that you
can’t restore an AD DS backup that’s older than the tombstone lifetime, which is 180 days by
default. You should also be familiar with the process of authoritative AD DS restore and how to
perform it.
Back up other workloads
You should be aware that many Windows Server workloads, such as SQL Server, Microsoft
Exchange, or Hyper-V, provide their own VSS writer. This enables Windows Server Backup to be
aware of those workloads, and you then can perform workload-specific backups, such as of an
individual VM, SQL database, or Exchange data.
Note: Exchange includes a plug-in for Windows Server Backup, WSBExchange.exe, that
allows you to make VSS-based backups of Exchange data.
Lesson 5: High availability with failover
clustering in Windows Server
Failover clustering is one of the most important Windows Server features; it enables you to
implement high availability for critical applications and services. To implement the Failover
Clustering feature properly, you need to plan the deployment, understand key components and
terminology, and be aware of important considerations. In this lesson, you'll learn the basics of the
Failover Clustering feature and how to plan its deployment.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe failover clustering and how it’s used for high availability.

Describe clustering terminology and roles.

Describe clustering components.
What is failover clustering?
Failover Clustering is a Windows Server feature that enables Windows Server computers to provide
high availability for critical applications and services. With this feature, you create failover clusters
as objects that represent highly available solutions in your organization.
A failover cluster is a group of independent physical or virtual servers that work together to
provide high availability and scalability for clustered roles. Previously, roles in a cluster were called
clustered applications and services. Clustered servers, which we usually refer to as cluster nodes,
are connected by the same physical network and by software components that the Failover
Clustering feature provides. If one or more cluster nodes fail, other nodes begin to provide service
in a process known as failover. The Failover Clustering software proactively monitors the clustered
roles to verify that they're working properly. If they're not working, they're restarted, or the role
is moved to another node.
Failover clusters also provide Cluster Shared Volumes (CSV) functionality, a consistent, distributed
namespace that clustered roles use to access shared storage from all nodes. By using failover
clustering, users experience minimal service disruption if one or more nodes fail.
Some of the most common applications of failover clustering include:

Highly available or continuously available file share storage for applications such as Microsoft
SQL Server and Hyper-V VMs.

Highly available clustered roles that run on physical servers or on VMs that are installed on
servers running Hyper-V.

Highly available databases running on SQL Server.
Microsoft constantly improves the Failover Clustering feature in each version of Windows Server.
High availability with failover clustering
Before describing how to implement failover clustering, it’s important to understand the term
availability.
Availability refers to a level of service that applications, services, or systems provide.
Usually, availability is the percentage of time that a service, application, or system is available.
The higher this percentage, the more available the system is, and the smaller the downtime. A
system that's highly available should have minimal downtime, both planned and unplanned. For
example, a system with a 99.99 percent availability rating should be unavailable a maximum of
0.876 hours per year (roughly 53 minutes).
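The arithmetic behind these figures is straightforward; a quick check in PowerShell:

$availability = 0.9999                 # 99.99 percent
$hoursPerYear = 8760
(1 - $availability) * $hoursPerYear    # 0.876 hours, roughly 53 minutes per year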
It’s not easy to achieve a level of availability as high as 99.99 percent, so most organizations tend
to have availability of at least 99 percent per year. Availability doesn’t depend only on the roles
being in a cluster. Many other factors, such as electricity, internet connection, air conditioning,
and others can affect availability, so it’s important that you don’t focus only on failover cluster
implementation. However, describing other factors in more detail is beyond the scope of this
course. In general, you should try to avoid single points of failure as much as possible, which
means that each component that can affect system availability should be redundant.
In general, achieving high availability usually results in greater costs and investments in IT
environments. This can be expressed in one-time costs and in continued yearly support costs.
Because of this, implementation of highly available services isn’t just technical but also a business
decision. It’s crucial that you have the support of business decision makers when planning high
availability in your organization or for your client. Incorrect or poorly understood service-level
expectations can result in poor business decisions and customer dissatisfaction.
One of the common misunderstandings when it comes to high availability is the availability
measurement period. Usually, availability is measured on a per-year basis. For example, if a
system is available 99.9 percent per year, it means that downtime shouldn't exceed 8.76 hours for the
entire year. However, you can have unplanned downtime in a single month that lasts for 4 hours.
On a monthly basis, for that particular month, this lowers availability to 99.5 percent, but that still
doesn’t mean that you’ll break your yearly limit.
When you estimate the percentage of high availability, it’s important to consider planned
downtimes needed for updates, maintenance, or similar activities. You should never make
estimations based only on unplanned events.
Clustering terminology and key components
Many components are included in a highly available solution. When it comes to failover clustering,
it’s important that you understand key terms, definitions, and components being used.
Failover Clustering relies on infrastructure components. Table 16 lists these components:
Table 16: Infrastructure components

Cluster node. An individual server in a cluster, also called a cluster member. Each node in a
cluster runs a Cluster service and the resources and applications associated with the cluster.

Active cluster node. A node that currently has a cluster resource running on it. A resource or
resource group can be active on only one node at a time.

Passive cluster node. A node that doesn't currently have a cluster resource running on it.

Cluster resource. A hardware or software component that a cluster node hosts, such as a disk, a
network name, or an Internet Protocol (IP) address. The Cluster service manages the resource and
can start it, stop it, and move it to another node.

Cluster sets. Cluster sets enable VM fluidity across member clusters within a cluster set and
provide a unified storage namespace across the set.

Cluster network. A network that carries communication between cluster nodes and between
clients and the cluster. Each cluster node needs two network adapters: one for the public network
and one for the private network. The public network is connected to a LAN or a WAN. The private
network exists between nodes and is used for internal network communication, which is called
the heartbeat.

Cluster storage. A storage system that cluster nodes share. It usually refers to logical devices,
typically drives or logical unit numbers (LUNs), that all the cluster nodes attach to through a
shared bus. Some scenarios, such as clusters of servers that run Microsoft Exchange Server, don't
require shared storage.

Cluster resource group. A single unit within a cluster that contains cluster resources. A resource
group is also called an application and service group.

Virtual server. The network name and IP address to which clients connect. A client can connect
to a virtual server, which is hosted in the cluster environment, without knowing the details of the
server nodes.
One of the main processes that happens within a cluster is failover. Failover, in general, means
transferring a role (or roles) from one cluster node to another in the case of node failure or
planned node downtime. Many components participate in the failover process. It's important that
you understand the key components and terms of cluster failover, which Table 17 lists:
Table 17: Cluster failover terms and components

Cluster quorum. The cluster quorum defines the minimum number of cluster nodes (and
optionally, other components) required for the cluster to work. The quorum maintains the
definitive cluster configuration data and the current state of each node, and of each service,
application group, and resource network in the cluster.

CSV. A technology that enables multiple cluster nodes to share a single LUN concurrently. Each
node obtains exclusive access to individual files on the LUN instead of the entire LUN. In other
words, CSVs provide a distributed file-access solution so that multiple nodes in the cluster can
simultaneously access the same NTFS volume. CSVs in Windows Server 2022 support a read
cache, which can significantly improve performance in certain scenarios. Additionally, a CSV file
system can run the chkdsk command without affecting applications that have open handles on
the file system.

Shared disk. Almost every cluster configuration requires storage that's accessible to all nodes in
a cluster; this is a shared disk. Data that's stored on a shared disk is accessible only by the nodes
in the cluster.

Cluster witness. A witness is used to maintain a quorum. It's recommended that a witness be on
a network that's both logically and physically separate from those that the failover cluster uses.
However, a witness must remain accessible by all cluster nodes.

Witness disk or file share witness. The cluster witness disk or the witness file share are witness
resources used to maintain the cluster quorum and, in the case of a witness disk, to store the
cluster configuration information. They help determine the state of a cluster when some or all of
the cluster nodes can't be contacted.

Azure Cloud Witness. In Windows Server, you can use a Microsoft Azure Cloud Witness to create
a quorum. In older Windows Server versions, a third offsite quorum location was recommended
when creating a stretched cluster. With Windows Server 2022, you can create an Azure Cloud
Witness instead.

Heartbeat. The heartbeat is the cluster's health-check mechanism, in which a single User
Datagram Protocol (UDP) packet is sent to all nodes in the cluster through the private network to
check whether all nodes in the cluster are online. One heartbeat is sent every second. By default,
the Cluster service waits for five seconds before it considers a cluster node unreachable.

Storage Replica. Storage Replica provides disaster recovery by enabling block-level,
storage-agnostic, synchronous replication between servers. You can use Storage Replica in a wide
range of architectures, including stretched clusters, cluster-to-cluster, and server-to-server.

Private storage. Local disks on cluster nodes are referred to as private storage.
When creating a cluster, you need to use tools available in Windows Server. There are also some
important features that you need to consider, which Table 18 describes:
Table 18: Windows Server features for clusters

Cluster performance history. A feature introduced in Windows Server 2019 that gives Storage
Spaces Direct administrators easy access to historical compute, memory, network, and storage
measurements across an organization's servers, drives, volumes, VMs, and many other resources.
The performance history is automatically collected and stored on a cluster for up to a year. The
metrics are aggregated for all the servers in the cluster, and you can examine them by using the
Windows PowerShell Get-ClusterPerf alias, which calls the Get-ClusterPerformanceHistory cmdlet,
or by using Windows Admin Center.

Cross-domain cluster migration. With Windows Server 2019 and later, you can migrate clusters
from one domain to another by using a series of Windows PowerShell scripts, without destroying
your original cluster.

Persistent memory. Windows Server 2019 and later support persistent memory (PMEM), a
memory technology that delivers a combination of capacity and persistence: it retains its contents
across power cycles and can act as very fast storage. PMEM deploys by using Storage Spaces
Direct.

System Insights. The System Insights feature of Windows Server provides machine learning and
predictive analytics to analyze data on your servers.

Windows Admin Center. A browser-based management tool that you can use to manage
Windows Server computers with no Azure or cloud dependencies. You can use Windows Admin
Center to add failover clusters to a view and to manage your cluster, storage, network, nodes,
roles, VMs, and virtual switch resources.
Cluster quorum in Windows Server
For the cluster to be functional and operational, it needs to have enough nodes up and running.
A cluster quorum defines how many cluster nodes are enough for the cluster to continue to run.
Each cluster node in a failover cluster has one vote. A witness component, such as a disk, a file
share, or an Azure Cloud Witness, can also have a vote in a cluster. A quorum represents a
majority of votes in a specific cluster configuration. If the cluster has an even number of node
votes, an additional vote, assigned to a witness element, is used to determine the majority. Each
cluster component that has
a voting right also has a copy of the cluster configuration. The Cluster service always works to keep
all the copies synced.
If more than half of the nodes in the cluster don't function or can't communicate with each other,
the cluster stops providing failover protection. Without a quorum, each node (or set of nodes)
could otherwise try to work as an independent cluster. This scenario is called cluster partitioning,
and a quorum is used to prevent it. A quorum prevents two or more nodes from concurrently
operating the same failover cluster resource. In scenarios when nodes have lost connection with
each
other, a vote of cluster witness becomes very important, especially when it’s not possible to
achieve a clear majority among the node members. In this situation, more than one node might try
to establish control over a cluster resource. This can easily lead to data corruption or having
multiple instances of the same resource available on the network.
So, if the number of votes within a cluster becomes lower than a majority, the cluster will stop
running and won't provide failover functionality. However, nodes that are up and running will still
listen on port 3343 in case other nodes reappear on the network. When a majority is achieved
again, the Cluster service starts.
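You can inspect the current vote assignments on a cluster from PowerShell; with dynamic quorum (described later in this topic), the dynamic weights show which votes currently count:

Get-ClusterNode | Select-Object Name, State, NodeWeight, DynamicWeight
(Get-Cluster).WitnessDynamicWeight     # 1 if the witness currently has a vote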
The process of achieving quorum in a cluster
Because a given cluster has a specific set of nodes and a specific quorum configuration, the
cluster software on each node stores information about how many votes constitute a quorum for
that cluster. If the number is less than the majority, the cluster stops providing services. Nodes will
continue to listen for incoming connections from other nodes on port 3343 in case they reappear
on the network, but the nodes won't begin to function as a cluster until a quorum is
achieved.
A cluster must complete several phases to achieve a quorum. When a given node comes up, it
determines whether there are other cluster members with which it can communicate. This process
might be in progress on multiple nodes at the same time. After communication is established with
other members, the members compare their membership views of the cluster until they agree on
one view, which is based on time stamps and other information. A determination is made whether
this collection of members has a quorum, that is, enough members to create sufficient votes
to avoid a split scenario. A split scenario means that another set of nodes that belong to
this cluster is running on a part of the network that's inaccessible to these nodes. Therefore,
more than one node could be actively trying to provide access to the same clustered resource. If
there aren't enough votes to achieve a quorum, the voters (meaning the currently recognized
members of the cluster) wait for more members to emerge. After at least the minimum vote total is
attained, the Cluster service begins to bring cluster resources and applications into service. With a
quorum attained, the cluster becomes fully functional.
Quorum modes in Windows Server Failover Clustering
There's no single method to establish a quorum within a cluster; it depends on the quorum mode
that you select. The quorum mode determines which components have voting rights
when defining a quorum.
Windows Server 2022 supports the following quorum modes:

Node majority. In this quorum mode, each available and communicating node can vote. The
cluster functions only with a majority of the votes. This model is preferred when a cluster
consists of an odd number of server nodes. For this scenario, no witness is necessary to
maintain or achieve a quorum.

Node and disk majority. In this quorum mode, each cluster node and a designated disk in the
cluster storage (the witness disk) have a vote when they’re available and in communication.
The cluster functions only with a vote majority. This model is based on an even number of
server nodes being able to communicate with one another in the cluster and with the witness
disk.

Node and file share majority. In this quorum mode, each node and a designated file share,
the file share witness, can vote when they’re available and in communication. The cluster
functions only with a vote majority. This model is based on an even number of server nodes
being able to communicate with one another in the cluster and with the file share witness. This
quorum mode is very similar to the previous one, but it uses a file share instead of a disk
witness.

No majority—disk only. In this scenario, the cluster has a quorum if one node is available and
in communication with a specific disk in the cluster storage. Only the nodes that are also in
communication with that disk can join the cluster.
Besides the classic quorum modes that we discussed, Windows Server 2019 and later support
another quorum mode called dynamic quorum. This mode is more flexible than other quorum
modes and can provide more cluster availability in some scenarios.
As its name implies, this quorum mode dynamically adjusts the number of votes needed for a
quorum based on the number of cluster nodes that are online. For example, if we have a five-node
cluster, and place two of the nodes in a paused state, we’ll still have a quorum. But if one of the
remaining nodes fails, a quorum will be lost, and in classic quorum modes the cluster will go
offline. With a dynamic quorum, the cluster adjusts the voting of the cluster when the first two
servers are offline, making the number of votes for a quorum of the cluster two instead of three.
With this, even when one more node fails, the cluster with a dynamic quorum stays online.
Related to dynamic quorum, a dynamic witness is a witness resource that gets a voting right based
on the number of nodes in a cluster. If we have an odd number of nodes, then the witness doesn’t
have a voting right. But, if the number of nodes is even, the witness will have a vote. In classic
quorum modes, we usually need a witness resource when the number of cluster nodes is even.
But, with dynamic witness, we can configure a witness resource in any scenario. Configuring a witness is the recommended practice for cluster configuration, and dynamic witness is the default witness mode.
For the witness resource, you can choose to have a disk, file share, or Azure Cloud Witness. You
should use a witness disk in scenarios where a cluster is deployed on a single location, while a file
share witness and Azure Cloud Witness are more appropriate when cluster nodes span multiple
locations. File share witness and Azure Cloud Witness don’t store a copy of the cluster database,
while a witness disk does.
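If you choose an Azure Cloud Witness, you configure it by providing an Azure storage account and its access key. The following Windows PowerShell command is a minimal sketch; the account name and access key are placeholders for your own values:
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"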
Clustering roles
When you’ve created failover cluster infrastructure, assigned nodes, and configured quorum and
witness options, it’s time to add a role to the cluster you created. Up to this point, you still don’t
have any highly available services running. As you learned earlier in this module, you should first
establish the clustering infrastructure before making any specific service (role) highly available.
Not every Windows server role can be configured in a cluster. Also, some roles that you want to
place in a cluster require that another role is installed as a prerequisite.
Table 19 lists the supported Windows Server roles for clustering, and the roles and features that need
to be installed as a prerequisite:
Table 19: Windows Server clustered roles

Clustered role | Prerequisites
Distributed File System (DFS) Namespace Server | Namespaces (part of the File Server role)
DHCP Server | DHCP Server role
Distributed Transaction Coordinator (DTC) | None
File Server | File Server role
Generic Application | Not applicable
Generic Script | Not applicable
Generic Service | Not applicable
Hyper-V Replica Broker | Hyper-V role
iSCSI Target Server | iSCSI Target Server (part of the File Server role)
iSNS Server | iSNS Server Service feature
Message Queuing | Message Queuing Services feature
Other Server | None
Virtual Machine | Hyper-V role
WINS Server | WINS Server feature
Lab 8: Plan and implement a high-availability and disaster-recovery solution
Please refer to our online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions.
1. What information do you need for planning a failover cluster implementation?
2. Which feature provides high availability for applications or services running on the VM that
don’t have to be compatible with failover clustering?
a. Site-aware clustering
b. Client clustering
c. Live clustering
d. Host clustering
3. Which type of witness is ideal when shared storage isn’t available or when the cluster spans
geographical locations?
4. What are Hyper-V Replica prerequisites?
a. Active Directory membership
b. Network connectivity between Hyper-V hosts
c. A digital certificate
d. Internet connectivity
5. Which service does Windows Server Backup use for creating volume snapshots?
a. Offline Files
b. Storage Service
c. Volume Shadow Copy
d. Virtual Disk
Note: To find the answers, refer to the Knowledge check slides in the accompanying
Microsoft PowerPoint presentation.
Module 8: Implement and manage
failover clustering
Most organizations need to ensure that their users have continuous access to services and apps, such as network connections. Therefore, it's vital that such organizations plan for, create, and manage failover clustering, which is a main Windows Server 2022 technology that supplies high availability for applications and services. In this module, you'll learn about failover clustering, failover-clustering components, and implementation techniques.
By completing this module, you’ll achieve the knowledge and skills to:

Plan for a failover-clustering implementation.

Create and configure a failover cluster.

Maintain a failover cluster.

Troubleshoot a failover cluster.
Lesson 1: Plan for a failover cluster
Planning a failover cluster is vital for a high-availability solution because organizations depend on
high-availability technologies to host their business-critical services and data. You must be able to
deploy your solution quickly, manage it easily, test and verify failover and failback scenarios, and
anticipate behaviors that result from failures.
By completing this lesson, you’ll achieve the knowledge and skills to describe how to:

Prepare to implement a failover cluster.

Plan your failover-cluster storage.

Determine the hardware requirements for a failover-cluster implementation.

Forecast network requirements for a failover-cluster implementation.

Project infrastructure and software requirements for a failover cluster.

Identify security considerations.

Plan for quorum in Windows Server 2022.

Prepare for the migration and upgrading of failover clusters.

Plan for multisite (stretched) clusters.
Prepare to implement failover clustering
Before you implement a failover-clustering technology, we recommend that you identify services
and applications that must be highly available, because your configuration of failover clustering
might vary based on the application or service you need to make highly available.
Failover clustering is commonly used for stateful applications that use a single data set, such as a
database. Also, it’s quite common to use failover clustering for Microsoft Hyper-V virtual machines
(VMs) and for stateful applications that you implement inside Hyper-V VMs. Conversely, stateless
applications that don’t share a single set of data between nodes aren’t appropriate for failover
clustering. An example of a stateless application is the Internet Information Services (IIS) Server.
Failover clustering uses IP-based protocols, so it works only with IP-based applications. IP version
4 (IPv4) is more commonly used, but failover clustering does support both IPv4 and IP version 6
(IPv6).
Failover clustering enables clients to reconnect to highly available services automatically after
failover. If the client application doesn’t reconnect automatically, the user must restart it.
While planning the node capacity for your failover cluster, you must consider several factors,
including that:

When a node fails, its highly available applications are distributed among the remaining nodes, so plan that distribution to prevent overloading a single node.

Each node has enough capacity to service those highly available services or applications that
you indicate will be distributed to it when another node fails. This capacity should be sufficient
to ensure that nodes aren’t running at near capacity after a failure occurs. You must plan for
resource utilization, so you don’t experience performance decreases after a node failure.

You use hardware that has similar capacity for all nodes in a cluster, which simplifies the
failover-planning process because the failover load will distribute evenly among surviving
nodes.

You’re familiar with hardware and software requirements for failover cluster implementation.
Always run the Validate a Configuration Wizard when creating a cluster to verify that no blocking issues will occur in production.

You always install the same Windows Server features and roles on each node, which helps
avoid performance and compatibility issues.
You also should examine all cluster-configuration components to identify single points of failure,
many of which you can remedy with simple solutions such as adding storage controllers,
redundant power-supply units, teaming network adapters, and using multipathing software. These
solutions reduce the probability that a single device’s failure will cause a cluster failure. Typically,
server-class computer hardware provides you with options to configure power redundancy by using
multiple power supplies and create redundant array of independent disks (RAID) sets for disk-data redundancy.
Learn more: To review what’s new or updated with respect to failover clustering in the
current Windows Server version, refer to What's new in Failover Clustering.
Failover cluster storage
Most scenarios where you’re deploying failover clusters require that you also deploy shared
storage for a failover cluster. An application that’s being deployed in a failover cluster typically
requires shared storage for its data. This storage should be accessible by all cluster nodes, but
only the active node has ownership over the shared storage to maintain data consistency.
Note: Shared cluster storage and cluster witness disks aren’t the same. A cluster application
uses shared storage. However, cluster quorum is maintained by using a cluster witness disk,
which also stores a copy of the cluster configuration database.
For your cluster-shared storage, we recommend that you configure, at the hardware level, multiple
separate disks or logical unit numbers (LUNs). Additionally, you must consider several factors with
respect to cluster storage, including that you must:

Use basic disks, not dynamic disks.

Format partitions on cluster storage with New Technology File System (NTFS) or Resilient File
System (ReFS).

Isolate storage devices from other clusters. There must be one cluster per storage device, and
the servers for different clusters can’t access the same storage devices.
The Windows Server Failover Clustering feature supports five options for cluster-shared storage,
including:

Shared serial-attached SCSI (SAS). This is the lowest-cost option for shared storage. However, it's not very flexible for deployment because cluster nodes must be physically located close together. Additionally, the shared storage devices that support shared SAS have a limited number of connections for cluster nodes.

iSCSI. Internet SCSI (iSCSI) is a type of storage that transmits SCSI commands over IP networks. It's not expensive to implement because it doesn't require specialized network hardware, and it offers good performance, particularly when you use 1 gigabit per second (Gbps) or 10 Gbps Ethernet as the physical medium for data transmission. You can set up the iSCSI target software on any server to provide iSCSI storage to clients.

Fibre Channel. Fibre Channel storage area networks (SANs) typically offer better performance
but are more expensive to implement than iSCSI because they require specialized knowledge
and hardware to set up.

Shared virtual hard disk. In Windows Server, you can use a shared virtual hard disk (VHD) as storage for VM guest clustering. A shared VHD should be located on a Cluster Shared Volume (CSV) or a Scale-Out File Server cluster and attached to two or more VMs in a guest cluster through a virtual SCSI or guest Fibre Channel interface.

Scale-Out File Server. For some failover cluster roles, you can implement Server Message
Block (SMB) storage as the shared location. Typically, you’d use SMB when you’ve
implemented Hyper-V and SQL Server in a cluster.
If you’re using a SAS or Fibre Channel, all clustered servers must have identical storage-stack
components and Microsoft Multipath I/O (MPIO) software and device-specific module software
components. You must have at least one dedicated network adapter or host bus adapters (HBAs)
for each clustered server if using iSCSI. Additionally, you can’t use the network that you use for
iSCSI for network communication. Additionally, we highly recommend that you use storage systems
that are certified for Windows Server when you’re configuring failover-cluster storage and that you
confirm with manufacturers and vendors that the storage is compatible with the failover clusters in
the Windows Server version you’re running. This includes drivers, firmware, and software used for
the failover-cluster storage.
You also can use failover clustering to make storage highly available. The Storage Spaces technology in Windows Server is supported with failover clustering, so you can create clustered storage spaces. This implementation can help you avoid data-access failures, volumes
becoming unavailable, and server node failures.
Hardware requirements for a failover-cluster
implementation
When selecting hardware configuration for cluster nodes, you need to understand the hardware
requirements for failover clustering, including availability and Microsoft support. You must ensure
that you:

Use hardware that’s certified for Windows Server.

Have the same or similar hardware installed on each failover cluster node. This is especially
important for central processing units (CPUs), network adapters, and storage controllers. For
storage controllers, you also must ensure that they have the same firmware version on all
cluster nodes, which helps you avoid compatibility and capacity issues.

Use iSCSI storage connections for which each clustered server has one or more dedicated
network adapters for the cluster storage. Don’t use the network that you use for iSCSI storage
connections for intercluster or client-network communication. Additionally, the network
adapters used for connecting to the iSCSI storage target must be identical in clustered servers
and utilize adapters that are Gigabit Ethernet or faster.

Confirm that all servers that will participate as cluster nodes pass all tests in the Validate a
Configuration Wizard. This is very important should you require support from Microsoft. We’ll
discuss this wizard later in this module.

Check that each node is running the same processor architecture and has the same processor
family, either from Intel or the Advanced Micro Devices (AMD) family of processors.
Network requirements for a failover-cluster
implementation
Each failover cluster configuration is highly dependent on the network infrastructure. It’s used not
just to connect clients to a cluster and services within a cluster, but also to connect cluster nodes,
cluster storage, and other resources. Because of this, it's critical that you configure cluster networks properly and that they pass all checks from the validation wizard.
However, it’s not just physical network connections that you need. You also must set up and
configure certain network properties and services, including:

Network adapters. Each node must have network adapters that are identical and have the
same IP protocol version, flow-control capabilities, speed, and duplex. Also, redundant
networks and network equipment must be used to connect nodes, so that nodes can keep communicating after any single failure. Redundancy on a single network can be provided by network adapter teaming.

IP addresses. Ensure there are no IP address conflicts and that IP addresses are reachable
for nodes, storage, and other cluster resources. Cluster nodes can use static and dynamic IP
addresses, although you must ensure consistency across all nodes. You can also use the IPv6
protocol for all network traffic.

Subnets. Sometimes, you must configure separate subnets for communication between cluster
nodes and with cluster clients. Connection to storage, such as iSCSI, also might require a
separate subnet. Additionally, confirm that cluster nodes can exchange heartbeats, and open the required ports if necessary. Cluster heartbeats use UDP unicast on port 3343.

Support RSS and RDMA. It's recommended that your network adapters support RSS and RDMA. Receive side scaling (RSS) is a technology implemented in network drivers that allows the distribution of network-receive processing across multiple CPUs. Remote Direct Memory Access (RDMA) is a networking technology that provides high-throughput communication with minimal CPU usage. You can configure RDMA on network adapters that are bound to a Hyper-V virtual switch in Windows Server 2022. You can verify a network adapter's RSS and RDMA compatibility by using the Windows PowerShell cmdlets Get-NetAdapterRss and Get-SmbServerNetworkInterface, as the sketch after this list shows.
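For example, the following commands are a minimal sketch of that check; the cmdlets are standard, but review the output columns (such as RSS Capable and RDMA Capable) for your adapters and Windows Server version:
# List each adapter's RSS state
Get-NetAdapterRss | Select-Object Name, Enabled
# List each adapter's RDMA state
Get-NetAdapterRdma | Select-Object Name, Enabled
# Review the SMB server's network interfaces, including RSS and RDMA capability
Get-SmbServerNetworkInterface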
Demonstration: Verify a network adapter's RSS and RDMA
compatibility on an SMB server
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Infrastructure and software requirements for a failover
cluster
Your directory infrastructure and software that you use can also greatly affect your failover
clustering implementation. As you plan for a failover cluster, ensure that you are aware of
infrastructure and software requirements. To be able to successfully implement supported failover
clustering configuration, you need to:

Ensure that your Active Directory domain controllers (DCs) run a supported OS version, which is Windows Server 2016 or newer.

Ensure that the domain functional level and forest functional level are Windows Server 2012 or newer.

Run supported Domain Name System (DNS) servers that use Windows Server 2016 or newer. Additionally, DNS must exist on your network for failover clustering to work.

Although there are some scenarios where cluster nodes can be non-domain joined computers,
cluster nodes typically should be joined to the same Active Directory Domain Services (AD DS)
domain. It’s also possible to have cluster nodes in different AD DS domains for some specific
scenarios.

The application that you configure for high availability should support the Windows Server
2022 operating system (OS).
We recommend that you run the same edition of Windows Server 2022 on each cluster node. The
edition can be Windows Server 2022 Standard or Windows Server 2022 Datacenter. The nodes
also must have the same software updates. Depending on which role you cluster, you also can
utilize a Server Core installation of Windows Server 2022 to meet software requirements.
Note: With Windows Server 2016 and newer operating systems and Cluster Operating System Rolling Upgrade, cluster nodes can temporarily run different operating system versions, which is especially useful during an upgrade process.
Windows Server 2012 and newer includes Cluster-Aware Updating (CAU) technology, which
you can use to maintain updates on cluster nodes. Lesson 3, “Maintain a failover cluster”
discusses this feature in more detail.
Security and AD DS considerations
In each failover-clustering implementation, make sure that you address security, because potential
security issues might threaten the solution’s high availability. Failing to establish a security
baseline on cluster nodes or cluster applications can result in scenarios where an unauthorized
user might gain access to your cluster resources, delete files, or shut down cluster nodes. You
need to plan and configure your security settings thoroughly to guard against unauthorized access
to cluster resources. Also, we highly recommend that you prevent unauthorized users from physically accessing cluster nodes.
Deploying antimalware software on cluster nodes might not always be possible. In such scenarios,
deploy the cluster nodes in a subnet that you protect with firewalls and intrusion-detection devices.
Most commonly, cluster nodes are AD DS domain members, and the failover cluster creates a
Cluster Name Object (CNO) in AD DS. Cluster nodes communicate by using Kerberos for CNO
authentication.
Since Windows Server 2019, Failover Clusters no longer use NTLM authentication. Instead,
Kerberos and certificate-based authentication are used exclusively, enabling the deployment of
failover clusters in environments where NTLM is disabled.
Older Windows Server versions, such as Windows Server 2012 R2 and Windows Server 2016,
allowed you to create an Active Directory-detached cluster, which doesn’t have network-name
dependencies in AD DS. For this type of cluster, you need to register the cluster network name and
network names for clustered roles in your local DNS, but there’s no need to create corresponding
computer objects for cluster and clustered roles in AD DS.
Deployment limitations do exist for Active Directory-detached clusters. One limitation is that you
can’t use Kerberos authentication when accessing cluster resources in AD DS because no
computer object exists. Instead, you need to use NTLM authentication, which isn’t as secure as
Kerberos authentication and hasn’t been supported since Windows Server 2019. We recommend
that if your setup requires Kerberos authentication, you don’t deploy Active Directory-detached
clusters.
Windows Server 2016 and newer supports several types of clusters, which you use depending on
your domain-membership scenario, including:

Single-domain clusters. Cluster nodes are members of the same domain.

Workgroup clusters. Cluster nodes aren’t joined to the domain (workgroup servers).

Multi-domain clusters. Cluster nodes are members of different domains.

Workgroup and domain clusters. Cluster nodes are members of domains and members that
aren’t joined to the domain (workgroup servers).
Quorum in Windows Server 2022
A cluster is only functional when it has enough nodes running. A cluster quorum defines how many
cluster nodes are enough for a cluster to continue to run.
Each cluster node in a failover cluster has one vote. A witness component, such as disk, file share,
or Azure cloud witness also can have a vote in a cluster. A quorum represents a majority of votes in
a specific cluster configuration. If an even number of nodes is online, another vote, assigned to a witness element, decides the majority. Each cluster component that has a voting
right also has a copy of the cluster configuration. The Cluster service is always working to ensure
all copies are synchronized.
If more than half of the nodes in the cluster don’t function or can’t communicate with each other,
the cluster stops providing failover protection. Without this safeguard, each node (or set of nodes) could work as an independent cluster, a situation known as cluster partitioning. You can use a quorum to avert this, because it prevents two or more sets of nodes from concurrently operating as a failover cluster. In scenarios where nodes lose connection with each other, the vote of a cluster
witness becomes crucial, especially if it’s not possible to achieve a clear majority among node
members. In this situation, more than one node might try to establish control over a cluster
resource. This can easily lead to data corruption or having multiple instances of the same resource
available on the network.
So, if the number of votes within a cluster becomes lower than a majority, the cluster will stop
running and won’t provide failover functionality. However, nodes that are up and running will still
listen on port 3343 in case other nodes appear on the network again. When a majority is achieved
again, Cluster service starts.
The process of achieving quorum in a cluster
A cluster has distinct nodes and a quorum configuration, and each node’s cluster software stores
data regarding the number of votes that represent a quorum for it. The cluster quits providing
services when the number falls below a majority, although nodes continue to listen for incoming
connections on port 3343. However, they won’t work as a cluster again until a quorum is met.
There are several phases a cluster must go through to achieve quorum, including that:

When a node comes up, it verifies whether there are cluster members it can communicate with. This can occur on several nodes simultaneously.

Cluster members, after communication is initiated, compare their membership views until they
agree on one. Factors that help members determine agreement are time stamps and other
information.

A group of cluster members then decides if a quorum is achieved, thereby avoiding a split
scenario, where another set of nodes in the cluster is running on a network portion that the
group of cluster members can't access. This could be an issue because more than one node could be attempting to offer access to the same clustered resource.

If the group can’t reach a quorum, the cluster’s recognized members (or current voters) must
wait for more members to appear.

After the minimum vote total is attained, the Cluster service brings cluster resources and
applications into service.

The cluster becomes fully functional when a quorum is reached.
Quorum modes in Windows Server Failover Clustering
There's no single method for establishing a quorum within a cluster; the method depends on the quorum mode that you select. The quorum mode determines which components have voting rights when defining a quorum.
Windows Server 2022 supports the following quorum modes:

Node majority. In this quorum mode, each available and communicating node can vote, and
the cluster functions when it has a majority of votes. This model is preferable when a cluster
consists of an odd number of server nodes. For this scenario, no witness is necessary to
maintain or achieve a quorum.

Node and disk majority. In this quorum mode, each cluster node and a witness disk (a
designated disk in the cluster storage) have a vote when they’re available and in
communication. The cluster functions only with a vote majority. For this quorum mode, an even number of server nodes must be able to communicate with each other in the cluster and with the witness disk.

Node and file share majority. In this quorum mode, each node and a designated file share,
the file share witness, can vote when they’re available and in communication. The cluster
functions only with a vote majority. For this quorum mode, an even number of the cluster’s
server nodes must be able to communicate with each other and with the file share witness.
This quorum mode is like the previous one but uses file share location instead of disk witness.

No majority—disk only. In this quorum mode, the cluster attains quorum if one node is
available and can communicate with a specific disk in the cluster storage. The only nodes
that can join the cluster are those that can communicate with that disk.
Besides the classic quorum modes that we discussed, Windows Server 2019 and later supports
another quorum mode called a dynamic quorum. This mode is more flexible than other quorum
modes and can provide more cluster availability in specific scenarios.
As its name implies, this quorum mode dynamically adjusts the number of votes needed for a
quorum based on the number of cluster nodes that are online. For example, if there's a five-node cluster and you place two of the nodes in a paused state, there will still be a quorum. However, if one
of the remaining nodes fails, quorum is lost and in classic quorum modes, the cluster goes offline.
With a dynamic quorum, the cluster adjusts the voting of the cluster when the first two servers are
offline, which makes the number of votes for a quorum of the cluster two instead of three. With
this, even when one more node fails, the cluster with a dynamic quorum stays online.
Related to dynamic quorum, a dynamic witness is a witness resource that gets a voting right based
on the number of nodes in a cluster. If there’s an odd number of nodes, the witness doesn’t have a
voting right. But, if the number of nodes is even, the witness does have a vote. In classic quorum
modes, a witness resource typically is needed when there’s an even number of cluster nodes.
However, with dynamic witness, you can configure a witness resource in any scenario. Configuring a witness is the recommended practice for cluster configuration, and dynamic witness is the default witness mode.
For the witness resource, you can choose to have a disk, file share, or Azure Cloud Witness. You
should use a witness disk in scenarios where a cluster is deployed in a single location, while
a file share witness and Azure Cloud Witness are more appropriate when cluster nodes span
multiple locations. File share witness and Azure Cloud Witness don’t store a copy of the cluster
database, while a witness disk does.
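To review how votes are currently assigned, you can query the cluster and node properties with Windows PowerShell; the following is a minimal sketch to run on any cluster node:
Get-ClusterNode | Select-Object Name, NodeWeight, DynamicWeight   # 1 = the node currently has a vote
(Get-Cluster).DynamicQuorum                                       # 1 = dynamic quorum is enabled
(Get-Cluster).WitnessDynamicWeight                                # 1 = the witness currently has a vote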
Note: Always consider the capacity of a cluster’s nodes and whether they can support the
services and applications that might fail over to that node.
Learn more: For more information about cluster quorum, refer to Understanding cluster and
pool quorum.
Plan for migrating and upgrading failover clusters
Windows Server 2016 and newer operating systems have a new process for upgrading a failover
cluster, named Cluster Operating System Rolling Upgrade. It’s useful when you upgrade the OS on
cluster nodes.
If you’re performing cluster OS upgrades, you first must upgrade the cluster OS before you upgrade
the cluster’s functional level. For example, let’s say you want to upgrade a two-node cluster from
Windows Server 2016 to Windows Server 2022. You can do that by draining the roles from one
node, taking the node offline, and then removing it from the cluster. After that, you upgrade that
node OS to Windows Server 2022, and when you’re done, you add the node back to the cluster.
The cluster will continue to run at the Windows Server 2016 functional level, as there's still a node running this OS. You can then bring the roles back to the Windows Server
2022 node and drain them from another node. You then remove the Windows Server 2016 node
from the cluster, upgrade it, and add it back to the cluster. Finally, now that both nodes are
running Windows Server 2022, you can upgrade the functional level by running the following
Windows PowerShell command:
Update-ClusterFunctionalLevel
For example, let’s assume that we need to upgrade a Hyper-V failover cluster. You can perform this
in Windows Server 2022 without downtime.
The upgrade steps for each node in the cluster include:
1. Pause the cluster node and drain all the VMs that run on the node.
2. Migrate the VMs that run on the node to another node in the cluster.
3. Perform a clean installation to replace the cluster node OS with Windows Server 2022.
4. Add the node, now running the Windows Server 2022 OS, back to the cluster.
5. Repeat the previous steps until all nodes are upgraded to Windows Server 2022.
6. Finally, use the Windows PowerShell cmdlet Update-ClusterFunctionalLevel to upgrade
the cluster functional level to Windows Server 2022.
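As a hedged sketch, the following Windows PowerShell commands map to these steps for a single node; the node name NODE1 is a placeholder:
Suspend-ClusterNode -Name NODE1 -Drain   # pause the node and drain its roles
Remove-ClusterNode -Name NODE1           # evict the node so that you can cleanly reinstall its OS
# ...perform a clean installation of Windows Server 2022 on NODE1...
Add-ClusterNode -Name NODE1              # add the upgraded node back to the cluster
Update-ClusterFunctionalLevel            # run only after every node runs Windows Server 2022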
Note: In the scenario where cluster nodes are running both an older and a newer OS version, the cluster is running in mixed mode; however, all nodes run at a functional level that is equal
to the oldest version of OS that participates in a cluster. In mixed mode, some new features
of Windows Server failover clustering might not be available, such as Site-aware Failover
Clusters, Workgroup and Multi-domain Clusters, Virtual Machine Node Fairness, Virtual
Machine Start Order, Simplified SMB Multichannel, and Multi-NIC Cluster Networks.
Plan for multi-site (stretched) clusters
You can implement a stretched cluster if you need to have highly available services in more than
one site or location. Failover clusters that span more than one site can solve several specific
problems. However, they also present specific challenges, so you need to carefully plan such a
configuration.
In a stretched cluster scenario, replication occurs between sites, and each site typically has
a separate storage system. Cluster storage replication technology enables each site to be
independent and provides fast access to the local storage infrastructure. Obviously, with
separate storage systems, you can’t share a single disk between sites. However, to maintain
data consistency, stretched clusters require that you implement storage replication technology
so that storage systems on different sites are synchronized so that each cluster node can access
the latest version of data after it takes ownership over a clustered role.
There are some specific advantages to deploying a stretched cluster instead of a simpler
deployment of a redundant server on another location, including:

When a complete site fails, a stretched cluster can automatically fail over the clustered role to
another site, which doesn’t happen automatically if you deploy independent servers running
the same role on another site.

Cluster configuration automatically replicates to each node in a stretched cluster, resulting in less administrative overhead than a cold standby server, which you must update by manually replicating changes.

The automated processes in a stretched cluster reduce the possibility of human error, a risk
that’s present in manual processes.
There are two types of stretched clusters that Windows Server 2022 supports: active-passive and active-active. The type refers to the direction of storage replication. In the active-passive scenario, you
set up active-passive site storage replication, where data replicates from the preferred (active) site
to the failover (passive) site. In the active-active scenario, replication can happen bi-directionally
from either site.
The active-passive scenario is more commonly used. A passive site doesn’t offer roles or
workloads for clients and simply waits for a failover from the active site.
A stretched failover cluster results in higher costs and complexity, so it typically isn't a good option
for all applications or businesses. If you’re thinking about deploying a stretched cluster, it’s
important to consider the importance of applications to your business, the type of applications
you’re dealing with, and other possible solutions. Certain applications can use log shipping or other
processes to support multisite redundancy and can continue to reach sufficient availability with
only small cost and complexity increases.
To maintain data consistency in a stretched cluster scenario, we need technology to synchronize
storage systems between various sites. Windows Server provides Storage Replica technology,
which enables the replication of volumes between servers or clusters for a stretched cluster
scenario. Use Storage Replica to configure stretched clusters that extend across two sites while
ensuring nodes stay synchronized.
Storage Replica technology supports two types of synchronization as follows:

Synchronous replication. Typically implemented between sites on low-latency networks. Synchronous replication keeps volumes crash-consistent, which helps prevent data loss at the file-system level during a failure.

Asynchronous replication. Synchronizes data across sites, typically over network links that
have higher latencies. However, there’s no guarantee that the sites have identical copies of
the data if a failure occurs.
Lesson 2: Create and configure a new
failover cluster
After you configure a clustering infrastructure, you should configure specific roles or services that
you want to be highly available. You cannot cluster all roles. Therefore, you should first identify the
resource that you want to place in a cluster, and then verify if the resource can be clustered. In this
lesson, you will learn how to configure roles and applications in clusters and configure cluster
settings.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe the Validate a Configuration Wizard and cluster support-policy requirements.

Explain the process for creating a failover cluster.

Describe the process for configuring roles.

Explain how to manage cluster nodes.

Describe the process for configuring cluster properties.

Describe the process of configuring failover and failback.

Describe the process of configuring storage.

Describe the process of configuring networking.

Describe the process of configuring quorum options.
The validation wizard and the cluster support-policy
requirements
Because failover clusters are used to achieve high availability for critical applications and services, it's important that a cluster itself is stable and reliable. As a result, the requirements that you must fulfill when creating a failover cluster are more demanding than the requirements for a single-server implementation. To help administrators validate their cluster configuration and make sure that the cluster configuration can be supported by Microsoft, the Validate a Configuration Wizard is provided in the Failover Cluster Manager console.
This wizard, usually run before you create a cluster, performs a variety of tests to ensure that cluster components are correctly configured and supported in a clustered environment. Tests performed by this wizard include system-configuration tests, cluster storage tests, network tests, and many others.
Although it’s possible to create a cluster without running the Validate a Configuration Wizard, it’s
not recommended to do so, for many reasons. The wizard can help you identify hardware and
software issues that you’re unaware of, but which might affect cluster stability or performance.
Also, the wizard ensures that each cluster component can communicate with other cluster
components in a proper way, through a series of tests that aren’t easy to perform manually.
Note: Microsoft only supports a cluster solution when the whole configuration passes all
validation tests, and all hardware is certified for the Windows Server version being run by
cluster nodes.
You can run the Validate a Configuration Wizard at any time—before or after you create a cluster.
Most commonly, the wizard is started before you create a cluster, but you should also make sure
that you run it after making the following changes to a cluster or nodes in a cluster:

Adding a node to the cluster.

Upgrading or replacing the storage hardware.

Upgrading the firmware or the driver for HBAs.

Updating the multipathing software or the version of the device-specific module.

Changing or updating a network adapter.
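You can also run these validation tests from Windows PowerShell by using the Test-Cluster cmdlet. The following sketch uses hypothetical node names; the -Include parameter limits the run to a named test category:
Test-Cluster -Node LON-SVR2, LON-SVR3                      # run the full validation suite
Test-Cluster -Node LON-SVR2, LON-SVR3 -Include "Storage"   # re-run only the storage tests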
After the Validate a Configuration Wizard finishes its checks and tests, you receive one of the
following three indicators:

A green check mark (passed). Indicates that the failover cluster configuration is valid, and you
can safely proceed with creating a cluster or using one if you’ve run the wizard on an existing
cluster configuration.

A yellow yield sign (warning). Indicates that what’s being tested in the proposed failover cluster
isn’t in agreement with Microsoft’s best practices. You must determine whether the cluster’s
configuration is acceptable for the cluster’s environment, the cluster requirements, and the
roles that the cluster hosts. A warning doesn’t mean that you can’t create or use a cluster, but
you should make sure that you’re aware of potential issues that might arise.

A red circle with a single bar (failed). Means that you can’t use the current configuration
to create a Windows Server failover cluster. It’s important to note that if a test fails, no
subsequent tests run and you have to fix the “failed” issue before you can create a failover
cluster.
The process for creating a failover cluster
After you’ve validated your configuration for a failover cluster by using the Validate a Configuration
Wizard, you can proceed to create a cluster. In some scenarios, you’ll be adding cluster storage
during the process of cluster creation. If that’s the case, you should make sure that all nodes can
access the storage before starting to create a cluster.
Note: To create a cluster or add servers to it, you have to sign into the domain using an
account that has administrator rights and permissions on all of the cluster’s servers. You
can use a Domain Users account that’s in the Administrators group on each clustered
server, and don’t have to use a Domain Admins account. Also, if the account isn’t a Domain
Admins account, the account or group in which the account is a member must have the
Create Computer Objects permission in the domain.
The Create Cluster Wizard is used to create a new cluster. This wizard is available in the Failover
Cluster Manager console. You can also use PowerShell to perform the same procedure. During the
wizard, you need to specify the servers that will be used as cluster nodes, the cluster name, and the IP
After the wizard runs, a Summary page displays. Select the View Report option to access a report
on the tasks that the wizard performed. After you close the wizard, you can find the report at
<SystemRoot>\Cluster\Reports\, where SystemRoot is the location of the OS; for example,
C:\Windows.
Tip: When you create a failover cluster, you still don’t have any highly available services or
applications. At this point, you’ve just created a highly available structure that you will use to
deploy highly available apps.
To validate and create a new cluster by using Windows PowerShell, use the Test-Cluster,
New-Cluster, and Add-ClusterNode cmdlets. For example, the following cmdlets validate the cluster configuration with servers LON-SVR2 and LON-SVR3, and then create a cluster named TestCluster with the IP address 172.16.0.10:
Test-Cluster -Node LON-SVR2, LON-SVR3
New-Cluster -Name TestCluster -Node LON-SVR2 -StaticAddress 172.16.0.10
Add-ClusterNode -Name LON-SVR3
You can also create and manage failover clusters by using Windows Admin Center. Windows Admin
Center is a browser-based management tool that allows you to manage Windows Server computers
locally or remotely. With this tool, you can manage failover cluster nodes as individual servers by
adding them as server connections in Windows Admin Center. However, you can also add them as
failover clusters to review and manage cluster resources, storage, network, nodes, roles, VMs, and
virtual switches. Microsoft recommends that you use Windows Admin Center for most of the
administration tasks that you perform in the graphical user interface (GUI). In Windows Server
2022, you’ll be prompted to switch from the Failover Cluster Manager console to Windows Admin
Center.
To create a failover cluster by using Windows Admin Center, follow these steps:
1. Under All Connections, select Add.
2. Select Failover Connection.
3. Enter the name of the cluster, and if prompted, enter the credentials to use.
4. Add the cluster nodes as individual server connections.
5. Select Submit to finish.
Demonstration: Create a failover cluster and review the
validation wizard
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Configure roles
When you’ve created a failover cluster infrastructure, assigned nodes, and configured quorum and
witness options, it’s time to add a role to the cluster you created. Up to this point, you still don’t
have any highly available services running. As you learned earlier in this module, you should first
establish the clustering infrastructure before making any specific service (role) highly available.
Not every Windows server role can be configured in a cluster. Also, some roles that you want to
place in a cluster require that another role be installed as a prerequisite.
Table 20 lists the supported Windows Server roles for clustering, and the roles and features that need
to be installed as a prerequisite:
Table 20: Windows Server clustered roles

Clustered role | Prerequisites
Distributed File System (DFS) Namespace Server | Namespaces (part of the File Server role)
Dynamic Host Configuration Protocol (DHCP) Server | DHCP Server role
Distributed Transaction Coordinator (DTC) | None
File Server | File Server role
Generic Application | Not applicable
Generic Script | Not applicable
Generic Service | Not applicable
Hyper-V Replica Broker | Hyper-V role
iSCSI Target Server | iSCSI Target Server (part of the File Server role)
iSNS Server | iSNS Server Service feature
Message Queuing | Message Queuing Services feature
Other Server | None
VM | Hyper-V role
WINS Server | WINS Server feature
To add a Windows Server role as a role in a cluster, follow these steps:
1. Use Windows Admin Center, Server Manager, or Windows PowerShell to install the required
role to all nodes that participate in a cluster.
2. Open the Failover Cluster Manager console, expand the cluster that you created earlier, right-click or select the context menu, and then select Configure Role. This will start the High Availability Wizard.
3. Within the High Availability Wizard, follow the steps to configure the role that you selected to
be highly available. The steps in this wizard might vary based on the role you selected.
4. When you’re done, select the Roles pane in Failover Cluster Manager console and verify that
the role you added to a cluster has a status of Running.
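As a hedged sketch, the following Windows PowerShell commands perform the equivalent steps for a clustered file server; the role name, disk name, and IP address are hypothetical:
Install-WindowsFeature -Name FS-FileServer -ComputerName LON-SVR2   # repeat for each cluster node
Add-ClusterFileServerRole -Name HA-FS -Storage "Cluster Disk 1" -StaticAddress 172.16.0.20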
As described earlier, only one node in a cluster can be the owner of the clustered role. On the
Roles pane, you can review the current owner of the clustered role. If you want to change the role
owner, right-click or select the context menu of the role, and then select Move. You’re presented
with a list of nodes that can accept ownership for the clustered role. You should select a node and
confirm the ownership move.
Note: Manually moving a clustered role from one node to another is usually done when you
want to perform maintenance tasks on the current role owner. In a failover scenario, the
owner move is performed automatically.
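You can perform the same move with Windows PowerShell; a minimal sketch, reusing the hypothetical role name from the previous example:
Get-ClusterGroup                               # review clustered roles and their current owner nodes
Move-ClusterGroup -Name HA-FS -Node LON-SVR3   # move the role to another node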
Demonstration: Create a general file-server failover
cluster
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Manage failover clusters
There are several failover cluster management tasks you can perform after you create a cluster,
such as adding and removing cluster nodes or changing quorum settings. Other common
configuration tasks, most of which you can perform by using the Failover Cluster Management
console, include:

Managing cluster nodes. Allows you to perform a variety of actions for each node in a cluster,
such as stopping or pausing the Cluster service, initiating a remote desktop to it, evicting it
from the cluster, or draining it if you need to perform maintenance or install updates. This
functionality is part of the infrastructure that enables CAU for patching nodes in a cluster.

Managing cluster networks. Add or remove cluster networks and configure networks that will
be dedicated just for intercluster communication.

Managing permissions. Enables you to delegate rights for cluster administration.

Configuring cluster quorum settings. Specify how to achieve quorum and who can vote in a
cluster.

Migrating services and applications to a cluster. Implement existing services to the cluster and
make them highly available.

Configuring new services and applications to work in a cluster. Implement new services to the
cluster.

Removing a cluster. If you’re removing a service or moving it to a different cluster, you first
need to destroy the cluster.
If problems arise in a cluster, you can use Event Viewer to search for events related to failover
clustering. Additionally, you can access information-level events in the Failover Clustering
Operations log, which you can access in Event Viewer in the Applications and Services
Logs\Microsoft\Windows folder. Common cluster operations, such as cluster nodes leaving and joining the cluster or resources going offline or coming online, are examples of information-level events.
Note that Windows Server doesn’t replicate event logs among nodes. However, the Failover Cluster
Manager console has a Cluster Events option that you can use to access and filter events across all cluster nodes, which gives you a consolidated view of events from the entire cluster.
Also, the Recent Cluster Events option that queries all the error and warning events in the last
24 hours from all nodes is available through the Failover Cluster Manager console.
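If you prefer the command line, the following is a minimal sketch for pulling the same diagnostic information with Windows PowerShell; the destination folder is a placeholder:
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 20
Get-ClusterLog -Destination C:\Reports   # generate a cluster.log file from each node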
Configure cluster properties
After you create a cluster and move it into production, you might need to configure cluster
properties, which you can do by using the Failover Cluster Manager console.
You can configure cluster properties by opening the Properties of the cluster object in Failover
Cluster Manager. The tabs available in the properties window include:

General. Displays the name of the cluster and manages cluster group properties. In Cluster
Group properties, you can select preferred owners for the cluster resource group and configure
failover and failback settings.

Resource Types. Allows you to manage current cluster resource types and add new cluster
resource types.

Balancer. Allows you to configure VM balancing.

Cluster Permissions. Allows you to configure cluster security permissions.
There are three actions that you can take on the cluster nodes as common management tasks:

Add a node. This action adds a node to an established failover cluster. You do it by selecting
Add Node in the Actions pane of the Failover Cluster Management console, and then you’ll be
prompted to provide node information in the Add Node Wizard.

Pause a node. Pausing a cluster node prevents resources from failing over or moving to that node. You typically pause a node when it's undergoing maintenance or troubleshooting.

Evict a node. Evicting a cluster node is an irreversible process, and you should only use it when a node is damaged beyond repair or isn't needed anymore. If you want an evicted node to rejoin the cluster, you must add it back by following the procedure for adding a node; for example, you can repair or rebuild a damaged node and then use the Add Node Wizard to add it back to the cluster.
Each of these configuration actions is available in the Actions pane of the Failover Cluster
Management console and in Windows PowerShell.
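For reference, a minimal sketch of the corresponding Windows PowerShell cmdlets, assuming a hypothetical node name:
Add-ClusterNode -Name LON-SVR4             # add a node to the cluster
Suspend-ClusterNode -Name LON-SVR4 -Drain  # pause the node and drain its roles
Resume-ClusterNode -Name LON-SVR4          # resume the paused node
Remove-ClusterNode -Name LON-SVR4          # evict the node from the cluster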
Configure failover and failback
If you want to switch responsibility for providing resource access between nodes, use failover. Failover occurs when an administrator moves resources to another node, or when a node experiences unplanned downtime because of hardware failure or other issues. Additionally, failover can occur when a service stops working on an active node.
There are three steps in a failover attempt:
1. All of an instance’s resources are taken offline by the Cluster service, in order of the instance’s
dependency hierarchy, meaning dependent resources are taken offline first and then the
resources on which they depend. An example is a scenario in which an application depends on
a physical disk resource. The Cluster service takes the application offline first, which enables
the application to write changes to the disk before the disk is taken offline.
2. Once all resources are offline, the Cluster service tries to move the instance to the node listed
next on the instance’s list of preferred owners.
3. When the Cluster service moves an instance to another node successfully, it then tries to bring
all resources online, beginning with the lowest part of the dependency hierarchy. Failover is
considered successful when all resources are online on the new node.
If there are instances that were hosted initially on an offline node, the Cluster service can fail them
back when the node becomes active again. When the Cluster service fails back an instance, it
follows the same procedures that it performs during failover, meaning it takes all of the instance’s
resources offline, moves the instance, and then brings all of the instance’s resources online again.
You can configure failover settings such as preferred owners and failback options so that you can
control how a cluster responds when roles or services fail. You can configure these settings when
you open Properties for the clustered service or application. For each role in a cluster, you can
individually set preferred owners, or you can select multiple preferred owners and place them in
any order. Selecting preferred owners provides more control over what node a particular role fails
over to and actively runs on.
Each role for failover and failback has settings that you can change. Failover settings can control
how many times a cluster can try restarting a role in a particular amount of time. In Windows
Server, the default is to allow only one failure every six hours. You can set the failback setting to
Prevent Failback, which is the default, or Allow Failback. When allowing failback, you can set the role to fail back immediately or to fail back only during certain hours.
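The same settings are exposed as cluster group properties in Windows PowerShell. The following is a hedged sketch, assuming the hypothetical HA-FS role used earlier:
Set-ClusterOwnerNode -Group HA-FS -Owners LON-SVR2, LON-SVR3   # preferred owners, in order
$group = Get-ClusterGroup -Name HA-FS
$group.FailoverThreshold = 1   # maximum failures allowed...
$group.FailoverPeriod = 6      # ...within this many hours
$group.AutoFailbackType = 1    # 0 = prevent failback (default); 1 = allow failback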
Configure and manage cluster storage
Failover clustering uses different storage configurations depending on the cluster role that you
deploy. Because storage is a critical cluster component, you need to understand storage
management tasks.
In failover clustering, storage-configuration tasks include:
 Adding storage spaces. To add storage spaces, you need to configure storage spaces first. After you configure storage spaces, you must perform the following steps to create clustered storage spaces:
o In the Failover Cluster Manager console, expand Cluster Name, expand Storage, select Pools, and then select New Storage Pool.
o Follow the wizard instructions to include physical disks in the storage pool. You'll need at least three physical disks for each failover cluster.
o As you proceed through the wizard's steps, you must choose resiliency options and virtual disk size.
 Adding a disk. Use the Failover Cluster Manager console to add a disk by performing the following steps:
o In the Failover Cluster Manager, select Manage a Cluster, and then select the cluster to which you want to add a disk.
o Select Storage, select Add a disk, and then add the disks from the storage system.
o If you need to add a disk to a CSV, you should follow the procedure for adding a disk, but then also add the disk to the CSV on the cluster.
 Taking a disk offline. In some scenarios, such as for maintenance, you might need to take a cluster disk offline by performing the following step:
o In the Failover Cluster Manager console, select the appropriate disk, and then select Take Offline.
 Bringing a disk online. After you complete maintenance on the disk, you should bring the disk online by performing the following step:
o In the Failover Cluster Manager console, select the appropriate disk, and then select Bring Online.
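For comparison, a minimal Windows PowerShell sketch of the disk tasks; the disk names are hypothetical:
Get-ClusterAvailableDisk | Add-ClusterDisk       # add all eligible disks to the cluster
Add-ClusterSharedVolume -Name "Cluster Disk 2"   # add a clustered disk to a CSV
Stop-ClusterResource -Name "Cluster Disk 3"      # take a disk offline for maintenance
Start-ClusterResource -Name "Cluster Disk 3"     # bring the disk back online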
Configure networking
Vital to every cluster’s implementation are networking and network adapters, and you can’t
configure a cluster without establishing the networks that the cluster will use.
Each network that you assign to a failover cluster can have one of the following three roles:

Private network. Used exclusively for internal cluster communication. On this network, cluster nodes exchange heartbeats and scan for other nodes. All traffic is authenticated and encrypted.

Public network. A public network provides clients with access to cluster resources. With this
network, clients access a service or application that you made highly available.

Public-and-private network. This type of network, also known as a mixed network, serves for
both internal cluster communication and for clients. You use this type of network when you
don’t have enough resources to separate client communication and inter-cluster
communication.
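In Windows PowerShell, these roles map to the Role property of a cluster network object. The following sketch uses a hypothetical network name; 0 assigns no cluster role (for example, an iSCSI network), 1 is cluster-only (private), and 3 is cluster-and-client (mixed):
(Get-ClusterNetwork -Name "Cluster Network 1").Role = 1   # dedicate this network to cluster traffic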
Besides these types of networks, you’ll also need a separate network for storage access. The
network type will depend on the storage that you’ve deployed.
When you configure networks in failover clusters, you also must dedicate a network to connect to
the shared storage. If you use iSCSI for your shared storage connection, an IP-based Ethernet
communications network is used. However, don't utilize this network for node or client communication. Sharing the iSCSI network in that way might result in contention and latency
issues for users and the resource that the cluster provides.
You can use private and public networks for both client and node communications, and it’s ideal if
you have an isolated, dedicated network for private node communication, similar to how you use a
separate Ethernet network for iSCSI to avoid resource bottlenecks and contention issues. The
public network must be configured to enable client connections to the failover cluster. Additionally,
while you can specify a public network to provide backup for a private network, it’s better to define
alternative networks for the primary private and public networks or team the network adapters that
you use for these networks.
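
You can review and adjust these network roles from Windows PowerShell with the Get-ClusterNetwork cmdlet. A minimal sketch; the network name is hypothetical:

# List each cluster network with its role and subnet
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask

# Role values: 0 = None (storage-only), 1 = Cluster only (private),
# 3 = ClusterAndClient (public-and-private/mixed)
# Dedicate one network to internal cluster traffic only
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 1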
Failover clusters include the following networking features:

• The nodes utilize User Datagram Protocol (UDP) unicast to transmit and receive heartbeats, and messages are sent on port 3343. Note that they don't use UDP broadcast, which legacy clusters used.
• You can have clustered servers on different IP subnets, which reduces the difficulty of establishing multisite clusters.
• The Failover Cluster Virtual Adapter, a hidden device, is added to each node when you install the failover clustering feature. The installation process assigns the adapter a media access control (MAC) address that's based on the MAC address of the node's first enumerated physical network adapter.
• Failover clusters fully support IPv6 for node-to-node and node-to-client communication.
• You can use DHCP to assign IP addresses, or you can assign static IP addresses to all the cluster's nodes. However, the Validate a Configuration Wizard reports an error if some nodes use static IP addresses while you configure others to use DHCP. How the cluster IP address resources are obtained is based on the configuration of the network adapter that supports the cluster network.
Configure quorum options
The cluster quorum is critical for the failover cluster to work. By using a quorum, a cluster
determines if it has enough online members to continue serving its clients. As you learned earlier
in this module, you can configure quorum modes in several different ways, depending on the
specific cluster configuration or usage.
In Windows Server 2022, quorum mode can also be changed even after you create a cluster and
put it in production. You just need to make sure that the new quorum mode is appropriate for your
usage scenario. To change your cluster’s quorum configuration, use the Configure Cluster Quorum
Wizard or the failover cluster Windows PowerShell cmdlets.
When you change the quorum mode by using the Configure Cluster Quorum Wizard, you're presented with three options:

• Use typical settings. If you choose this option, the cluster assigns a vote to each node participating in the cluster and dynamically manages node votes to maintain the quorum. If a shared disk is available, a cluster disk is configured as a witness. For most scenarios, we recommend this option because the system takes care of most of a cluster's important components.
• Add or change the quorum witness. With this option, you can reconfigure settings for a cluster witness. You can configure the witness to be a file share or a witness disk. Depending on your selection, the cluster automatically assigns a vote to each node or witness and dynamically manages the node votes.
• Advanced quorum configuration and witness selection. This option provides the most flexibility for a cluster quorum configuration, but you must take care to configure the available settings properly. If you select this option, you can add or remove votes for cluster nodes and the witness, and choose whether the quorum is maintained dynamically. (A Windows PowerShell alternative is sketched after the following note.)
Note: There are certain scenarios, such as multisite clusters, where it makes sense
to remove voting rights from one or more cluster nodes. For example, if you have a multisite
cluster, you can remove votes from the backup site’s nodes, so they don’t affect quorum
calculations. We recommend doing this only when you need to use manual failover
across sites.
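
You can make the same quorum changes from Windows PowerShell with the Set-ClusterQuorum cmdlet. A minimal sketch; the disk and share names are hypothetical:

# Node majority with a disk witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

# Node majority with a file share witness
Set-ClusterQuorum -NodeAndFileShareMajority "\\FileServer01\Witness"

# Review the resulting quorum configuration
Get-ClusterQuorum | Format-List *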
Demonstration: Configure the quorum
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lesson 3: Maintain a failover cluster
After you have your cluster infrastructure running, you should establish monitoring procedures to prevent failures. Additionally, you should have backup and restore procedures for the cluster configuration. In Windows Server 2022, CAU allows you to update cluster nodes without downtime. In this lesson, you will learn how to monitor, back up and restore, and update cluster nodes.

By completing this lesson, you'll achieve the knowledge and skills to describe how to:

• Monitor failover clusters.
• Back up and restore failover cluster configurations.
• Maintain failover clusters.
• Manage cluster-network heartbeat traffic.
• Perform cluster-aware updating.
Monitor failover clusters
You can use several tools for monitoring failover clusters, including standard Windows Server OS tools, such as Event Viewer and the Performance and Reliability Monitor snap-in, to review cluster event logs and performance metrics, and the Tracerpt.exe tool to export data for analysis. Additionally, you can troubleshoot issues with hardware and cluster configuration by using the Multipurpose Internet Mail Extensions Hypertext Markup Language (MHTML)-formatted cluster configuration reports and the Validate a Configuration Wizard.
Event Viewer
If problems arise in a cluster, you can use Event Viewer to search for events related to failover clustering. Information-level events appear in the Failover Clustering Operational log, which you can access in Event Viewer under the Applications and Services Logs\Microsoft\Windows folder. Informational-level events typically refer to common cluster operations, such as cluster nodes leaving or joining a cluster or resources going offline or coming online.
Note that Windows Server doesn’t replicate event logs among nodes. However, the Failover Cluster
Manager console has a Cluster Events option that you can use to access and filter events across
all cluster nodes. This feature is helpful in correlating events across cluster nodes.
The Failover Cluster Manager console provides a Recent Cluster Events option, which queries all
error and warning events over a 24-hour period from all the cluster nodes.
Additionally, Event Viewer has several other logs that you can review, such as the Debug and Analytic logs. Display these logs by selecting the Show Analytic and Debug Logs option on the View menu.
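
If you'd rather query these events from a script, you can read the same channel with the Get-WinEvent cmdlet. A minimal sketch, assuming the standard channel name:

# The 50 most recent events from the local node's cluster operational channel,
# filtered to critical (1), error (2), and warning (3) levels
Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 50 |
    Where-Object { $_.Level -le 3 } |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize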
Event tracing for Windows
Event tracing for Windows is a kernel component that's available from early in startup through shutdown. It allows fast tracing and delivery of events, but only basic in-process event filtering based on event attributes.

The event trace log provides detailed data about failover-cluster actions. Use the Tracerpt.exe tool to access and review this data. Because Tracerpt.exe parses event trace logs only on the node on which it runs, you should collect the individual logs from every node in a central location. To transform the XML file into a text file, or into an HTML file that you can open in Microsoft Edge or Internet Explorer, you can parse the XML-based file by using the Microsoft XSL parsing command-prompt tool Msxsl.exe and an XSL style sheet.
The Performance and Reliability Monitor snap-in
The Performance and Reliability Monitor snap-in checks failover clusters to:

• Monitor the performance baseline of each node's application performance. Review how an application is performing by reviewing and trending specific data about the system resources that each node is using.
• Monitor the performance baseline of each node's application failures and stability. Determine specifically when applications fail and match those failures with other node events.
• Modify trace log settings. Start, stop, and adjust trace logs, including how big they are and where they're located.
Back up and restore failover-cluster configuration
Configuring clusters can be a time-consuming and detail-oriented process. Because of that, it’s
important that you can back up and restore cluster configuration. You can use Windows Server
Backup or a non-Microsoft backup tool to perform a backup and restore of cluster configurations.
Back up a cluster
When you back up your cluster configuration, it's important to remember that you must:

• Test your backup and recovery process before you put a cluster into production.
• Add the Windows Server Backup feature, if necessary, by using Server Manager.

Windows Server Backup is the built-in backup and recovery software for Windows Server 2022. To back up data successfully, you must:

• Ensure the cluster is running and has a quorum.
• Back up all clustered applications. However, you also must have a backup plan for the resources and configuration outside the cluster configuration, such as SQL databases or Hyper-V VMs.
• Make the disks that store data available to the backup software. This applies only if you want to back up application data. Do this by running the backup software from the cluster node that owns the disk resource or by performing a backup over the network of clustered resources.
• Remember that the Cluster service monitors which cluster configuration is the most recent and then replicates it to all cluster nodes. Additionally, if there's a witness disk in the cluster, the Cluster service replicates that configuration to it.
Restore a cluster
You can restore a cluster by using two methods:

• Nonauthoritative restore. Use this method if a single cluster node is damaged but the rest of the cluster is performing correctly. When you conduct a nonauthoritative restore, you're restoring system-recovery (system state) information to the damaged node. When you restart it, it joins the cluster and automatically receives the latest cluster configuration from the other nodes.
• Authoritative restore. Use this method to roll back the cluster configuration on all nodes, for example, when an administrator inadvertently removes clustered resources or modifies cluster settings and you must revert the cluster to a previous point in time. Perform an authoritative restore by stopping the Cluster service on each node, and then use the command-line Windows Server Backup interface to conduct a system recovery (system state) on a single node. After the restored node restarts the Cluster service, the remaining cluster nodes can also start the Cluster service. (A command-line sketch follows this list.)
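
The command-line Windows Server Backup interface mentioned above is wbadmin. The following is a minimal sketch of the backup and authoritative-restore commands; the backup target and version identifier are hypothetical:

# Back up the system state, which includes the cluster configuration
wbadmin start systemstatebackup -backupTarget:E:

# List available backups to find the version identifier
wbadmin get versions

# On one stopped node, authoritatively restore the system state
wbadmin start systemstaterecovery -version:01/01/2024-09:00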
Manage and troubleshoot failover clusters
There are several failover cluster management tasks that you can perform after you create a cluster, including adding and removing cluster nodes, changing quorum settings, and the following:

• Managing cluster nodes. Stop or pause the Cluster service for each node in a cluster, start a Remote Desktop session to the node, or evict a node from the cluster. For example, you might drain nodes in the cluster so that you can perform maintenance or install updates. This functionality is part of the infrastructure that enables CAU for patching cluster nodes.
• Managing cluster networks. Add or remove cluster networks and set up networks just for intercluster communication.
• Managing permissions. Delegate rights to administer a cluster.
• Configuring cluster quorum settings. Specify how a quorum is reached and who has voting rights.
• Migrating services and applications to a cluster. Implement existing services and configure them for high availability.
• Configuring new services and applications to work in a cluster. Implement new services.
• Removing a cluster. Perform this task before you remove or move a service to a different cluster.

Use the Failover Cluster Management console to perform these management tasks.
To troubleshoot a failover cluster, remember to:

• Use the Validate a Configuration Wizard to determine which configuration issues might cause problems for the cluster.
• Monitor cluster events and trace logs so you can determine which application or hardware issues could cause the cluster to become unstable.
• Audit hardware events and logs to help pinpoint specific hardware components that might cause an unstable cluster.
• Review SAN components, switches, adapters, and storage controllers to help identify issues.

When troubleshooting failover clusters, remember to:

• Collect and document a problem's symptoms so you can determine the possible issue.
• Identify a problem's scope, which will help you determine the potential cause, what other components it might impact, and what effect it might have on applications and clients.
• Collect information so that you can understand and pinpoint the possible problem accurately. After developing a list of potential issues, prioritize them by probability or by a repair's potential impact. If you can't identify the problem, attempt to recreate it.
• Create a repair schedule. For example, if only a small group of users is affected by the issue, delay repairs until off-peak hours and schedule the downtime.
• Complete and test each repair individually so you can pinpoint the correct fix.
Manage cluster-network heartbeat traffic
Each node in a cluster uses heartbeat traffic to check for other nodes’ presence and health. If one
node cannot communicate with another over a network, the communicating nodes will initiate a
recovery action to bring applications, services, and data online.
Windows Failover Clustering health-monitoring configuration has default settings that are optimized for different failure scenarios. It's possible to modify those settings to meet the requirements of a specific configuration or high-availability scenario. There are two types of network monitoring in failover-clustering scenarios:

• Aggressive monitoring. This type of monitoring provides the fastest detection of server failure and fast recovery, which results in the highest level of availability. However, it initiates a failover procedure even if a short, transient network outage occurs, which doesn't always indicate that a server has failed.
• Relaxed monitoring. This type of monitoring provides more tolerance in network-failure detection, which means that in some cases of a very short network outage, a cluster's nodes don't initiate a failover procedure. However, in this scenario, if a node fails, it might take longer for other nodes to initiate a failover procedure.
Parameter settings that define network health monitoring include:

• Delay. This is the frequency of cluster heartbeats, that is, how long the cluster waits before sending the next heartbeat. Delay parameters that you can configure include:
  o SameSubnetDelay. Controls the delay between heartbeats, measured in milliseconds, for nodes located on the same subnet.
  o CrossSubnetDelay. Controls the time interval, measured in milliseconds, that the cluster network driver waits between sending Cluster Service heartbeats across subnets.
  o CrossSiteDelay. Controls the delay between heartbeats, measured in milliseconds, for nodes located in different sites.
• Threshold. This is the number of missed heartbeats before the cluster initiates a failover procedure. Threshold parameters that you can configure include:
  o SameSubnetThreshold. Controls how many heartbeats can be missed by nodes located on the same subnet before the network route is declared unreachable.
  o CrossSubnetThreshold. Controls how many Cluster Service heartbeats can be missed by nodes located in different subnets before a node in one site determines that the Cluster Service on a node in a different site has stopped responding.
  o CrossSiteThreshold. Controls the number of missed heartbeats between nodes in different sites before a node in one site considers the network interface on a node in a different site to be down.

For example, if you configure the CrossSubnetDelay parameter to be 3 seconds (3,000 milliseconds) and CrossSubnetThreshold to be 10 missed heartbeats, the cluster has a total network tolerance of 30 seconds before it initiates a failover procedure.
To review the configuration of your network health monitoring, you can use the Get-Cluster Windows PowerShell cmdlet. For example, to list the CrossSubnet and SameSubnet Delay and Threshold parameters, enter the following at a command prompt, and then press Enter:

Get-Cluster | fl *subnet*

To configure the SameSubnetThreshold parameter with a value of 10, enter the following at a command prompt, and then press Enter:

(Get-Cluster).SameSubnetThreshold = 10
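
The following short sketch implements the 30-second cross-subnet example described earlier; the values are illustrative, not recommendations:

# 3,000 ms between cross-subnet heartbeats, 10 missed heartbeats tolerated:
# 3,000 ms x 10 = 30 seconds of tolerance before failover begins
(Get-Cluster).CrossSubnetDelay = 3000
(Get-Cluster).CrossSubnetThreshold = 10

# Verify the change
Get-Cluster | fl *subnet*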
What is Cluster-Aware Updating?
You must carefully apply OS updates to cluster nodes. To ensure zero downtime for a clustered
role, you must manually update cluster nodes one at a time and manually move resources from
the node you’re updating to another one. This procedure can be very time-intensive, but to make
it easier and reduce cluster downtime, use the Cluster-Aware Updating (CAU) feature to
automatically update cluster nodes.
CAU allows you to update cluster nodes automatically, typically with no availability loss. CAU
transparently takes each cluster node offline, installs updates and dependent updates, and then
performs a restart if applicable. It then brings the node back online and begins updating the next
cluster node.
For many clustered roles, this automatic update process triggers a planned failover, which can result in a transient service interruption for connected clients. However, for continuously available workloads, such as Hyper-V with the Live Migration feature or a file server with SMB Transparent Failover, there's no impact to service availability when CAU performs cluster updates.
The process of updating cluster nodes forms the basis for CAU, and it offers two modes through which it can perform a comprehensive cluster-update operation (a Windows PowerShell sketch of both modes follows this list):

• Remote-updating mode. Uses a computer that's running Windows Server or a Windows client as an orchestrator. The cluster administrative tools must be installed on the computer that you want to use as the CAU orchestrator, and it can't be a member of the cluster that it's updating. To perform an update with this mode, trigger on-demand updates by using a default or custom Updating Run profile.
• Self-updating mode. Uses the CAU clustered role, which must be configured as a workload on the failover cluster that's being updated. Additionally, you must define a schedule for updates. In this mode, CAU doesn't have a dedicated orchestrator computer. The cluster updates at the times you define by using a default or custom Updating Run profile: the CAU orchestrator begins on the node that owns the CAU clustered role and then sequentially updates each node. In this mode, CAU can use a fully automated, end-to-end process to update a failover cluster, or you can trigger updates on demand or use remote updates. To access summary information about an update that's occurring, connect to the cluster and run the Get-CauRun Windows PowerShell cmdlet.
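
A minimal sketch of both modes, using the CAU Windows PowerShell cmdlets; the cluster name and schedule are hypothetical:

# Remote-updating mode: start one updating run from an orchestrator computer
Invoke-CauRun -ClusterName "CLUSTER1" -Force

# Self-updating mode: add the CAU clustered role with a recurring schedule
Add-CauClusterRole -ClusterName "CLUSTER1" -DaysOfWeek Sunday -WeeksOfMonth 1,3 -Force

# Check the status of an updating run that's in progress
Get-CauRun -ClusterName "CLUSTER1"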
Demonstration: Configure CAU
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lesson 4: Troubleshoot a failover cluster
Failover clustering provides high availability for business-critical applications. Therefore, you need
to learn how to troubleshoot the failover-clustering feature in Windows Server 2022 so that you
can help prevent potential downtime of business-critical applications.
By completing this lesson, you'll achieve the knowledge and skills to:

• Describe how to detect communication issues.
• Explain how to repair the cluster name object in AD DS.
• Describe how to start a cluster with no quorum.
• Describe how to review a Cluster.log file.
• Describe how to monitor performance with failover clustering.
• Describe how to use Event Viewer with failover clustering.
• Explain how to interpret the output of Windows PowerShell troubleshooting cmdlets.
Communication issues overview
A network is one of the most critical resources that cluster nodes use to communicate and to determine other nodes' health and availability. A reliable, high-performance network helps ensure healthy cluster applications. Therefore, the appropriate network configuration of cluster nodes, network infrastructure, and network devices is essential for healthy applications that run in a failover cluster.

Potential threats to failover clustering include:

• Network latency. This is where network communication between cluster nodes is delayed enough that nodes appear unavailable, which can result in failovers or loss of quorum. While latency rarely appears in local area networks (LANs), it's common between different sites. However, you can request a service-level agreement (SLA) from your network provider that guarantees an acceptable latency level.
• Network failures. This is where network failures result in cluster nodes failing over or losing quorum, regardless of whether all cluster nodes are running correctly. You typically can control network failures, but it can be hard to guarantee that no network failures happen between different sites in a stretch-cluster scenario, particularly if a third party provides network communications. Therefore, you could request an SLA from the applicable network provider that guarantees an appropriate level of network availability.
• Network card drivers. Problems can occur when network adapters on cluster nodes don't have the correct network drivers. The resulting communication issues between cluster nodes can potentially lead to frequent failovers or quorum loss. Therefore, you must test and approve your network adapters and have certified network drivers.
• Firewall rules. Sometimes, failover cluster administrators and networking teams don't communicate clearly about which ports and port numbers failover clustering requires. This can result in a firewall blocking network communication between cluster nodes, which leads to the cluster not working correctly. Additionally, if a networking team reconfigures or replaces a firewall without verifying the ports that must be open, problems can occur.
• Antimalware or intrusion-detection software. It's common for organizations to run security software, such as antimalware and intrusion-detection software, to help guard against security threats. However, security software can block network communication between cluster nodes and cause problems with cluster functionality. It's important that you follow the best practices that security software vendors recommend and review their technical documentation so that you correctly configure security software on cluster nodes.
One method you can use to troubleshoot network-related issues in a cluster is to analyze the
Cluster.log file, which is in the C:\Windows\Cluster\Reports\ folder by default.
You can generate the Cluster.log file on each server by using the Get-ClusterLog cmdlet in Windows PowerShell. The Cluster.log file includes details about cluster objects, such as resources, groups, nodes, networks, network adapters, and volumes. This information is useful when you're trying to troubleshoot cluster issues. In Windows Server 2016 and newer, the Cluster.log file also includes a verbose diagnostic section and data from other event channels, such as the system event log channel.
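
A minimal sketch of generating the log; the time span and destination folder are illustrative:

# Generate Cluster.log for every node, covering the last 15 minutes,
# with local rather than UTC timestamps, into one folder
Get-ClusterLog -TimeSpan 15 -UseLocalTime -Destination "C:\ClusterLogs"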
Repair the cluster name object in AD DS
As you learned earlier, the CNO represents a cluster name resource in AD DS. Among other things, this object changes its associated computer object's password in AD DS, by default, every 30 days. If an administrator mistakenly deletes the CNO, or runs a script that deletes the CNO, the computer object's password won't match the password that's in the cluster database. Because of this password mismatch, the Cluster service cannot sign in to the computer object, which causes the network name to go offline. If the cluster network name isn't online, it can't register in a secure DNS zone, and Kerberos begins generating errors.
In this scenario, you should use the Repair Active Directory Object option in the Failover Cluster
Manager to synchronize your password for the cluster’s computer objects. It’s important to note
that the administrator who’s signed in during the repair process must use their credentials when
they’re resetting the password for the computer object. However, to do this, an administrator must
have Reset Password permissions on the CNO computer object.
Additionally, the CNO manages passwords for all of the cluster's other virtual computer objects (VCOs). If a VCO password doesn't synchronize, the CNO automatically resets the password and performs the repair process, which means you don't have to reset it manually. The automatic repair procedure determines whether the associated VCO computer object exists in AD DS. If it has been deleted, you can run the repair process to recreate the missing computer object. It's important to note, though, that the VCO automatic-recovery process can interfere with certain applications. Because of this, we strongly recommend that you use the AD Recycle Bin feature to recover deleted computer objects, and then use the Repair
function only if the AD Recycle Bin feature doesn’t recover your VCO. However, please note that the
CNO cannot recreate VCO computer accounts if it doesn’t have Create Computer Objects
permissions on the VCO’s organizational unit (OU).
Start a cluster with no quorum
In a failover-clustering configuration, cluster nodes must retain quorum to continue working. If any
failures occur, and the cluster loses quorum, the cluster won’t have enough quorum votes to start.
Therefore, in any failure scenario that includes quorum loss, you should check the cluster-quorum
configuration and perform troubleshooting if the cluster no longer has quorum. You also can verify
quorum information in the Event Viewer system log, where the Event ID 1177 appears.
We recommend that during the cluster-recovery process, you first try to reestablish a quorum and then start the cluster nodes. Certain situations, though, such as a longer power outage in one of the sites that participate in a cluster, make it difficult to reestablish quorum in a specific period. In such cases, you'd have to forcibly bring the cluster online without quorum in the site that has the smaller number of nodes. This enables the nodes to continue working rather than having to wait several days until power can be restored in the site where the majority of the nodes reside.

If you run the Windows PowerShell cmdlet Start-ClusterNode with the –FQ switch, it overrides the cluster quorum configuration and starts the node in ForceQuorum mode. In this mode, you can bring the downed nodes back online and rejoin them to the cluster. When a majority of nodes are back online, the cluster automatically switches from ForceQuorum mode to normal mode, so you don't have to restart it.

The cluster node that forces the cluster to start replicates its configuration to the other nodes. The cluster disregards quorum-configuration settings while operating in ForceQuorum mode until a majority of nodes are back online. After you start the first cluster node in ForceQuorum mode, you must start the other nodes with a setting that prevents quorum by running the Windows PowerShell cmdlet Start-ClusterNode with the –PQ switch. When you start a node with the prevent-quorum setting, the Cluster service joins the existing cluster. If you don't start a node while preventing quorum, the Cluster service creates a new cluster instance, or split cluster. Remember that if you start a cluster in ForceQuorum mode, the remaining nodes must be started with the prevent-quorum setting. (A sketch of both commands follows.)
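
A minimal sketch of the sequence; the node names are hypothetical, and –FQ and –PQ are the abbreviated switches referenced above:

# On the first node, force the cluster to start without quorum
Start-ClusterNode -Name "NODE1" -FQ

# On each remaining node, start while preventing quorum so the node joins
# the forced cluster instead of forming a new (split) cluster
Start-ClusterNode -Name "NODE2" -PQ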
Demonstration: Review the Cluster.log file
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Monitor performance with failover clustering
Monitoring cluster parameters helps administrators learn which system resources a cluster is
using and provides detailed information on cluster scalability and performance. Furthermore,
monitoring helps with troubleshooting in many failure scenarios.
Some of the network-performance counters for failover clustering that you should monitor by using Performance Monitor include the following (a counter-discovery sketch follows this list):

• Cluster Network Messages. These describe internode communication. Examples include Bytes Received, Bytes Sent, Messages Received, and Messages Sent.
• Cluster Network Reconnections. These describe attempts made to reestablish connections between cluster nodes. Examples include Normal Message Queue Length, which should have a value of 0.
• Global Update Manager. This component establishes a consistent state across cluster nodes. For example, Database Update Messages records changes made to the cluster database, so you can review how many changes were performed on the cluster database.
• Database. This component monitors events when a cluster writes configuration data into memory and transaction logs, and then into the database. For example, the Flushes parameter describes the number of cluster changes that have been flushed from memory to the database.
• Resource Control Manager. This component allows you to monitor the cluster's resource state and manage resource failures. For example:
  o Groups Online. Provides information about how many groups are online on the node.
  o Cluster Resources/Resource Failure. Provides information about how many times the resource has failed.
  o Resources Online. Provides information about how many resources are online on this node.
• APIs. Application programming interface (API) counters are triggered by external calls. Examples include Cluster API Calls, Node API Calls, Network API Calls, Cluster API Handles, Node API Handles, and Network API Handles.
• Cluster Shared Volumes. Cluster Shared Volumes is a storage architecture that's optimized for Hyper-V VMs. Examples include IO Read Bytes, IO Reads, IO Write Bytes, and IO Writes.
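
Counter names can vary by OS version, so it's worth enumerating the cluster-related counter sets on a node before building a monitoring configuration. A minimal sketch; the CSV set name is an example and may differ on your nodes:

# List every performance counter set whose name mentions "cluster"
Get-Counter -ListSet "*cluster*" | Sort-Object CounterSetName |
    Format-Table CounterSetName, Description -AutoSize

# Review the first few counter paths in one set
(Get-Counter -ListSet "Cluster CSV File System").Paths | Select-Object -First 10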
Windows PowerShell troubleshooting cmdlets
In organizations that have clusters with a large number of nodes, or that have many different types
of clusters, administration becomes more challenging. Because of this, it’s more efficient for
administrators to use Windows PowerShell to automate the creation, management, and
troubleshooting of clusters.
Some of the more common cmdlets for managing and troubleshooting failover clustering include the following (a usage sketch follows this list):

• Get-Cluster. Returns information about a domain's failover clusters.
• Get-ClusterAccess. Returns information about permissions that control access to a failover cluster.
• Get-ClusterDiagnostics. Returns diagnostics for a cluster that contains VMs.
• Get-ClusterGroup. Returns information about a failover cluster's clustered roles (resource groups).
• Get-ClusterLog. Creates a log file for all nodes in a failover cluster or a node that you specify.
• Get-ClusterNetwork. Returns information about one or more networks in a failover cluster.
• Get-ClusterResourceDependencyReport. Generates a report that specifies a failover cluster's dependencies between resources.
• Get-ClusterVMMonitoredItem. Returns the list of services and events being monitored in the VM.
• Test-Cluster. Runs validation tests for failover-cluster hardware and settings.
• Test-ClusterResourceFailure. Simulates a failure of a cluster resource.
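
A short usage sketch of the validation and failure-simulation cmdlets; the node and resource names are hypothetical:

# Validate a prospective two-node cluster configuration
Test-Cluster -Node "NODE1","NODE2"

# Simulate a resource failure to verify failover policies
Test-ClusterResourceFailure -Name "Cluster Disk 2"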
Lab 9: Implement a failover cluster
Please refer to the online lab to supplement your learning experience with exercises.
Lab 10: Manage a failover cluster
Please refer to the online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions:
1. What information do you need for planning a failover cluster implementation?
2. After running the Validate a Configuration Wizard, how can you resolve the network
communication single point of failure?
3. In what situations might it be important to enable failback for a clustered application during a
specific time frame?
Note: To find the answers, refer to the Knowledge check slides in the PowerPoint
presentation.
Learn more
For more information, refer to:

• What's new in Failover Clustering
• Understanding cluster and pool quorum
• Failover clustering hardware requirements and storage options
Module 9: Implement failover
clustering for Hyper-V virtual
machines
When you implement server virtualization, you can provide high availability both for applications or services that have built-in high-availability functionality and for those that don't provide high availability in any other way. The Windows Server 2022 Hyper-V technology and failover clustering offer several ways you can configure high availability. In this module, you'll learn how to set up failover clustering in a Hyper-V environment so that you can provide high availability for a virtual environment.

By completing this module, you'll achieve the knowledge and skills to:

• Explain how Hyper-V in Windows Server integrates with failover clustering.
• Implement and maintain Hyper-V virtual machines (VMs) on failover clusters.
• Describe key features for VMs in a clustered environment.
Lesson 1: Overview of integrating Hyper-V in
Windows Server with failover clustering
If you need to ensure your applications and services are highly available, particularly for VMs in a
Hyper-V environment, you should configure the failover clustering feature on your Hyper-V host
computers. This lesson explains the high-availability options for VMs in Hyper-V, how failover
clustering works, and how to correctly set up Hyper-V failover clustering.
By completing this lesson, you'll achieve the knowledge and skills to describe:

• The options for making applications and services highly available.
• How failover clustering works with Hyper-V nodes.
• Failover clustering with Windows Server Hyper-V features.
• Best practices for implementing high availability in a virtual environment.
Options for making applications and services highly
available
There are several options if you want to establish high availability for VMs. Because failover
clustering supports the Hyper-V role, you can implement VMs as a clustered role, which is known
as host clustering. Also, you can implement failover clustering inside VMs just as you would with
physical hosts, which is known as guest clustering. Additionally, you could use Network Load
Balancing (NLB) inside VMs to help protect stateless applications running inside them.
Host clustering
As its name implies, with host clustering you configure a failover cluster with Hyper-V host servers
as cluster nodes. In this scenario, you configure the VM as a highly available resource in a failover
cluster. Failover protection is then achieved at the host-server (Hyper-V server) level. As a result,
VMs and applications that are running within the VMs don’t have to be cluster aware. A VM, with all
apps, becomes highly available, but you don’t implement any high-availability technology inside
VMs. Using this method means you don’t have to worry if a critical app running inside a VM is
supported by failover clustering or not.
For example, a print server role is a non-cluster-aware application. In the case of a cluster node’s
failure, which in this case is the Hyper-V host, the secondary host node takes control and restarts
or resumes the VM as quickly as possible. You can also move the VM from one node in the cluster
to another in a controlled manner. For example, you could move the VM from one node to another,
while patching the host management operating system (OS).
In host clustering deployment, cluster nodes are usually connected to shared storage where VM
files are located. Only one node (Hyper-V host) controls a VM, but other nodes in a cluster can take
over ownership and control very quickly in the case of failure. A VM in such a cluster usually
experiences minimal to zero downtime.
Guest clustering
Guest failover clustering is implemented between VMs running on single or different hosts. This
scenario is configured similarly to physical-server failover clustering, except that the cluster nodes
are VMs. In this scenario, after you create two or more VMs, you enable failover clustering and
configure these VMs as cluster nodes. After that, you configure the required server role as a
clustered role. When deploying guest clustering, you can locate the VMs that are part of a cluster
on a single Hyper-V host. This configuration can be quick and cost-effective in a test or staging
environment, but you need to be aware that in such a scenario the Hyper-V host becomes a single
point of failure. Even if you deploy failover clustering between two or more VMs, if the Hyper-V host
where VMs run fails, then all VMs will also fail.
Because of this, for production environments, you should provide an additional layer of protection
for applications or services that need to be highly available. You can achieve this by deploying the
VMs on separate failover clustering-enabled Hyper-V host computers. When you implement failover
clustering at both the host and VM levels, the resource can restart regardless of whether the node
that fails is a VM or a host. It’s considered an optimal high-availability configuration for VMs
running mission-critical applications in a production environment.
You should consider several factors when you implement guest clustering:

• The application or service must be failover cluster aware. This includes any of the Windows Server roles that are cluster aware and any cluster-aware applications, such as clustered Microsoft SQL Server and Microsoft Exchange Server.
• Hyper-V VMs can use Fibre Channel-based connections to shared storage. Alternatively, you can implement Internet Small Computer System Interface (iSCSI) connections from the VMs to the shared storage. You can also use the shared virtual hard disk (VHD) feature to provide shared storage for VMs.
To enable protection on the network layer, you should deploy multiple network adapters on the
host computers and the VMs. You should also dedicate a private network between the hosts and
a network connection that the client computers use.
Network Load Balancing
NLB is a high-availability technology for stateless applications that don’t require shared storage
and aren’t supported with failover clustering. When you deploy NLB in VMs, it works in the same
manner that it works with physical hosts. It distributes IP traffic to multiple instances of a web-based service, such as a web server that's running on a host within the NLB cluster. NLB
transparently distributes client requests among the hosts, and it enables the clients to access the
cluster by using a virtual host name or a virtual IP address. From the client computer’s perspective,
the cluster is a single server that answers these client requests. As enterprise traffic increases,
you can add another server to the cluster.
Therefore, NLB is an appropriate solution for resources that don’t have to accommodate exclusive
read or write requests. Examples of NLB-appropriate applications include web-based front ends to
database applications or Exchange Server Client Access Servers.
Note: You can’t use NLB to make VMs highly available. You can only use it within VMs for
applications running on VMs, like guest clustering.
When you configure an NLB cluster on VMs, you must install and configure the application on all
VMs that will participate in the NLB cluster. After you configure the application, you install the NLB
feature in Windows Server within each VM’s guest OS and then configure an NLB cluster for the
application. Like a guest cluster across hosts, the NLB resource typically benefits from overall
increased I/O performance when the VM nodes are located on different Hyper-V hosts.
How does a failover cluster work with Hyper-V nodes?
When you configure VMs as highly available resources, the failover cluster treats the VMs like any
other application or service. If failure occurs on one cluster node, failover clustering automatically
acts to restore access to the VM on another host in the cluster. Only one node at a time can run
the VM. However, it’s possible to move the VM to any other node in the same cluster as part of a
planned migration or planned failover. In this scenario, you change which node must provide
access to cluster resources. Typically, this occurs when an administrator makes this change by
moving resources to another node so they can conduct maintenance or other work or when
unplanned downtime of one node occurs because of hardware failure or other reasons.
The failover process, when VMs are configured as a cluster resource, consists of the following
steps:
1. The node where the VM is currently running owns the clustered instance of the VM. It also
controls access to the shared bus or iSCSI connection to the cluster storage and has
ownership of any disks or logical unit numbers (LUNs) that you assign to the VM. As you
learned before, heartbeats are used to check the health of other nodes in a cluster.
2. Failover starts when the node that’s hosting the VM doesn’t send regular heartbeat signals to
the other nodes. By default, this happens when five consecutive heartbeats are missed (that is, after 5,000 milliseconds elapse). Failover usually occurs because of a node failure or network failure. When
node failure is detected, one of the other nodes in the cluster begins taking over the resources
that the VMs use. Values defined in Preferred Owner and Possible Owners properties are
considered during this process. By default, all nodes are members of Possible Owners. The
Preferred Owner property specifies the hierarchy of ownership if there’s more than one
possible failover node for a resource. If you remove a node as a Possible Owner, this excludes
it from taking over the resource in a failure situation. For example, in a four-node cluster, you
can configure only two nodes as Preferred Owners. In a failover event, the resource might still
be taken over by the third node if neither of the Preferred Owners is online. Although you didn’t
configure the fourth node as a Preferred Owner, if it remains a member of Possible Owners, the
failover cluster uses it to restore access to the resource, if necessary. Resources are brought
online in order of dependency. For example, if the VM references an iSCSI LUN, the cluster restores access to the appropriate host bus adapters (HBAs), the network (or networks), and the LUNs, in that order. Failover is complete when all the resources are online on the new node. For clients
interacting with the resource, there’s a short service interruption, which most users might not
notice.
3. You also can configure the cluster service to fail back to the offline node after it becomes
active again. When the cluster service fails back, it uses the same procedures that it performs
during failover. This means that the cluster service takes all the resources associated with that
instance offline, moves the instance, and then brings all the resources in the instance back
online.
Failover clustering features specifically for Hyper-V
Hyper-V with failover clustering was introduced in Windows Server 2008 and, since then, there
have been several improvements. Windows Server 2016 and newer server operating systems
continue to build on the capabilities of Hyper-V with failover clustering by providing updated
features and improvements in the following areas:

• Maximum nodes and VMs supported. Failover clustering supports up to 64 nodes and 8,000 VMs per cluster (and 1,024 VMs per node).
• File share storage. Windows Server 2022 supports storing VMs on Server Message Block (SMB) file shares in a file server cluster. This provides shared storage that's accessible by multiple clusters and lets you move VMs between clusters without moving the storage. To enable this feature, deploy a file server cluster role and select Scale-Out File Server for application data.
• Shared virtual disk. Windows Server 2012 R2 introduced the ability to use a .vhdx file as a shared virtual disk for guest clusters. Windows Server 2016 and newer support improved features for shared disks and introduced a new disk format, .vhds (VHD Set).
• VM configuration version. Windows Server 2016 and newer build on rolling upgrades by not updating the VM's configuration version automatically. You can now manually update the VM configuration version, which enables a VM to migrate back and forth between Windows Server 2016 and Windows Server 2012 R2 until you've completed rolling upgrades and you're ready to upgrade to the Windows Server 2022 version and use the new features of Windows Server 2022 Hyper-V.
Best practices for implementing high availability in a
virtual environment
After you determine which applications you'll deploy on highly available failover clusters, you need to plan and deploy the failover clustering environment. Apply the following best practices when you implement the failover cluster:

• Plan for failover scenarios. Similar to other failover clustering scenarios, you should ensure that you include the hardware capacity that's required when hosts fail. For example, if you deploy a six-node cluster, you need to determine the number of host failures that you want to accommodate. If you decide that the cluster must sustain the failure of two nodes, then the four remaining nodes must have the capacity to run all the VMs in the cluster. VMs can be quite demanding of hardware resources, so this requires very careful planning.
• Plan the network design for failover clustering. You should always dedicate a fast network connection for internode communication. Additionally, we recommend that it's logically and physically separate from the network segment (or segments) that clients use to communicate with the cluster. This network is usually also used for the transfer of VM memory during a live migration. If you use iSCSI for any VMs, ensure that you also dedicate a network connection to the iSCSI network. The same applies if you use SMB 3.0 shares for VMs.
• Plan the shared storage for failover clustering. For most failover cluster implementations, the shared storage must be highly available. This also applies to VMs as a cluster resource. If the shared storage fails, the VMs will all fail, even if the physical nodes are functional. To ensure storage availability, you need redundant connections to the shared storage and a Redundant Array of Independent Disks (RAID) configuration on the storage device. If you decide to use a shared VHD, ensure that you locate the shared disk on a highly available resource such as a Scale-Out File Server.
• Use the recommended failover cluster quorum mode. For failover clustering in Windows Server 2022, the default is dynamic quorum mode and dynamic witness. You shouldn't modify the default configuration unless you understand the implications of doing so.
• Deploy standardized Hyper-V hosts. To simplify the deployment and management of the failover cluster and Hyper-V nodes, develop a standard server hardware and software platform for all nodes.
• Develop standard management practices. When you deploy multiple VMs in a failover cluster, you increase the risk that a single mistake might shut down a large part of the server deployment. For example, if an administrator accidentally configures the failover cluster incorrectly and the cluster fails, all VMs in the cluster will be offline. To avoid this, develop and thoroughly test standardized instructions for all administrative tasks.
Lesson 2: Implement and maintain Hyper-V
VMs on failover clusters
Implementing highly available VMs is different from implementing other roles in a failover cluster.
Failover clustering in Windows Server 2022 provides many features for Hyper-V clustering and
other tools for VM high-availability management. In this lesson, you’ll learn how to implement
highly available VMs.
By completing this lesson, you'll achieve the knowledge and skills to:

• Describe the components of a Hyper-V cluster.
• Describe the prerequisites for implementing Hyper-V failover clusters.
• Implement Hyper-V VMs on a failover cluster.
• Configure Cluster Shared Volumes (CSVs).
• Configure a shared VHD.
• Implement Scale-Out File Servers for VMs.
• Describe considerations for implementing Hyper-V VMs in a cluster.
• Explain how to maintain and monitor VMs in clusters.
• Implement failover clustering.
Components of Hyper-V clusters
Hyper-V as a role has some specific requirements for cluster components. To run a Hyper-V cluster in production, you need at least two physical hosts. Other supported clustered roles, such as Dynamic Host Configuration Protocol (DHCP) or a file server, allow nodes to be VMs. However, we recommend that Hyper-V nodes be physical servers in most production environments.
Windows Server 2022 allows you to enable nested virtualization, which lets you configure a Hyper-V host inside a guest VM. This allows you to simulate clustering scenarios as you would with physical servers, for example, by using two guest VMs to create a guest cluster with Hyper-V. However, we don't recommend this approach for production environments.
Hyper-V clusters also require that you have physical and virtual networks. Failover clustering
requires a network interface for internal cluster communication and another for clients. We
recommend that you implement a network for storage communication separately, depending on
the type of storage you use. You also need to create virtual networks for clustered VMs. It’s vital
that the same virtual networks are created on all physical hosts that are in a cluster. Failure to do
so can cause a VM to lose network connectivity when moved from one host to another.
Storage is a critical component of VM clustering. VM files are stored on shared cluster storage,
and any storage that Windows Server 2022 failover clustering supports is a good option. We
recommend that you configure storage as a CSV.
VMs are components of a Hyper-V cluster when you implement host clustering, and in Failover
Cluster Manager, it’s possible to implement new highly available VMs or make existing VMs highly
available. However, for either scenario, your VM storage must be on shared storage that both
nodes can access. It’s important to note that not all of your VMs have to be highly available and
that in the Failover Cluster Manager, you can choose which VMs are included in a cluster
configuration.
Prerequisites for implementing Hyper-V failover clusters
To deploy a Hyper-V cluster, you must ensure that you meet the hardware, software, account, and
network-infrastructure requirements. The following sections detail these requirements.
You must have the following hardware for a two-node failover cluster:

• Server hardware. As discussed previously, Hyper-V on Windows Server 2022 requires an x64-based processor, hardware-assisted virtualization, and hardware-enforced Data Execution Prevention (DEP). As a best practice, the servers should have very similar or identical hardware.
Note: Microsoft supports a failover cluster solution only if all the hardware features are marked as Certified for Windows Server. Additionally, the complete configuration (servers, network, and storage) must pass all tests in the Validate This Configuration Wizard, which is part of the Failover Cluster Manager.

• Network adapters. The network adapter hardware, like other features in the failover cluster solution, must be marked as Certified for Windows Server. To provide network redundancy, you can connect cluster nodes to multiple networks. Alternatively, to remove single points of failure, you can connect the nodes to one network that uses the following hardware:
  o Redundant switches
  o Redundant routers
  o Teamed network adapters
  o Any similar hardware

We recommend that you configure multiple physical network adapters on the host computer that you configure as a cluster node. One network adapter should connect to the private network that the inter-host communications use. Make sure that the network adapters are the same in all hosts, have the same driver and firmware versions, and use the same method of assigning IP addresses. Domain Name System (DNS) is also required on the network, and we recommend that you configure it to use dynamic updates.

• Storage adapters. If you use serial-attached SCSI or Fibre Channel, the mass-storage device controllers in all clustered servers should be identical and should use the same firmware version. If you're using iSCSI, you need to dedicate one or more network adapters to the cluster storage for each clustered server. The network adapters that you use to connect to the iSCSI storage target need to be identical, and you need to use Gigabit Ethernet or faster network adapters.
• Storage. Storage that will be used for the cluster needs to be certified for Windows Server 2022. If you use a witness disk in your cluster configuration, the storage must contain at least two separate volumes. One volume serves as the witness disk, and additional volumes contain the VM files that cluster nodes share. Storage considerations and recommendations include the following:
  o Use basic disks, not dynamic disks. Format the disks with the New Technology File System (NTFS).
  o Use the master boot record (MBR) or GUID partition table (GPT) partitioning scheme. Because of the 2-terabyte (TB) limit on MBR disks, we recommend that you use GPT volumes for storing virtual disks.
  o If you use a storage area network (SAN), the miniport driver that the storage uses must work with the Microsoft Storport storage driver.
  o Consider using Microsoft Multipath I/O (MPIO) software. If your SAN uses a highly available network design with redundant components, deploy failover clusters with multiple host bus adapters by using MPIO. This offers the highest level of redundancy and availability.
  o For environments without direct access to SAN or iSCSI storage, consider using shared VHDs.
Regarding software requirements, Hyper-V clusters are no different from other types of clusters.
We discussed these requirements in previous modules.
For Hyper-V clusters, servers in the cluster should be in the same Active Directory Domain Services
(AD DS) domain. The recommended role is a member server; you shouldn’t be putting domain
controllers in a cluster configuration.
It’s important to note that you must sign into a domain with an account that includes administrator
permissions for all cluster servers when you’re first setting up a cluster or adding servers to an
existing cluster. In addition, if the account isn’t a Domain Admins account, the account must have
the Create Computer Objects permission in the domain.
Implement Hyper-V VMs on a failover cluster
To implement failover clustering for Hyper-V, you must complete the following high-level steps (a Windows PowerShell sketch follows the list):
1. Install and configure the required versions of Windows Server 2022. After you complete the
installation, configure the network settings, join the host computers to an AD DS domain, and
then configure the connection to the shared storage.
2. Configure the shared storage. Connect the host servers that will participate as cluster nodes to the shared storage. Then, use Disk Management to create disk partitions on the shared storage.
3. Install the Hyper-V and Failover Clustering features on the host servers. You can use Server
Manager, Windows Admin Center, or Windows PowerShell to do this.
4. Validate the cluster configuration. The Validate This Cluster Wizard checks all the prerequisite
required components for cluster creation and then displays warnings if any don’t meet cluster
requirements. Before you continue, resolve any issues that the Validate This Cluster Wizard
identifies.
5. Create the cluster. You can create a cluster once your configuration passes the Validate This
Cluster Wizard test. You need to designate a name for the cluster you create and an IP
address. AD DS requires that name when you create a computer object, or a cluster name
object (CNO), and then register the IP address in DNS.
Note: You must create a cluster and add available storage to it before you can set up Cluster Shared Volumes (CSVs). If you need to use CSV, configure it before proceeding to the next step.
6. Create a VM on one of the cluster nodes. When you create the VM, ensure that all files
associated with the VM—including both the VHD and VM configuration files—are stored on the
shared storage. You can create and manage VMs in either Hyper-V Manager or Failover Cluster
Manager. We recommend that you use the Failover Cluster Manager console for creating new
VMs, because with this approach, the VM is automatically highly available.
7. Make the VM highly available (existing VMs only). If you created a VM before implementing failover clustering, you must make it highly available manually. To do so, in Failover Cluster Manager, select the option to create a new highly available service or application. Failover Cluster Manager then presents a list of services and applications that can be made highly available. When you select the option to make a VM highly available, you can select the VM that you created on shared storage.
Note: When you make a VM highly available, you'll notice a list of all VMs that are hosted on all cluster nodes, including VMs that aren't stored on the shared storage. If you make a VM that isn't located on shared storage highly available, you receive a warning, but Hyper-V adds the VM to the services and applications list. However, when you try to migrate the VM to a different host, the migration will fail.
8. Test VM failover. After you make the VM highly available, you can migrate the VM to another node in the cluster. You can choose to perform a Quick Migration or a Live Migration. In most cases, you should perform a Live Migration to reduce downtime.
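The following Windows PowerShell sketch shows one way to script steps 3 through 5. It's a minimal example rather than a complete procedure; the node names (NODE1, NODE2), the cluster name, and the IP address are placeholder assumptions:

# Install the required features on each host
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# Validate the intended configuration before creating the cluster
Test-Cluster -Node NODE1, NODE2

# Create the cluster with a designated name and static IP address
New-Cluster -Name HVCLUSTER1 -Node NODE1, NODE2 -StaticAddress 172.16.0.125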
Configure Cluster Shared Volumes (CSVs)
CSVs in a Windows Server 2022 failover cluster allow multiple cluster nodes to have simultaneous read-write access to the same disk. Such disks should be provisioned as NTFS volumes, and the Windows Server 2022 failover cluster adds them as storage to the cluster. When you use CSVs as cluster shared storage, clustered roles can fail over from one node to another more quickly, because there's no need to change drive ownership or to dismount and remount a volume on another cluster node. CSVs also simplify the management of a potentially large number of LUNs in a failover cluster.
CSVs provide a general-purpose, clustered file system which is based on NTFS. Windows Server
supports using CSVs for Hyper-V clusters and Scale-Out File Server clusters.
Using CSVs isn’t mandatory for Hyper-V failover clusters. You also can create clusters on Hyper-V
by using the standard approach for storage (with disks that you don’t assign as CSVs). However,
you might find usage of CSVs beneficial, because they provide the following advantages:

Reduced LUNs for the disks. You can use CSVs to reduce the number of LUNs that your VMs
require. When you configure a CSV, you can store multiple VMs on a single LUN, and multiple
host computers can access the same LUN concurrently.

Improved use of disk space. Instead of placing each .vhd file on a separate disk with empty
space so that the .vhd file can expand, you can store multiple .vhd files on the same LUN.

Single location for VM files. You can track the paths of .vhd files and other files that VMs use.
Instead of using drive letters or GUIDs to identify disks, you can specify the path names. When
you implement a CSV, all added storage displays in the \ClusterStorage folder. The
\ClusterStorage folder is created on the cluster node’s system folder, and you can’t move it.
This means that all Hyper-V hosts that are members of the cluster must use the same drive
letter as their system drive, or VM failovers fail.
• No specific hardware requirements. There are no specific hardware requirements to implement CSVs. You can implement CSVs on any supported disk configuration, and on either Fibre Channel or iSCSI SANs.
• Increased resiliency. CSVs increase resiliency because the cluster can respond correctly even if connectivity between one node and the SAN is interrupted, or if part of a network is down. The cluster reroutes the CSV traffic through an intact part of the SAN or network.
We recommend that you configure CSV storage for a failover cluster before you make any VMs
highly available. However, you can also convert a VM from regular disk access to CSV after
deployment.
Before you can add storage to the CSV, the LUN must be available as shared storage to the
cluster. When you create a failover cluster, all the shared disks that you configured in Server
Manager are added to the cluster, and you can add them to a CSV. Additionally, you have the
option to add storage to the cluster after you create the cluster. If you add more LUNs to the
shared storage, you must first create volumes on the LUN, add the storage to the cluster, and
then add the storage to the CSV.
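As a minimal Windows PowerShell sketch of those last two steps (the disk name shown is a placeholder; your cluster will display its own disk names):

# Add any disks that are visible to the cluster but not yet clustered
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert a clustered disk from regular disk access to a CSV
Add-ClusterSharedVolume -Name "Cluster Disk 2"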
The following considerations apply for conversion from regular disk access to CSV after deployment:
• The LUN's drive letter (or mount point) is removed when you convert from regular disk access to the CSV. This means that you must re-create all VMs that you stored on the shared storage. If you must keep the same VM settings, consider exporting the VMs, switching to a CSV, and then importing the VMs in Hyper-V.
• You can't add the shared storage to a CSV if it's in use. If you have a running VM that uses a cluster disk, you must shut down the VM, and then add the disk to the CSV.
Configure a shared VHD
The standard approach to creating a guest cluster included the need to expose shared storage to the VM. You could connect a VM to shared storage by using a virtual Fibre Channel interface or by using iSCSI. In some scenarios, this wasn't easy to achieve; for example, you might not have had appropriate driver support for virtual Fibre Channel, or iSCSI support on the storage. Additionally, if VMs are hosted at a hosting provider, administrators might not want to expose a storage layer to the VM users or tenant administrators.
To address these issues, Windows Server 2016 and newer support an additional layer of
abstraction for VM cluster storage. It’s possible to share a VHD (in .vhdx or .vhds format only)
between two or more VMs, and then use that VHD as a shared storage for guest clusters. You
can use the shared VHD as a witness disk or as a data disk in a cluster.
How does a shared VHD work?
Shared VHDs are added as SCSI drives in the VM settings, and the disks appear as virtual serial-attached SCSI disks in the VM. You can add a shared VHD to any VM with a supported guest OS running on a Windows Server 2016 or newer Hyper-V platform. With shared VHDs, the guest-clustering configuration is greatly simplified, because you have several options for providing shared storage for guest clusters. Besides shared VHDs, you can also use Fibre Channel, SMB, Storage Spaces, and iSCSI storage. Shared VHDs can provide storage for solutions such as SQL Server databases and file server clusters.
How to configure shared VHDs
To configure a guest failover cluster that uses shared VHDs, the guest operating systems must run Windows Server 2016 or newer. Also, ensure that the failover cluster provides sufficient memory, disk, and processor capacity to support multiple VMs implemented as guest failover clusters.
When you decide to implement shared VHDs as storage for guest clusters, you must first decide where to store the shared VHD. You can deploy the shared VHD at the following locations:
• CSV location. In this scenario, all VM files, including the shared .vhdx or .vhds files, are stored on a CSV configured as shared storage for a Hyper-V failover cluster.
• Scale-Out File Server SMB 3.0 share. This scenario uses SMB file-based storage as the location for the shared .vhdx or .vhds files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
Note: You shouldn’t deploy a shared VHD on an ordinary file share or a local hard drive on
the host machine. You must deploy the shared VHD on a highly available location.
You can configure a shared VHD in a Windows Server Hyper-V cluster by using either the Failover Cluster Manager GUI or Windows PowerShell. If you use a .vhdx, extra steps are required to create the guest shared virtual disk so that Hyper-V and the failover cluster know that the .vhdx is a shared disk. However, with the .vhds format introduced in Windows Server 2016 and newer, you don't need to perform those steps, and the process is simplified.
When you use Hyper-V Manager, you can create a VHD in the .vhds format. We recommend that you always attach shared VHDs to a separate virtual SCSI adapter from the virtual disk that contains the OS. However, you can connect to the same adapter when running a Generation 2 VM.
Note: Adding virtual SCSI adapters requires the VM to be offline. If you already added the
SCSI adapters, you can complete all other steps while the VM is online.
You can add shared VHD to VMs by using Failover Cluster Manager or by using Windows
PowerShell.
To add a shared VHD by using Windows PowerShell, you should use the Add-VMHardDiskDrive cmdlet with the -ShareVirtualDisk parameter. You must run this command with administrator privileges on the Hyper-V host for each VM that uses the shared .vhds file.
For example, if you want to create and add a shared VHD (Data1.vhds) that’s stored on volume 1
of the CSV to two VMs named VM1 and VM2, you use the following commands in Windows
PowerShell:
New-VHD -Path C:\ClusterStorage\Volume1\Data1.vhds -Dynamic -SizeBytes 127GB
Add-VMHardDiskDrive -VMName VM1 -Path C:\ClusterStorage\Volume1\Data1.vhds -ShareVirtualDisk
Add-VMHardDiskDrive -VMName VM2 -Path C:\ClusterStorage\Volume1\Data1.vhds -ShareVirtualDisk
In addition, if you want to add a shared VHD (Witness.vhds) that's stored on an SMB file share (\\Server1\Share1) to a VM that's named VM2, you should use the following command in Windows PowerShell:
Add-VMHardDiskDrive -VMName VM2 -Path \\Server1\Share1\Witness.vhds -ShareVirtualDisk
Implement Scale-Out File Servers for VMs
As you learned earlier, VM storage is a critical component of a highly available Hyper-V solution. Besides using host or guest clustering, you can also store VM files on a highly available SMB 3.0 file share. When using this approach, you achieve storage high availability not by clustering Hyper-V nodes or clustering VMs, but by clustering the file servers that host VM files on their file shares. With this capability, Hyper-V can store all VM files, including configuration, virtual hard disk files, and checkpoints, on highly available SMB file shares.
What is a Scale-Out File Server?
A Scale-Out File Server provides continuously available storage for file-based server applications.
You configure a Scale-Out File Server by creating a File Server role on a failover cluster and
selecting the Scale-Out File Server for application data option instead of File Server for general
use. This requires the use of a CSV for the storage of data.
A Scale-Out File Server differs from an ordinary file server cluster in several ways. An ordinary file server cluster serves clients by using only one node at a time, which is a classic active-passive configuration. By contrast, a Scale-Out File Server can engage all nodes simultaneously, working in active-active mode. As a result, adding nodes to the failover cluster that runs the File Server role with the Scale-Out File Server feature increases the performance of the entire cluster. This makes it possible to store resources such as databases or VM hard disks on the file shares hosted on the Scale-Out File Server.
The key benefits of using a Scale-Out File Server are:
• Active-active clustering. Whereas an ordinary file server cluster works in an active-passive mode, a Scale-Out File Server cluster works in a way that allows all nodes to accept and serve SMB client requests.
• Increased bandwidth. In previous versions of Windows Server, the bandwidth of the file server cluster was constrained to the bandwidth of a single cluster node. Because of the active-active mode in the Scale-Out File Server cluster, you can have much higher bandwidth, which you can increase further by adding cluster nodes.
• CSV Cache. Because Scale-Out File Server clusters use CSVs, they also benefit from the CSV Cache. The CSV Cache is a feature that you can use to allocate system memory (RAM) as a write-through cache; it provides caching of read-only unbuffered I/O. This can improve performance for applications such as Hyper-V, which conducts unbuffered I/O when accessing a .vhd file. It's possible to allocate up to 80 percent of the total physical RAM for the CSV write-through cache, which is consumed from nonpaged pool memory.
• Abstraction of the storage layer. When you use a Scale-Out File Server as the storage location for virtual disks, you can migrate live VMs from cluster to cluster. You don't need to migrate the storage, provided the share location is accessible from the destination cluster.
To implement a Scale-Out File Server, you must meet the following requirements:
• One or more computers running Windows Server 2016 or newer with the Hyper-V role installed.
• One or more computers running Windows Server 2016 or newer with the File and Storage Services role installed.
• An AD DS infrastructure.
Before you implement VMs on an SMB file share, you need to set up a file server cluster. To do that, you must have at least two cluster nodes with file services and failover clustering installed. In Failover Cluster Manager, you must create a file server and select the Scale-Out File Server for application data configuration. After you configure the cluster, you must deploy the SMB Share – Applications profile. This profile is designed for Hyper-V and other application data. After you create the share, you can use the Hyper-V Manager console to deploy new VMs on the SMB file share, or you can migrate existing VMs to the SMB file share by using the Storage Migration method.
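As a rough sketch, you can script the same configuration with Windows PowerShell; the role name, share name, path, and account below are illustrative assumptions:

# Create the Scale-Out File Server role on an existing failover cluster
Add-ClusterScaleOutFileServerRole -Name SOFS1

# Create a continuously available SMB share for application data on a CSV
New-SmbShare -Name VMShare -Path C:\ClusterStorage\Volume1\VMShare -FullAccess "CONTOSO\Hyper-V-Hosts" -ContinuouslyAvailable $true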
Considerations for implementing Hyper-V clusters
By implementing failover clustering on servers with the Hyper-V feature installed, you can make
VMs highly available. However, this adds significant cost and complexity to a Hyper-V deployment.
You must invest in additional server hardware to provide redundancy, and you need to implement
or have access to a shared storage infrastructure.
Use the following recommendations to ensure that the failover clustering strategy meets the organization's requirements:
• Identify the applications or services that require high availability. Not all applications need to be highly available. So, unless you have the option of making all VMs highly available, you need to develop priorities for which applications you'll make highly available.
• Identify the application components that must be highly available to make the applications highly available. Some applications can run on a single server only. If that's the case, you only need to make that server highly available. Other applications might require that several servers and other components (such as storage or the network) be highly available.
• Identify the application characteristics. This includes:
o Is virtualizing the server that runs the application an option? A virtual environment isn't supported or recommended for certain applications.
o What options are available for making the application highly available?
o What are the performance requirements for each application?
• Identify the capacity that's required to make the Hyper-V VMs highly available. As soon as you identify all the applications that you must make highly available by using host clustering, you can start to design the actual Hyper-V deployment.
Live Migration is one of the most important aspects of Hyper-V clustering, as it enables zero-downtime migration of running VMs from one Hyper-V node to another. However, when implementing Live Migration, ensure that you can meet all requirements for using this technology. This includes verifying server hardware, dedicated network adapters, and compatible network configuration.
Maintain and monitor VMs in clusters
Although failover clusters provide high availability, it's essential to monitor the roles configured in a cluster and to act when there's an issue with role availability. Hyper-V VMs are one of the most common cluster roles, so it's important to have a proper monitoring strategy in place, not just for the VMs, but also for the applications running in them. This is because highly available VMs are usually deployed to provide high availability for the applications running inside a VM, not for the VM itself. If you don't monitor critical applications running in VMs, you might get a false impression that everything is working just because a VM is up and running.
Failover clustering in Windows Server 2022 can monitor and detect application health for
applications and services that run inside a VM. If a service in a VM stops responding or an event
is added to the System, Application, or Security logs, the failover cluster can take actions such as
restarting the VM or failing it over to a different node to restore the service. For this to work, you
need to have both the failover cluster node and the VM run Windows Server 2016 or a later
version and have VM integration services installed.
You can configure VM monitoring by using Failover Cluster Manager or Windows PowerShell. You can verify the monitoring configuration on the Settings tab of the VM resource properties. When you select the VM cluster role, select More actions, and then select Configure Monitoring, you can enable monitoring of specific services that run on the VM and choose the action to take.
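A minimal PowerShell sketch of the same configuration, assuming a clustered VM named VM1 and using the Print Spooler service as the monitored example:

# Configure the cluster to monitor the Spooler service inside the clustered VM
Add-ClusterVMMonitoredItem -VirtualMachine VM1 -Service Spooler

# List the items that the cluster currently monitors for that VM
Get-ClusterVMMonitoredItem -VirtualMachine VM1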
Note: When you configure a service inside a VM to be monitored, the failover cluster acts only if the service stops responding and you've configured the service in the Service Control Manager inside the VM's OS with the Take No Action recovery setting.
Windows Server 2022 can also monitor for the failure of VM storage and, with a technology called network health detection, for the loss of network connectivity. Storage failure detection can detect the failure of a VM boot disk or any other VHD that the VM uses. If a failure happens, the failover cluster moves and restarts the VM on a different node.
For network failures, you can configure a virtual network adapter to connect to a protected
network. If Windows Server loses network connectivity to such a network because of reasons such
as physical switch failure or a disconnected network cable, the failover cluster will move the VM to
a different node to restore network connectivity.
Demonstration: Implement failover clustering with Hyper-V
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Lesson 3: Key features for VMs in a
clustered environment
For VMs in a clustered environment, Network Health Protection and drain on shutdown are two key failover clustering features that help increase availability. This lesson explains how to configure these features, and how they help increase VM availability during both expected and unexpected outages.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe Network Health Protection.
• Explain the actions taken on VMs when a host shuts down.
• Explain drain on shutdown.
• Configure drain on shutdown.
Overview of Network Health Protection
Network Health Protection is a feature available in Windows Server 2016 and newer operating systems. Although network teaming should be your server's first level of redundancy for achieving network high availability with Hyper-V, many scenarios can still cause a network to become disconnected and create availability outages.
Network Health Protection performs a live migration of a VM from one failover cluster node to
another failover cluster node if network connectivity on a specific network adapter becomes
disconnected. This feature increases the availability of the VM by moving the VM automatically
instead of waiting for manual intervention.
Each VM has a cluster resource that continually checks to ensure that resources are available on the failover cluster node that's hosting the VM. This resource checks every 60 seconds, so sometimes a network disconnection is discovered quickly, and other times it takes up to 60 seconds. After it discovers the disconnect, the resource checks the other nodes to determine if the resources needed to run the VM are available. If the resources are available, the cluster resource initiates a live migration to move the VM to another failover cluster node. In many cases, a network failure requires the VM to wait in a queued state before it moves to another failover cluster node.
You can control this feature for each network adapter on each VM. By default, the Protected Network setting is enabled for all virtual network adapters. You can find this setting in the advanced configuration section of the network adapter settings on each VM. This allows you to clear the setting if a network isn't important enough to trigger a live migration when communications are lost.
Overview of actions taken on VMs when a host shuts down
There are situations when a Hyper-V host shuts down unexpectedly. In such cases, it's important to know how the VMs running on that host will behave. In Windows Server 2012 R2 and later versions, when a shutdown is initiated on a Hyper-V host machine, the action taken for each VM depends on that VM's settings. You can find these options in the VM settings by selecting the Automatic Stop Action tab.
The options for what a VM does when a host shuts down include the following (a PowerShell sketch follows this list):
• Save the virtual machine state. This is the default option. When selected, the OS creates a .bin file that reserves space for the memory to be saved when placing the VM in a saved state. If the host begins a shutdown, the Hyper-V Virtual Machine Management Service (VMMS) begins saving the VM's memory to the hard drive and placing the VM in a saved state.
• Turn off the VM. This option allows VMMS to turn off the VM in a manner that's graceful for Hyper-V and enter an off state. However, the VM's OS views this as a forced turn off, without a proper shutdown of the OS.
• Shut down the guest OS. Unlike the other two options, this option requires that integration services are working properly on the VM, and specifically, that you've selected the Operating system shutdown option on the guest VM. By utilizing integration services, this option allows VMMS to trigger a shutdown on the guest machine. When initiated, the VM shuts down the guest OS and enters an off state.
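As a minimal sketch, you can also set this behavior with Windows PowerShell; VM1 is a placeholder name:

# Configure the VM to shut down its guest OS when the host shuts down
# (valid values are Save, TurnOff, and ShutDown)
Set-VM -Name VM1 -AutomaticStopAction ShutDown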
Note: If the Hyper-V host goes offline unexpectedly, the VMMS process won't have received any information about the shutdown, and none of these actions will occur. These settings apply only when a shutdown is initiated on the Hyper-V host.
Overview of drain on shutdown
When placing a Hyper-V failover cluster node in a paused state, also referred to as maintenance mode, live migration is used to migrate all VMs on that node to other nodes in the cluster. This removes the downtime that would otherwise be required for shutting down a Hyper-V host. However, if a shutdown is initiated without placing the node in maintenance mode, all VMs are moved to another node via quick migration. This means that each VM goes into a saved state by saving all activity to disk, the VM role moves, and the VM then resumes. Unlike live migration, this might cause some downtime for the VMs being migrated.
A feature called drain on shutdown resolves this issue, and it's enabled by default. A failover cluster configured with drain on shutdown no longer places a VM in a saved state and then performs a move. Instead, it drains the roles first by using live migrations instead of quick migrations. Because live migration is used, this eliminates the downtime created by a shutdown of the failover cluster node.
To verify this setting, run the following Windows PowerShell command:
(Get-Cluster).DrainOnShutdown
After running this command, you'll get one of two values: 1 means drain on shutdown is enabled, and 0 means it's disabled.
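The property is writable in the same way. For example, this minimal sketch re-enables the feature if it was turned off:

# Re-enable drain on shutdown (1 = enabled, 0 = disabled)
(Get-Cluster).DrainOnShutdown = 1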
Note: We recommend that you drain all roles before shutting down a failover cluster node. Drain on shutdown provides added protection against user error and against shutdowns that an application or the OS initiates outside of the user's control. However, it doesn't protect against abrupt shutdowns of the Hyper-V failover cluster node. If the node goes down before the OS initiates a shutdown, the VMs revert to an off state and begin coming online on another node.
Demonstration: Configure drain on shutdown
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Lab 11: Implement failover clustering with
Hyper-V
Please refer to the online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions:
1. Why is using shared storage a best practice in Windows Server Hyper-V failover clustering?
2. You have two clusters. One is a Windows Server 2016 cluster (Cluster1), and the other is a
mixed-mode cluster of Windows Server 2012 R2 and Windows Server 2016 (Cluster2) that’s in
the process of upgrading but hasn’t finished. Additionally, you have two VMs named VM1 and
VM2 that occasionally need to migrate back and forth between Cluster1 and Cluster2. Should
you upgrade the configuration version on VM1?
3. What’s the primary benefit of using shared VHDs?
4. What options do you need to enable VMMS to easily shut down a guest OS during a host-initiated shutdown?
Note: To find the answers, refer to the Knowledge check slides in the PowerPoint
presentation.
Module 10: Create and manage
deployment images
This module provides an overview of the Windows Server image-deployment process and explains
how to create and manage deployment images by using the Microsoft Deployment Toolkit (MDT).
Additionally, it describes different workloads in the virtual machine (VM) environment.
After completing this module, you'll achieve the knowledge and skills to:
• Describe the Windows Server image-deployment process.
• Create and manage deployment images by using the MDT.
• Describe VM environments for different workloads.
Lesson 1: Introduction to deployment
images
It’s important to understand how to work with images when you’re responsible for managing
deployment in your organization. This involves understanding how to create, store, and manage
images. In this lesson, you’ll cover those concepts.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe images and image-based installation tools.
• Create, update, and maintain images.
• Describe Windows ADK.
• Describe Windows Deployment Services (WDS).
• Describe the MDT.
Overview of images
Imaging has been in use for a long time, with early imaging products primarily performing sector-based imaging. Windows operating systems use .wim files that contain file-based images. When you're planning an image-management strategy, there are several factors you need to consider, including the type and number of images, storage requirements, software and device drivers, and update management.
Type of image
In the early days of disk imaging, most solutions used sector-based imaging. Since Windows Vista, Microsoft has implemented file-based imaging. This approach has numerous advantages compared to sector-based imaging, including:
• Hardware independence.
• Multiple images in a single file.
• Single instancing.
• Offline servicing.
• Nondestructive deployment.
Windows image file format
The Windows image file format, introduced with Windows Vista, implements file-based image formats. The following file-based imaging formats are available for Windows 11 and Windows Server 2022:
• Windows image file (.wim). Contains one or more individual volume images, which Figure 30 displays. The Windows image file structure can contain six types of resources, including:
o Header. Defines the content of the Windows image file.
o Metadata resources. Defines information about files that you capture.
o XML data. Defines additional information about the image.
o File resources. A series of packages that contain captured data.
o Lookup table. Describes information about the location of file resources in the file.
o Integrity table. Stores security-hash information used for image verification during operations.

Note: Each image file has one Metadata resource, Lookup table, XML data, and Integrity table data field.
Figure 30: The structure of a Windows image file
• Virtual hard disk (VHD). Normally, you'd use .vhd files with VMs. However, Windows 11 and Windows Server both support starting a physical computer from a .vhd file on the hard disk. Therefore, instead of installing operating-system (OS) files directly on the hard drive, you can create a .vhd (by using Diskpart, Disk Management, or the Windows PowerShell New-VHD cmdlet) and then deploy Windows to the virtual disk as if it were a physical disk.
Before you capture an image, you must configure it. Your image can contain only an OS, or it can contain the OS plus drivers, apps, and any additional customizations to suit your organizational needs.
There are, broadly, three approaches to image creation. These are:
• Thin images. Contain the OS only. After a computer is deployed using this image, additional provisioning is required to align the computer to your organizational requirements.
• Thick images. Contain everything necessary to deploy a computer with a configuration that aligns to your organizational requirements, including apps, drivers, and customizations. Reduced post-deployment provisioning is required to align the computer to your organizational requirements.
• Hybrid images. Contain the OS essentials, but also some (but not all) apps and drivers that are needed to align the computer to your organizational requirements. Post deployment, some additional provisioning is required to complete the deployment.
Tip: Most organizations work with hybrid images as they provide a foundational image that
you can apply to all computers within the organization, with departmental provisioning
occurring post-deployment.
When you work with images, it's important to understand that you'll need to create and manage two types of image: boot images and install images (sometimes called operating system images).
Boot images
When you perform an installation of Windows or Windows Server, your computer must start up in a
runtime environment known as the Windows Preinstallation Environment (Windows PE). In many
respects, this is a full version of either 32-bit or 64-bit Windows. However, in most situations where
imaging is used for deployments, target computers have no OS installed.
Note: These are referred to as bare-metal computers.
To start up into Windows PE, you must create and deliver a boot image.
Tip: When you start a computer from a boot image, Windows PE loads into memory and
creates a random access memory (RAM) disk to which it assigns the drive letter X. This disk
provides a virtual file system in memory.
After your target computer starts from a boot image, you can launch the deployment process for the install image.
Tip: The Windows installation media contains a default boot image named Boot.wim. Use
this image or customize it for your organizational needs, such as by adding drivers.
Install images
The install image contains the operating systems you want to deploy. Again, the Windows
installation media contains a default image called Install.wim. However, many organizations
choose to modify this image so that it contains some or all drivers, apps, and other customizations.
These modified images are known as custom images.
Important: Both Boot.wim and Install.wim are located in the Sources folder.
Overview of image-based installation tools
When deploying images, you can work with a variety of tools. These include:
• Windows Setup command-line options (Setup.exe). Performs Windows installations using interactive or unattended installation methods.
• Answer file (Unattend.xml). Includes basic Windows Setup configuration data and the minimum Windows Welcome customizations. You can create the answer file by using tools that are part of the Windows Assessment and Deployment Kit (Windows ADK).
• Catalog. Contains all available components and packages you can use as part of the Unattend.xml answer file. You can modify components and packages through Windows System Image Manager (Windows SIM), which is also part of Windows ADK.
• Windows ADK. Contains Windows PE images, which are necessary for customized deployment of Windows and Windows Server. Also includes numerous tools for customizing and managing your deployments.
Occasionally, you might want to modify a previously created .wim file, perhaps by injecting drivers
or adding Windows packages. You also can use the ImageX and Deployment Image Servicing and
Management (DISM) command-line tools or the DISM Windows PowerShell module cmdlets to
service .wim files manually. When you want to deploy your images, you can use the MDT, WDS,
and Microsoft System Center Configuration Manager (Configuration Manager).
ImageX
ImageX.exe is a command-line tool that enables you to manage .wim files; it's installed through the Windows ADK for Windows 11. You can run ImageX from:
• Within the Windows OS when servicing an image.
• Within Windows PE when deploying an image.
Important: DISM has largely replaced ImageX.
DISM
A command-line tool that you can use to service and deploy .wim files, DISM enables you to:
• Mount, service, capture, and create .wim files.
• Prepare Windows PE images.
• Deploy .vhd and .vhdx files.
Tip: A DISM PowerShell module is also available in Windows and Windows Server.
The command-line parameters and the Windows PowerShell cmdlets provide similar functionality.
Table 21 describes the basic commands for imaging:

Table 21: Imaging commands

Task | DISM command-line parameters | PowerShell equivalent
Mount a .wim file for servicing. | /mount-image | Mount-WindowsImage
Commit changes made to a mounted .wim file. | /commit-image | Save-WindowsImage
Get information about a Windows image in a .wim file. | /get-imageinfo | Get-WindowsImage
Dismount a .wim file. | /unmount-image | Dismount-WindowsImage
Add a driver to a mounted image. | /image:PathToImage /add-driver /driver:PathToDriver | Add-WindowsDriver -Driver PathToDriverFile -Path PathToRootDirectoryOfImage
Apply an image to a specified drive. | /apply-image | Expand-WindowsImage
Capture an image of a drive into a new .wim file. | /capture-image | New-WindowsImage
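For example, a quick way to check which images a .wim file holds; the path assumes installation media mounted at drive D:

# List the images contained in an install.wim file
Get-WindowsImage -ImagePath D:\sources\install.wim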
Create, update, and maintain images
The following high-level steps summarize the process for creating an install image:
1. Start a reference computer (ideally from across the network) and perform a standard
Windows OS installation.
2. Customize your reference computer, as required.
3. Generalize your reference computer.
4. Capture your reference computer’s Windows OS image and upload it to the WDS server.
Create a capture image
Use a capture image to start a reference computer, and then capture its OS drive and store it
as a .wim file.
Note: A reference computer is the computer from which you create the image that you use to deploy an OS to multiple computers.
Tip: You can install a Windows OS on the reference computer by performing a manual
installation, by using a deployment server, or by applying the standard Install.wim image
using DISM or Configuration Manager.
Customize the reference computer
After installing the Windows OS on your reference computer, you must configure the reference computer by:
• Enabling and configuring any required Windows roles and features.
• Installing any required apps.
• Configuring all required Windows OS settings.
Generalize the reference computer
Each installation of Windows is associated with a collection of unique identifying numbers, known
as globally unique identifiers (GUIDs). No two computers can share the same GUIDs. For this
reason, before you can capture an image from your reference computer, you must first remove
these GUIDs, which is known as generalizing.
Tip: To perform this task, you use a built-in Windows tool called Sysprep.exe.
To generalize an image using Sysprep, you must:
1. Open an elevated command prompt.
2. In the command prompt, run the following command:
sysprep /generalize
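In practice, /generalize is usually combined with other switches. As an illustrative example of a typical capture workflow (an assumption, not a requirement of this module), the following command generalizes the image, configures the computer to start in OOBE on next boot, and shuts it down so you can capture it:

sysprep /generalize /oobe /shutdown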
Capture the reference image
After generalizing the image, you must execute the following steps to capture the image:
1. Restart the reference computer by using Pre-Boot Execution Environment (PXE) boot.
Tip: PXE boot enables you to start a computer from a boot image located across the
network on a deployment server.
2. Connect to a session on the server that's running WDS to download the capture image.
3. Follow the Capture Image Wizard and specify the name of the .wim file that you want to create
with the image from the reference computer.
Storage requirements
Images can become quite large, even though file-based images use disk space efficiently. You'll need to ensure that you have sufficient storage available to store the images that your organization needs. It's worth remembering that thin and hybrid images use less space than thick images.
Number of images
Obviously, the more images you store, the more disk space you consume. It's also worth considering that if you keep multiple images, you must maintain each of them; the more images you have, the more time and effort maintaining them involves. Broadly speaking, fewer images mean less maintenance effort. When using file-based images, remember that:
• They are hardware-agnostic, which means you require fewer images.
• A single image file can contain multiple install images, which, again, means you need fewer images.
Apps and other software
OS images don’t need to include just the operating systems. You can install most modern apps on
your reference computer before imaging it. However, the more apps that images include, the larger
the images become and the longer they take to deploy. In addition, if apps update, you’ll need to
update the images that contain them.
Deployment of device drivers
It’s worth considering the inclusion of common device drivers used in your organization if you
install images. However, drivers must be maintained. Therefore, you must ensure that your images
contain drivers that are critical to deployment, such as network and mass storage drivers. Then
deploy additional drivers using the MDT or Configuration Manager during image deployment.
Image updates
Inevitably, you’ll need to maintain your images including, from time to time, updating the images.
A solution is that you recreate the image from scratch, introducing an updated OS, drivers, and
apps, as necessary. However, this is a time-consuming approach.
File-based images support offline servicing, which can reduce the time necessary for maintaining your images. You can service .wim file images at various stages of the deployment process. There are three basic strategies for image maintenance:
• Using Windows Setup. Involves using an answer file with Windows Setup when you deploy an image.

Tip: You can create or modify answer files by using the Windows SIM tool.

• Online servicing. Involves deploying the image back to a reference computer, making all of the necessary changes, and then reimaging and capturing the reference computer.
• Offline servicing. Involves using DISM to mount a .wim file in the file system of a working computer, and then servicing the image. You can add Windows updates, drivers, and language packs; add or remove folders and files; and enable Windows features.

Important: Offline servicing typically does not include installing applications.
Use Windows Setup to customize images
You can use Windows Setup to modify an image during distinct phases of the deployment process, such as when:
• Deploying an image to a reference computer for online servicing.
• Deploying the image to client machines.
By using an unattended Windows Setup answer file, you can perform several customizations, including the following servicing operations:
• Add or remove drivers.
• Add or remove packages.
• Add or remove a language pack.
• Configure international settings.
• Enable or disable Windows features.
Online servicing
You can perform online servicing with the DISM tool or through manual intervention. After deploying the system to a reference computer, you can:
• Add device drivers to the driver store.
• Install apps and system components.
• Install folders and files.
• Test the changes to the image.
After you complete and test the changes, you can recapture the reference system. You can use the
DISM tool to perform these various online operations.
Offline servicing
Offline servicing is available for images that are stored in the .wim file format, and it uses the DISM tool for servicing. DISM enables you to perform many tasks, including the following (a short servicing sketch follows this list):
• Mounting, remounting, and unmounting an image in a .wim file for servicing.
• Querying information about a Windows image.
• Adding, removing, and enumerating drivers.
• Adding, removing, and enumerating packages, including language packs.
• Enabling, disabling, and enumerating Windows features.
• Upgrading to a newer version of Windows.
• Enumerating apps and app updates.
• Applying the offline servicing section of an unattended answer file.
• Updating a Windows PE image.
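A minimal offline-servicing sketch using the DISM PowerShell module; the image path, index, mount folder, and driver path are placeholder assumptions:

# Mount the first image in the .wim file to a local folder for servicing
Mount-WindowsImage -ImagePath C:\Images\install.wim -Index 1 -Path C:\Mount

# Inject a driver into the mounted image
Add-WindowsDriver -Path C:\Mount -Driver C:\Drivers\netadapter.inf

# Commit the changes and unmount the image
Dismount-WindowsImage -Path C:\Mount -Save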
Windows ADK
You can use Windows ADK to help customize your organization’s deployments. You can create
simple or complex deployment processes using the tools included in Windows ADK.
However, all image-deployment processes have several steps in common, including:
• Creation and capture of a reference computer.
• Use of that image to build client systems.
By using Windows ADK, you can make even basic deployment tasks more efficient. Typical steps
might include the following:
1. Create the Windows PE media. Use a USB device or a bootable DVD with Windows PE to capture your image and deploy it after customization. Your process should include:
a. Customizing the image with any necessary drivers and additional packages, such as Windows RE.
b. Using the MakeWinPEMedia /UFD command to create the bootable USB device (a sketch of these commands follows this list).
2. Create and edit answer files. To automate the installation, you must create answer files with
the configuration that you want to use, including:
a. Using the installation media to create a catalog file.
b. Creating the answer file for your environment and copying it as Autounattend.xml to the
root of a USB device.
c. Creating a profile that includes the CopyProfile setting and copying the answer-file profile
as CopyProfile.xml to the root directory of the USB device. This enables you to customize
the default user profile.
3. Use the answer file to install a Windows OS on your reference computer:
a. Insert the USB device into the reference computer.
b. Start the computer from the Windows product media. The setup process uses the
Autounattend.xml file to complete the installation.
c. Customize the administrator profile. Ensure that the USB device with the CopyProfile.xml is
plugged in.
4. Capture the image:
a. Use Sysprep to generalize the system.
b. Start the computer from the Windows PE USB device.
5. Use the DISM tool to copy the Windows partition to a network location or external hard drive.
6. Deploy the image to a test computer:
a. Start up the test computer from the Windows PE USB device.
b. Use diskpart to configure the hard drive as required.
c. Use the DISM /apply-image command to apply the previously captured image.
d. Ensure that the computer image and profile settings are correct.
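As a sketch of step 1, the following commands run from the Deployment and Imaging Tools Environment that Windows ADK installs; the working folder and USB drive letter are assumptions:

rem Stage the Windows PE files for the amd64 architecture into a working folder
copype amd64 C:\WinPE_amd64

rem Write the staged files to the USB device mounted as F:
MakeWinPEMedia /UFD C:\WinPE_amd64 F: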
Windows ADK contains a range of different tools, including several that are useful for Windows deployment. These include:
• Application Compatibility Toolkit (ACT). Includes the Compatibility Administrator and the Standard User Analyzer tools. These tools help you determine app compatibility within your organization.
• Deployment Tools. Includes DISM and related command-line tools.
• Windows SIM. Enables you to create unattended Windows Setup answer files.
• Windows PE. Enables you to start a computer prior to deploying an image to the computer.
• Windows Imaging and Configuration Designer (Windows ICD). Enables you to create provisioning packages. You can use these packages to customize the configuration of your computers during the out-of-box experience (OOBE) stage of installation, or anytime thereafter.
• User State Migration Tool (USMT). Provides a collection of tools that you can use to migrate user settings and data (known as user state) between computers, typically when migrating to a new version of Windows.
• Volume Activation Management Tool (VAMT). Provides a centralized tool you can use for managing volume-licensed Microsoft products.
• Windows Performance Toolkit. Collects detailed performance profiles of Windows operating systems.
• Windows Assessment Toolkit. Helps you assess a running OS, determine its status, review results in a report, diagnose problems or issues, and correct them.
Windows Deployment Services
WDS is a server role that can help you with more complex deployment scenarios, and it:
• Enables you to perform network-based OS installations.
• Streamlines the deployment process.
• Supports deployment to bare-metal computers.
• Supports deployment for both client and server operating systems.
• Uses existing deployment technologies, including Windows PE, Windows image file (.wim) and virtual hard disk (.vhd and .vhdx) image files, and image-based deployment.
WDS enables you to use either unicast or multicast communications over your network. With unicast, packets are sent to a particular address one at a time. With multicast, a group IPv4 address is used to send packets to multiple computers simultaneously. In large deployment scenarios, this can significantly improve network throughput and help manage bandwidth during deployments.
WDS consists of two role services. These are:
• Deployment Server. Manages Windows OS deployment solutions, including a PXE component.
• Transport Server. Provides basic network services and a PXE listener. The listener forwards the requests to a PXE provider, which the Transport Server doesn't include, but which is part of the WDS service.
Note: If you install the Transport Server role service as a standalone component, you must
use an additional management tool, such as Configuration Manager.
WDS provides three management tools. These are:
• WDS snap-in. Enables you to perform most WDS tasks through its graphical interface.
• WDSUTIL. Provides a command-line interface for scripted operations.
• WDS Windows PowerShell cmdlets.
You can install and integrate WDS with Active Directory or install it as a standalone service.
Installing WDS as an Active Directory-integrated service provides the following benefits:
• Active Directory provides a data store and enables you to prestage a computer account. During the deployment process, WDS matches the physical computer to the prestaged computer object in Active Directory.
• Active Directory allows WDS to register as a system services control point. This registration enables WDS configuration settings to be stored and accessed in Active Directory.
Operating-system components
Components enable you to separate the core functionality of the Windows OS in an image by
adding or removing components at any time. For example, you might create an image containing
the Windows 11 Enterprise OS plus the apps used throughout your organization, and it might be an
appropriate standard for use on all of your computers.
You can save this standard image in a .wim file that WDS uses for deployment. When Microsoft releases updates for Windows 11, you apply these updates to the base .wim file. Using this component approach means that you don't have to create new images when updates are released.
By using the componentized nature of Windows, you can reduce the number and size of images.
The following elements utilize the component infrastructure:
• Updates
• Service packs
• Language packs
• Device drivers
Deployment scenarios
WDS supports both lite-touch installation (LTI) and, when combined with additional technologies,
zero-touch installation (ZTI). Whether you use lite-touch or zero-touch installations, WDS enables
you to create a more autonomous and efficient environment for installing Windows.
Deployment over a small network
With a small collection of computers, you can use WDS to optimize deployment. As an example, if
you have 25 computers running Windows 10, you could use WDS to expedite the upgrade process
of the client computers to Windows 11. After you’ve installed and configured the WDS server role
on the single server, you can use WDS to perform the following tasks:
1. Copy Boot.wim from the Windows Server media as a boot image in WDS.
2. Copy Install.wim from the Windows 11 media as an install image.
3. Create a capture image from the boot image that you previously added.
4. Start your reference computer using PXE.
5. Perform a standard installation of Windows 11 from the Install.wim image.
6. Install any apps you require on the reference computer.
7. Generalize the reference computer.
8. Restart the reference computer using PXE.
9. Connect to the capture image, use it to capture the local OS, and then upload it back to the
WDS server.
10. Start all target computers using PXE, and then connect to the appropriate boot image.
11. Select the custom install image, and deployment starts.
Deployment in a medium to large organization
In a medium-to-large size organization, you want to deploy multiple servers in geographically
dispersed branch offices. You want to avoid sending IT staff to each location to deploy the servers.
By using WDS, you can address this issue remotely:
1. Copy Boot.wim from the Windows Server media as a boot image in WDS.
2. Copy Install.wim from the Windows Server media as an install image.
3. Create a capture image.
4. Start the reference server computer from the network.
5. Perform a standard installation of Windows Server from the Install.wim image.
6. Customize the reference server computer as required.
7. Generalize the reference server computer.
8. Restart the reference server computer.
9. Capture the reference Windows Server OS, and then upload it back to the WDS server.
10. Configure the necessary Active Directory computer accounts, which prestages the computer
accounts.
11. Use Windows SIM in Windows ADK to create an answer file.
12. Configure the answer file for use with the captured installation image on WDS.
13. Configure a custom naming policy in WDS so that each server computer receives a suitable
computer name during deployment.
14. Configure WDS to use a default boot image.
15. Configure WDS to respond to PXE requests and start deployment of the install image
automatically.
16. Start each of the target computers from the network.
Microsoft Deployment Toolkit
A common reason to use the MDT in an LTI or ZTI scenario is to create a reference image. In this
situation, you separate the reference-image creation process from the production deployment
process.
The MDT creates the reference image by capturing a reference computer OS into a .wim file.
You can configure a particular computer with all apps and settings you want to deploy to other
computers and then capture it to a .wim file. You then can use the .wim file as a basis of
deployment through the MDT or alter it by adding drivers, packages, and apps by using task
sequences when a deployment occurs.
When preparing to use the LTI method, you can divide your preparation into four major tasks.
These are:
• Plan your MDT imaging strategy. This determines how you build the MDT management computer.
• Install the MDT and the Windows ADK.
• Create the deployment share, which is the repository for all deployment files.
• Create and customize task sequences. You use task sequences to automate the build and deployment processes.
After you’ve installed the MDT, start the Deployment Workbench. It’s this interface you use to
configure your MDT environment.
In the Deployment Workbench, configure the Components container first. The Components
container displays the status of the MDT components, and after you install all required
components, you can create your deployment share.
Tip: If you want, you can create multiple deployment shares to support multiple deployment
configurations.
To create a new deployment share, select the properties of the Deployment Shares node, select
Create New Deployment Share, and then complete the steps in the New Deployment Share Wizard.
Demonstration: Prepare a Windows Server 2022 image in
the MDT
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Lesson 2: Create and manage deployment
images by using the MDT
You can use the MDT to automate the deployment of Windows operating systems. This lesson
describes how to use the MDT to deploy Windows operating systems.
By completing this lesson, you'll achieve the knowledge and skills to:
• Create images in the MDT.
• Deploy images in the MDT.
Create images in the MDT
The MDT enables you to build and deploy both boot and install images. Although you can use the original Install.wim file (from the Sources folder of the installation media) to deploy an image, in most scenarios you'll want to customize the image.
Using the MDT enables you to deploy a .wim file to a reference computer, configure and add apps
to that reference computer, and then capture the reference computer in its entirety to a .wim file.
You then can deploy that .wim file to multiple computers, adding packages, drivers, and apps to
the image to customize it during deployment.
When you follow the LTI process, you perform the following high-level steps:
1. Install the MDT, create a deployment share, and then import the source files you want to use.
2. Create a task sequence and boot image for the reference computer.
3. Update the deployment share with any changes.
4. Boot your reference computer from the MDT media. This provides the reference computer with access to:
a. The task sequence files.
b. The task sequence.
c. The boot image.
5. Run the deployment wizard to:
a. Install the OS on the reference computer.
b. Capture an image of the reference computer.
6. Copy the captured image to your management computer.
7. Create the boot image and task sequence to deploy the captured image to your target
computers.
8. Update the deployment share.
9. Boot the target computers with the MDT media. This provides the target computers with access to the task sequence files, the task sequence, and the boot image.
10. Run the deployment wizard to install the OS on the target computer.
Note: LTI deployment uses only the tools available in the MDT.
Tip: You can deploy the same captured .wim file from the reference computer with different
customizations to meet your organization’s specific needs.
The MDT includes the following common task sequence templates:
• Sysprep and Capture. Automates running the System Preparation Tool (Sysprep) and the capturing of a reference computer.
• Standard Client Task Sequence. Creates the default task sequence for deploying OS images to client computers.
• Standard Client Replace Task Sequence. Backs up a client system completely, including the user state, and then wipes the disk before deploying an OS.
• Standard Client Upgrade Task Sequence. Automates the upgrade of a computer.
• Litetouch OEM Task Sequence. Preloads OS images on computers in a staging environment prior to deploying the target computers in the production environment.
• Standard Server Task Sequence. Creates the default task sequence for deploying server OS images.
• Standard Server Upgrade Task Sequence. Automates the process of upgrading a server.
• Post OS Installation Task Sequence. Performs tasks after you deploy an OS to a target computer.
• Deploy to VHD Client Task Sequence. Deploys an OS to a target client computer’s VHD.
• Deploy to VHD Server Task Sequence. Deploys an OS to a VHD on a target server computer.
• Custom Task Sequence. Creates a customized task sequence.
Tip: After you create a task sequence, you can then customize each of the tasks in your task
sequence. You also can add new tasks.
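If you script your deployment share, creating a task sequence from one of these templates looks roughly like the following sketch. The drive name, IDs, and operating system path are illustrative and assume an OS has already been imported into the share:

# Create a task sequence from the Standard Server Task Sequence template (Server.xml)
Import-MDTTaskSequence -Path "DS001:\Task Sequences" -Name "Deploy Windows Server 2022" `
    -Template "Server.xml" -ID "WS2022" -Version "1.0" `
    -OperatingSystemPath "DS001:\Operating Systems\Windows Server 2022" `
    -FullName "Contoso" -OrgName "Contoso" -HomePage "about:blank"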
You can use two files to control the behavior of installations from a deployment share. These are:
• CustomSettings.ini. This file is the primary configuration file for the deployment share. All installations from the deployment share process this file’s settings.
• Bootstrap.ini. Processes before the CustomSettings.ini file.
Both files store rules that control deployment settings, and these files are organized into two sections:
• Priority. Specifies the sections that process during deployment and the order in which to process them. Defined in both the Bootstrap.ini file and the CustomSettings.ini file.
• Properties. Specifies the variables you define or use in the file. Only defined in the CustomSettings.ini file. A sample file follows this list.
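As a minimal sketch, a CustomSettings.ini file might look like the following. The property values (skipped wizard pages and a custom property) are illustrative, not required settings:

[Settings]
Priority=Default
Properties=MyCustomProperty

[Default]
OSInstall=Y
SkipCapture=YES
SkipAdminPassword=YES
SkipProductKey=YES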
Deploy images in the MDT
When you update your deployment share, the LTI boot media is either created or modified. The LTI
boot media includes the MDT program, which calls the Windows Deployment Wizard during a
deployment. When you start a computer using the LTI boot media, the MDT program starts
automatically, and the following actions occur:
1. The Bootstrap.ini file processes. When the computer starts, the MDT program processes
Bootstrap.ini, and uses the information to connect to the deployment share.
2. After you connect to the deployment share, from the Welcome page, you can:
o Run the Windows Deployment Wizard to install a new OS.
o Exit to the command prompt.
o Run the Windows Recovery Wizard, which starts the Windows Recovery Environment.
Selecting Run the Windows Deployment Wizard causes the following steps to process:
1. The Credentials dialog box displays. You’re prompted to enter credentials if they aren’t defined
in Bootstrap.ini.
2. The CustomSettings.ini file processes. Includes settings for preconfiguring and skipping
Windows Deployment Wizard pages, including skipping the wizard altogether.
3. The Task Sequence page displays. After you apply the CustomSettings.ini file settings, the
Windows Deployment Wizard presents the available task sequences.
After you choose a task sequence, the Windows Deployment Wizard displays the pages that are
relevant for your deployment type and the task-sequence template used.
Important: Settings in the CustomSettings.ini file could prevent certain pages from
appearing.
If you perform a new computer deployment by using a task sequence based on the standard client task sequence or a default CustomSettings.ini file, the Windows Deployment Wizard displays the following pages:
1. Computer Details. Enables you to specify the Computer name, Join a workgroup, or Join a
domain.
2. Move Data and Settings. Enables you to choose to Move the user data and settings to a
specified location. Relevant for upgrades.
3. User Data (Restore). Enables you to specify a location if you previously used the Move the user
data and settings option.
4. Locale and Time. Enables you to select the language and time settings.
5. Ready. Enables you to review all the settings that you have configured.
Advanced Configuration node
The Deployment Workbench provides an Advanced Configuration node, which contains several items you can use to extend LTI deployment features, including:
• Linking deployment shares.
• Support for standalone media.
• Configuring an MDT database.
Tip: You can use the Monitoring node in the Deployment Workbench to review the
deployment process.
Selection profiles
Using selection profiles enables you to create groups of folders in the Deployment Workbench.
After you create your selection profiles, you can use them in several different locations, including:
• The Deployment Share Properties dialog box, on the Windows PE tab, on the Drivers and Patches tab. Limits the drivers that are added to the Windows PE boot image.
• An Inject Drivers task step. Controls drivers that are available for a particular task sequence.
• An Apply Patches task step. Controls the update packages that are installed.
• The New Media Wizard. Controls the Applications, Operating Systems, Out-of-Box Drivers, Packages, and Task Sequences folders that deploy with standalone media.
• The New Linked Deployment Share Wizard. Controls linked content.
The following list describes the six selection profiles that are created by default:
• Everything. Contains all folders from all nodes.
• All Drivers. Contains all folders from the Out-of-Box Drivers item.
• All Drivers and Packages. Contains all folders from the Packages and Out-of-Box Drivers items.
• All Packages. Contains all folders from the Packages item.
• Nothing. Includes no folders or items.
• Sample. A sample selection profile that contains folders from the Packages and Task Sequences items.
Linked deployment shares
You can use linked deployment shares to connect two deployment shares. This enables you to use
LTI deployments in larger organizations, while keeping the management simple by requiring that
you update only the source deployment share. Note that:
• One deployment share acts as the source.
• The other deployment share is the target.
You use a selection profile to control the content copied to the target deployment share.
Database
By default, any variables you use with your task sequences are stored in the CustomSettings.ini
file. However, as your deployments get more complex, and need to support larger environments,
using a text file becomes less effective. In these situations, you can create a SQL Server database
to store the conditions that you want to define.
Tip: After you create a database, run the Configure DB Wizard to configure the
CustomSettings.ini file to use the MDT database.
Monitor MDT deployments
The process for enabling monitoring is different for LTI deployments and Configuration Manager-based deployments. To configure monitoring for LTI deployments, you must enable it in the Deployment Share Properties dialog box. These changes:
• Install the MDT Monitor service, which:
  o Receives and stores the events from the computers being monitored.
  o Provides the information to the Deployment Workbench.
• Install a SQL Server Compact database. Only the MDT Monitor service uses this database.
Important: Monitoring isn’t configured by default.
Lesson 3: VM environments for different
workloads
Before you start deploying virtualized workloads, it’s important that you assess your organization’s
current compute environment. You can use several solution accelerators, such as the Microsoft
Assessment and Planning Toolkit (MAP), to help assess your existing workloads for the suitability of
virtualization.
By completing this lesson, you’ll achieve the knowledge and skills to:
• Describe the Windows Server image deployment process.
• Create and manage deployment images by using the MDT.
• Describe the different workloads in the VM environment.
Evaluation factors
Server virtualization can be an effective way to resolve known issues relating to traditional computer and application environments. The first step toward workload virtualization is planning: evaluating the factors that contribute to a successful virtualization project. Factors you should consider include:
• Project scope. Define your virtualization project scope early. You should:
  o Determine the business factors driving the project.
  o Determine how you will measure success.
  o Identify staff responsible for determining these factors.
• Resource and performance. Assess the resources and performance of your servers that are virtualization candidates. Typically, VMs require about the same resources as a physical server.
Tip: Use MAP to help provide detailed information on the number of hosts and their
hardware requirements.
Hardware isn’t the only consideration when planning to implement a server virtualization solution.
Ensure you review all aspects of a service’s or app’s requirements. There are several factors you
should consider when determining whether to virtualize server workloads, including:
• Compatibility. Determine whether the application can run in a virtualization environment. Business applications range from simple programs to complex, distributed multiple-tier applications. You need to consider requirements for specific components of distributed applications, such as specific needs for communication with other infrastructure components, or requirements for direct access to the system hardware. While you can virtualize some servers easily, other components might need to continue running on dedicated hardware.
• Apps and services. Verify whether apps and services have specific hardware or device-driver requirements; such apps and services are not well suited for virtualization. For example, if an app requires direct access to the host computer hardware, that’s not usually possible through a virtualization interface.
• Supportability. Evaluate if a virtualized environment can support your OS and apps. Ensure that all vendors provide support for virtualization of their OS or apps.
• Licensing. Ensure that you can license the app for use in a virtual environment.
• Availability requirements. Consider whether your app has high availability options, whether a VM environment supports those options, and whether you can use failover clustering to make the VM highly available.
The goal in most organizations is to utilize all servers adequately, regardless of whether they’re
physical or virtual.
Overview of virtualization accelerators
Microsoft provides two virtualization accelerators: MAP and the Infrastructure Planning and Design
guides. This topic examines these accelerators.
MAP
You can use MAP to perform a network-wide, deployment-readiness assessment, which in turn will
help you decide whether you can migrate Microsoft technologies such as servers, desktops, and
apps to a virtual environment.
Using MAP, you can determine which servers you can upgrade to Windows Server 2022, which
servers you can migrate to VMs running on Hyper-V in Windows Server, and which client computers
you can upgrade to Windows 10/11.
MAP provides the following key functions:
• Hardware inventory. MAP uses a secure agentless process to collect and organize system resources and device information across your network from a single management computer. Returned information includes OS information, memory details, installed drivers, and installed apps. This data is saved to a local database.
• MAP connects to your computers to gather inventory using built-in technologies. These include Windows Management Instrumentation (WMI), the Remote Registry service, Simple Network Management Protocol (SNMP), Active Directory, and the Computer Browser service.
• Data analysis. MAP performs a detailed analysis of hardware and device compatibility for migration to various Microsoft operating systems and to Office 365.
• Readiness reporting. MAP generates reports containing both summary and detailed assessment results for each migration scenario and provides the results in Microsoft Excel and Microsoft Word documents. Readiness reports are available for several technologies, including Windows 10.
• MAP helps to gather performance metrics and generates server consolidation recommendations. These recommendations identify candidates for server virtualization and make suggestions for how you might place the physical servers in a virtualized environment.
Infrastructure Planning and Design guides
The Infrastructure Planning and Design guides are free, and they describe architectural
considerations and streamline design processes for planning Microsoft infrastructure
technologies.
Each of these guides addresses a unique infrastructure technology or scenario, including:
• Server virtualization.
• App virtualization.
• Remote Desktop Services implementations.
Assessment features of the MAP toolkit
Microsoft provides MAP for several planning scenarios, including server-virtualization planning.
MAP is easy to install and guides you through evaluations by making use of built-in wizards,
configurations, and reports. The following section summarizes MAP features that you can use
for server-virtualization assessments.
MAP discovery
MAP can discover Windows, Linux/UNIX, and VMware computers, active devices and users,
Exchange Servers, SQL Servers, Oracle servers, and much more, as displayed in Figure 31:
Figure 31: Inventory Scenarios in MAP
It has the following discovery methods and requirements for creating an inventory:
• AD DS. Requires domain credentials. Use this method to discover all computers in all domains or in specified domains, containers, and organizational units.
• Windows networking protocols. Uses the WIN32 LAN Manager application programming interface (API) and requires the Computer Browser service to be running on the computer, or on the server running MAP.
• System Center Configuration Manager. Uses Configuration Manager for discovery. For discovery, you require the primary site server name and appropriate credentials for Configuration Manager.
• IP Address Range. Scans for computers and servers using one or more IP address ranges, up to a maximum of 100,000 addresses.
• Import names from a file. Enables you to import computer names from a text file.
MAP performance metrics
After you have an inventory of discovered hardware, you can collect performance metrics for your
assessment. To gather performance metrics, run the Performance Metrics Wizard. You can collect
metrics for both Windows- and Linux-based machines using either WMI or Secure Shell.
You’re prompted to schedule an end date and time, as Figure 32 displays, for when the collection
should stop. While the performance metric data collection is running, you might not be able to
perform other tasks with MAP.
Figure 32: Configuring performance assessments
Note: The minimum collection period is 30 minutes.
MAP hardware configuration
MAP hardware configuration provides you with details for the proposed hardware that you should
use for your virtualization host servers. When you run the Hardware Library Wizard, you can enter
the resources such as the number and type of processors, amount of RAM, and storage capacity,
as Figure 33 displays. After you’ve selected these hardware parameters, you can then determine
the number of host servers you might require. If necessary, you also can create a configuration for
shared storage and network configurations, which will help ensure that you plan clusters and
share components correctly.
Figure 33: Hardware Library Wizard
MAP server consolidation
The MAP Server Virtualization and Consolidation Wizard, which Figure 34 displays, provides
planning guidance for Hyper-V. To use the wizard, you must first:
• Complete an inventory.
• Gather performance metrics.
• Input the hardware configuration.
When you run the wizard, you can select a utilization ceiling on the proposed hardware, which
allows for periodic spikes in utilization. The utilization settings are:
• Processor.
• Memory.
• Storage capacity.
• Storage IOPS.
• Network throughput.
After completing the wizard, MAP provides you with the recommended number of hosts.
Figure 34: Utilization settings
Use the MAP Private Cloud Fast Track Wizard
The MAP Private Cloud Fast Track Wizard provides you with guidance based upon a program that
is a joint effort between Microsoft and its hardware partners. The goal of the program is to help
organizations decrease the time, complexity, and risk of implementing private clouds.
Demonstration: Assess the computing environment by
using the MAP toolkit
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Design a solution for server virtualization
Most organizations that adopt server virtualization develop a server implementation policy to
virtualize all new and replaced systems. As a general guideline, your virtualization project should
include the following steps:
1. Determine the virtualization scope. To ensure that your project is successful, you must define
scope, milestones, and goals.
2. Determine workloads. Create a list of the potential workloads you’d like to virtualize, list the
workloads you can’t virtualize, then use MAP to discover and inventory all the remaining
servers.
3. Determine the backup and fault-tolerance requirements for each workload. Use these
requirements when designing the virtual server deployment.
4. Use MAP to aid in the design of the virtualization hosts. Consider using the hardware
configurations and the MAP Server Virtualization and Consolidation Wizard to assist in the
design of your host server infrastructure.
5. Map workloads to hosts. Map the VMs to the host servers. There are several factors that you
need to consider during this phase, including:
o How many VMs can you run on a host?
o What are the VM performance characteristics and resource utilization?
o How much of a resource buffer do you need to implement on each host?
6. Design host backup and fault tolerance. Use the information that you collected on the backup
and fault-tolerance requirements for the VMs to design a backup and high availability solution
for your hosts.
7. Determine storage requirements. Ensure that you have space for both the OS VHDs and the
data associated with each VM.
8. Determine network requirements. Plan your network. When planning your network, consider
the following factors:
o What type of network access do the VMs require?
o What are the network reliability requirements for each VM?
o How much bandwidth does each VM need?
o Will you use network virtualization?
Lab 13: Use the MDT to deploy Windows
Server
Please refer to our online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions.
1. Which type of image is used to start a bare-metal computer?
2. What Windows PowerShell cmdlet is used to commit a change made to a mounted .wim
image?
3. In the MDT, what’s usually the first step in your deployment process?
Note: To find the answers, refer to the Knowledge check slides in the accompanying
PowerPoint presentation.
Module 11: Maintain and monitor
Windows Server installations
Windows Server Update Services (WSUS) is a server role that you can implement to help manage
updates and other Microsoft releases in your organization. This module provides details about
WSUS, including how it works, the requirements to implement it, and how you can use it to manage
your organization’s update process. Additionally, this module provides an overview of Windows
PowerShell Desired State Configuration (DSC) and Windows Server monitoring tools and then
explains how to use Performance Monitor and manage event logs.
After completing this module, you should be able to:
• Describe what you’d use WSUS for and the implementation requirements for it.
• Use WSUS to manage the update process.
• Explain Windows PowerShell DSC and its purpose and benefits.
• Describe the monitoring tools available in Windows Server.
• Use Performance Monitor.
• Manage event logs.
Lesson 1: WSUS overview and deployment
options
WSUS is installed as a server role. In this lesson, you’ll learn about the deployment options for
WSUS, the update management process, and how to configure a WSUS server and clients.
By completing this lesson, you’ll achieve the knowledge and skills to describe:
• The WSUS server and its deployment options.
• WSUS update management.
• How to configure the WSUS server and WSUS clients.
What is WSUS?
This server role provides granular control over when updates are deployed to Windows clients and
servers. The WSUS server downloads updates and then makes them available only when you
choose. Using this functionality, you can test updates on a subset of computers before deploying
them to all of your clients and servers.
When you configure WSUS, you identify which types of updates to download and for which
operating systems. Only select updates that are applicable to your organization because the
downloaded updates are stored on the WSUS server and can use significant storage space.
When WSUS downloads updates, they’re made available only to computers that you approve them
for. You can automatically approve updates for all computers, but that provides limited value over
having computers download the updates directly. In most cases, you organize the computers into
at least one group for testing and all remaining computers. This allows you to approve and monitor
an update for the test group. Then if there are no issues in your environment, you can deploy it to
all the remaining computers.
WSUS also includes reporting about update deployment. You can use the management console
to identify computers that haven’t applied recently approved updates. This gives you a list of
computers for troubleshooting.
WSUS server deployment options
The simplest deployment of WSUS has a single server that provides updates to all computers.
This server downloads the updates, and you have a single management console for approving the
updates. However, in larger organizations, it might not be appropriate for computers to use Wide
Area Network (WAN) links to download updates from a central server.
An organization can have multiple WSUS servers serving independent geographic areas. If you
have multiple WSUS servers, you need to ensure that computers are configured to communicate
with the WSUS server in their location. You’ll need to configure multiple GPOs for this.
If you have multiple independent WSUS servers, all management and reporting is also
independent. There’s no centralized approval of updates. Reporting includes only computers
registered on that WSUS server.
You can configure WSUS servers in an integrated hierarchy with centralized management, called
replica mode. In this configuration, one upstream WSUS server downloads updates, and
downstream servers obtain updates from the upstream server. Approvals and management
also occur on the upstream server.
WSUS servers can also be configured in a hierarchy with independent management, called
autonomous mode. In this configuration, one upstream WSUS server downloads updates, and
downstream servers obtain updates from the upstream server. However, approvals occur on
individual servers.
In scenarios with isolated network connectivity, you can use a disconnected WSUS server to deploy
updates. Updates are distributed to a disconnected WSUS server by copying them from another
WSUS server with internet connectivity, and then placing them on portable media.
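The export and import are typically performed with the wsusutil.exe tool. A minimal sketch, with illustrative file names; the WSUSContent folder must also be copied to the disconnected server along with the metadata package:

# On the connected server: export update metadata
& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" export export.xml.gz export.log

# On the disconnected server, after copying export.xml.gz and the WSUSContent folder:
& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" import export.xml.gz import.log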
The WSUS update-management process
The update-management process enables you to manage WSUS and the updates it retrieves.
During this process, you can review and reconfigure your WSUS deployment to your organization’s
changing needs. There are four phases in this process, including:
• Assess. Set up your production environment to support update management. This phase is ongoing and helps you determine the most efficient update topology and scaling for your WSUS environment, even as your needs change.
• Identify. Pinpoint newly available updates and decide whether they’re relevant within your organization. You can retrieve all updates automatically or only specific types of updates. WSUS can also identify updates that are relevant to registered computers.
• Evaluate and plan. Determine whether the relevant updates work correctly within your infrastructure, and plan how you’ll deploy them.
Tip: Before you deploy updates to your entire organization, you can push updates to test computer groups. If all goes well, you can deploy those updates to your organization.
• Deploy. Test and verify updates, and then approve them for deployment in your production network.
Server requirements for WSUS
You can use Server Manager to install and configure the WSUS server role. However, to implement
WSUS, your server must meet minimum hardware and software requirements.
WSUS requires the following software:
• Windows Server
• Internet Information Services (IIS)
• Microsoft .NET Framework
• Microsoft Report Viewer Redistributable
• SQL Server or Windows Internal Database
The minimum hardware requirements for WSUS are broadly similar to those for any Windows
Server, as WSUS doesn’t make any special demands on the operating system (OS) with respect to
hardware. However, you should consider disk space when planning your deployment. A WSUS
server only requires about 10 gigabytes (GB) of disk space, but you’ll need to allocate additional
space for downloaded updates.
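As a minimal sketch, you can install the role and complete post-installation from Windows PowerShell; the D:\WSUS content path is an assumed location:

# Install the WSUS role with the management console
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

# Complete post-installation, storing downloaded updates in D:\WSUS (assumed path)
& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" postinstall CONTENT_DIR=D:\WSUS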
Configure clients to use WSUS
To configure computers to use WSUS, you use the Specify intranet Microsoft update service
location setting in a GPO, as Figure 35 depicts. When computers connect to the WSUS server, they
are registered and then they display in the WSUS management interface. Registered computers
get their updates from WSUS instead of Windows Update.
Figure 35: Specify intranet Microsoft update service location Group Policy setting
Lesson 2: Update management process
with WSUS
There are several benefits to deploying updates to Windows Update clients through WSUS. This
lesson explains the specifics of deploying updates with WSUS to client computers.
By completing this lesson, you’ll achieve the knowledge and skills to describe:
• WSUS administration.
• Computer groups.
• How to approve updates.
• How to perform WSUS reporting and troubleshooting.
WSUS administration
You can use the WSUS administration console to perform the following tasks:
• Manage updates. Typically, your organization’s computers apply updates according to Group Policy settings. However, you might want to force updates outside of your usual schedule. In this situation, you can use the wuauclt.exe command-line tool to control the auto-update behavior on your client computers.
• Configure computer groups. Computer groups are a way to organize the computers to which a WSUS server deploys updates.
• Configure WSUS settings and options. Enables you to configure your WSUS servers with the required settings.
• Monitoring. Monitoring is an essential part of maintaining a service. WSUS logs detailed health information to the event log, from which you can:
  o Review computer status.
  o Review synchronization information.
• Configure and review WSUS reports. Enables you to review more detailed information about the status of updates on your computers and information about downstream servers. The following reports are available:
  o Update Reports. Review update status.
  o Synchronization Reports. Review the results of the last synchronization.
  o Computer Reports. Review computer status.
What are computer groups?
You can create computer groups to manage updates in your WSUS environment, although there
also are two default computer groups: All Computers and Unassigned Computers. When a new
computer contacts the WSUS server, it assigns the new computer to both these default groups.
You can create additional computer groups. Usually, computers that you add to a custom group
will share common characteristics, such as configuration, location, or department. You might also
implement a group for update testing, enabling you to approve updates to a small test group
before approving for wider deployment throughout your organization.
There are two ways to assign computers to a custom group:
• Server-side targeting. Requires that you manually assign computers to a custom group. Enables you to manage your WSUS computer group membership manually. This is helpful:
  o If your Active Directory Domain Services (AD DS) structure doesn’t support the logical grouping that you need for computer groups.
  o When you need to move computers between groups for testing or other purposes.
• Client-side targeting. Uses Group Policy to assign computers to a custom group. This method is used most commonly in large organizations where automated assignment is required and computers must be assigned to specific groups.
Tip: To use client-side targeting, you must configure a registry key or create a Group Policy
Object (GPO) for the computer that specifies the custom computer group to be joined
during initial registration with the WSUS server.
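The registry values behind these GPO settings look roughly like the following sketch; the server URL and group name are illustrative:

# Mirror the 'Specify intranet Microsoft update service location' and
# 'Enable client-side targeting' settings (illustrative server and group)
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path "$wu\AU" -Force | Out-Null
Set-ItemProperty -Path $wu -Name WUServer -Value 'http://LON-WSUS1:8530'
Set-ItemProperty -Path $wu -Name WUStatusServer -Value 'http://LON-WSUS1:8530'
Set-ItemProperty -Path $wu -Name TargetGroup -Value 'Pilot'
Set-ItemProperty -Path $wu -Name TargetGroupEnabled -Value 1 -Type DWord
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -Type DWord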
Approve updates
Approving updates is the process that enables you to control which updates are applied within your
organization. The default configuration for WSUS doesn’t approve updates automatically.
Tip: Although you can enable automatic update approval, we don’t recommend it.
A recommended best practice for approving updates is to:
1. Test the updates in a lab environment.
2. Test the updates in a pilot group.
3. Update the production environment.
This approach helps reduce the risk of an update causing an unexpected problem in your
production environment.
Tip: Perform this process by approving updates for a specific group of computers and then
approving the update for the All Computers group.
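If you prefer scripting, the UpdateServices module on the WSUS server can approve updates for a group. A minimal sketch, assuming a computer group named Pilot already exists:

# Approve all unapproved critical updates for the Pilot group
Get-WsusUpdate -Classification Critical -Approval Unapproved -Status Any |
    Approve-WsusUpdate -Action Install -TargetGroupName 'Pilot'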
Decline updates
You can decline any updates, such as those that might not be critical or don’t have security
implications. After you decline an update, WSUS removes it from the list of updates on the WSUS
server in the default view.
Remove updates
If you experience a problem with an update after it’s been approved and applied, you can use
WSUS to remove the update.
Important: Updates must support removal, which most, although not all, do.
Superseded updates
When you review details about updates, you might notice that one update supersedes another,
which means that the earlier update is no longer needed. Despite this, it’s important to know that
superseded updates aren’t automatically declined because even a superseded update is required
in some circumstances.
Configure automatic updates
If you decide to use automatic updates, you must enable the feature. When enabled, the default
configuration on a WSUS server is to download updates from Microsoft Update and then install
those updates.
After you implement WSUS, you must configure your client computers to obtain updates
automatically from a specified WSUS server. Typically, you do this by using Group Policy:
1. Under Computer Configuration, expand Policies, expand Administrative Templates, expand Windows Components, and then locate the Windows Update node.
2. In Windows 11, expand the Manage updates offered from Windows Server Update Service node.
3. Then, enable and configure the Specify intranet Microsoft update service location value. Enter the required WSUS server’s URL.
In addition to configuring the source for updates, which Figure 36 displays, you can also use a GPO
to configure the following settings:
• Update frequency.
• Enable client-side targeting.
• Source services for specific update classes.
Figure 36: Specify intranet Microsoft update service location Group Policy setting
Tip: You can configure other update settings in the Legacy Policies, Manage end user
experience, and Manage updates offered from Windows Update nodes.
Demonstration: Deploy updates by using WSUS
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
WSUS reporting
WSUS provides a collection of reports that you can use to manage your WSUS environment.
They’re divided into three categories:
• Update Reports. Displays reports that relate to the updates available in WSUS.
  o Update Status Summary. Displays a summary of update status.
  o Update Detailed Status. Displays details of each update status. Each page depicts a single update, with a list of computers for that update.
  o Update Tabular Status. Displays a summary of update status in a tabular format.
  o Update Tabular Status for Approved Updates. Displays a summary of update status for approved updates in a tabular format.
• Computer Reports. Displays reports that relate to those computers and computer groups that WSUS manages.
  o Computer Status Summary. Displays a summary of computer status.
  o Computer Detailed Status. Displays details of each computer’s status. Each page depicts the updates for a single computer.
  o Computer Tabular Status. Displays a summary of computer status in a tabular format.
  o Computer Tabular Status for Approved Updates. Displays a summary of computer status for approved updates in a tabular format.
• Synchronization Reports. Displays reports that relate to the synchronization of update data.
  o Synchronization Results. Displays the results of the last synchronization.
WSUS troubleshooting
After you configure your WSUS environment, you might occasionally encounter problems with
updates applying when expected. In these situations, use the following guidelines to help
troubleshoot WSUS. The following list describes common problems and possible reasons they’re
occurring:
• Computers don’t appear in WSUS. This typically results from a misconfiguration of the client computer or a configuration problem with a GPO that’s not applied to the client computer.
• WSUS server stops with full database. When this happens, you’ll notice a SQL Server dump file (SQLDumpnnnn.txt) in the LOGS folder for SQL Server. This typically occurs because of index corruption in the database. You might need assistance from a SQL Server database administrator (DBA) to recreate indexes. Alternatively, you might need to re-install WSUS to fix the problem.
• You can’t connect to WSUS. Start by verifying network connectivity, and also ensure that the client computer can connect to the ports that WSUS is using.
• Other problems. Consider using the server diagnostics tool and the client diagnostics tool available from Microsoft.
Lesson 3: Overview of PowerShell Desired
State Configuration
You can use Windows PowerShell DSC (Desired State Configuration), a component of the Windows
Management Framework, to manage and maintain systems by using declarative configurations.
By completing this lesson, you’ll achieve the knowledge and skills to describe:
• The benefits of Windows PowerShell DSC.
• The requirements for Windows PowerShell DSC.
• How to implement Windows PowerShell DSC.
• How to troubleshoot Windows PowerShell DSC.
Benefits of Windows PowerShell DSC
Windows PowerShell DSC is an extension of Windows PowerShell and the Windows Management
Framework. With Windows PowerShell DSC, you deploy a configuration that instructs Windows
PowerShell what you want to do (a declarative approach), rather than creating a Windows
PowerShell script to execute a sequence of commands (an imperative approach).
Note: By using DSC, you don’t have to worry about including error handling or other logic,
because the underlying automation framework manages that automatically.
When using an imperative approach (Windows PowerShell), the scripts:
• Define how tasks must be performed.
• Can be hard to read.
• Won’t rerun themselves and must be rerun through an administrative action to re-apply settings, if necessary.
• Require custom logic to detect and correct configuration drift.
However, when you use a declarative approach (Windows PowerShell DSC), configurations:
• Define what should be done.
• Are easier to understand.
• Reapply as necessary, at whatever interval you choose.
• Use the logic built into DSC resources to detect and correct configuration drift.
When planning to use DSC, it’s important to consider that DSC:
• Relies on resources. These are the building blocks used to author configurations. By default, DSC includes resources that you can use to manage basic components of the Windows OS, such as services, files, and registry settings.
• Can automatically reapply any deployed configurations whenever it detects that the system has deviated from the desired state. This is very useful, because configurations change over time.
• Is scalable. You can use DSC in a variety of environments, large or small, centralized or decentralized.
• Doesn’t require that computers belong to an AD DS domain. This enables you to configure domain-joined and workgroup-based computers.
• Is standards-based and is built around the Open Management Infrastructure (OMI) model. Therefore, you can also use it to manage any OS with an OMI-compliant Common Information Model (CIM) server, such as CentOS, or other varieties of Linux.
Requirements for Windows PowerShell DSC
Authoring and deploying DSC configurations for your organization requires that you perform
multiple steps, including:
1. Enable Windows Remote Management. DSC relies on Windows Remote Management (WinRM),
so you must configure WinRM listeners on the computers that you want to manage. By default,
WinRM is enabled on Windows Server. However, you can enable WinRM on individual
computers with the Set-WSManQuickConfig cmdlet or use Group Policy for domain-joined
computers.
2. Configure the Local Configuration Manager. The Local Configuration Manager (LCM) agent processes DSC configurations on the computers you manage. Configure the LCM by using a special Managed Object Format (MOF) file that sets the LCM-specific parameters. You apply the configuration with the Set-DscLocalConfigurationManager cmdlet.
3. Install desired modules. The modules developed for DSC are available in the Windows PowerShell Gallery located at https://www.powershellgallery.com. To install modules from the Windows PowerShell Gallery, use the PowerShellGet module. For example, to install the xComputerManagement module, run the following Windows PowerShell command: Install-Module -Name xComputerManagement.
4. Create and compile a basic DSC configuration. After you have met all prerequisites and
installed the desired module(s) on the target servers that you want to configure, you can
begin authoring configuration scripts by using DSC resources. Configuration scripts don’t
actually modify target systems. Configuration scripts are only a template that you use to
compile an MOF file that the LCM agent pushes to or pulls from the target system. You can
author configuration scripts in any Windows PowerShell script or text editor. The configuration
is called, much like a function, to compile the configuration data into MOF files for each
defined node.
5. Deploy the configurations to the desired servers. After you have compiled the configuration into a .mof file, you push the configuration to the LCM on the target node by using the Start-DscConfiguration cmdlet. Running this command invokes the LCM agent to process the configuration, and if necessary, make changes on the target node. For example, to deploy a configuration named LON-SVR1.mof, run the following command: Start-DscConfiguration -Wait -Verbose -Path C:\DSC -ComputerName LON-SVR1.
Implement Windows PowerShell DSC
DSC configurations are Windows PowerShell scripts that define a function. To create a
configuration, use the Windows PowerShell keyword Configuration in a .ps1 file:
Configuration ContosoDscConfiguration {
    Node "LON-SVR1" {
        WindowsFeature MyFeatureInstance {
            Ensure = "Present"
            Name   = "RSAT"
        }
        WindowsFeature My2ndFeatureInstance {
            Ensure = "Present"
            Name   = "BitLocker"
        }
    }
}
The preceding example depicts a sample configuration script. A typical configuration script
consists of at least three parts:
• The Configuration block. The outermost script block. You define it by using the Configuration keyword and providing a name. In the preceding example, the name of the configuration is ContosoDscConfiguration.
• One or more Node blocks. These define the nodes (computers or virtual machines [VMs]) that you’re configuring. In the preceding example configuration, there’s one Node block that targets a computer named LON-SVR1.
• One or more Resource blocks. This is where the configuration sets the properties for the resources that it’s configuring. In the preceding example, there are two resource blocks, each of which call the WindowsFeature resource.
Tip: Within a Configuration block, you can do anything that you could do in a Windows
PowerShell function.
Before you can use a configuration, you must compile it into a MOF file. You do this by calling the
configuration the same way that you would call a Windows PowerShell function. For example, to
compile the preceding example, run the following command:
.\ContosoDscConfiguration.ps1.
When you call the configuration, it creates:
• A folder in the current folder with the same name as the configuration.
• A file named NodeName.mof in the newly created folder, where NodeName is the name of the target node of the configuration. If more than one node is targeted, a MOF file will be created for each node.
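Putting the pieces together, a typical authoring session might look like the following sketch. It assumes the script doesn’t call the configuration itself, so you dot-source the script first and then invoke the configuration by name:

# Dot-source the script to define the ContosoDscConfiguration configuration
. .\ContosoDscConfiguration.ps1

# Call the configuration to compile LON-SVR1.mof into C:\DSC
ContosoDscConfiguration -OutputPath C:\DSC

# Push the compiled MOF to the target node and watch the verbose output
Start-DscConfiguration -Path C:\DSC -ComputerName LON-SVR1 -Wait -Verbose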
Troubleshoot Windows PowerShell DSC
There are two steps for troubleshooting Windows PowerShell DSC:
1. Review the available logs.
2. Recycle the DSC cache to clear any scripts stored in memory.
Use Windows PowerShell DSC logs to diagnose script errors.
Review logs
Windows PowerShell DSC records errors and events in logs that you can access by using Event
Viewer. Reviewing these logs helps you identify why a script or operation failed and might suggest
ways to resolve the problem, and even help avoid similar future failures.
Writing configuration scripts is complex, so it can be hard to identify errors. You can use the DSC Log resource to track the progress of your configuration in the DSC Analytic event log.
Tip: In Event Viewer, you can find DSC events in Applications and Services
Logs\Microsoft\Windows\Desired State Configuration.
There are several tools you can use to analyze DSC logs, such as xDscDiagnostics, which is a Windows PowerShell module consisting of two functions:
• Get-xDscOperation. Enables you to locate the results of the DSC operations that run on one or multiple computers. Returns an object that contains the collection of events produced by each DSC operation.
• Trace-xDscOperation. Returns an object containing a collection of events, their event types, and the message output generated from a particular DSC operation.
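For example, a quick review of recent DSC activity might look like this sketch; it assumes the xDscDiagnostics module has been installed from the PowerShell Gallery:

# Review the most recent entries in the DSC operational log
Get-WinEvent -LogName 'Microsoft-Windows-Dsc/Operational' -MaxEvents 20

# List the five most recent DSC operations, then trace the latest one in detail
Get-xDscOperation -Newest 5
Trace-xDscOperation -SequenceId 1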
How to reset the cache
The DSC engine caches resources implemented as a Windows PowerShell module, for efficiency.
However, this can cause issues when you’re authoring and testing a resource simultaneously,
because Windows PowerShell DSC loads the cached version until the process restarts. The only
way to make Windows PowerShell DSC load the newer version is to explicitly end the process
hosting the Windows PowerShell DSC engine. To successfully recycle the configuration and clear
the cache without restarting, you must stop and then restart the host process. You can do this on
a per-instance basis, whereby you identify the process, stop it, and restart it.
To identify which process is hosting the DSC engine and stop it on a per-instance basis, you can
list the process ID of the WmiPrvSE that’s hosting the DSC engine. Then, to update the provider,
stop the WmiPrvSE process, and then run Start-DscConfiguration again.
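The documented approach identifies the WmiPrvSE instance that hosts the DSC engine through the msft_providers class; roughly:

# Find the WMI host process that loads the DSC engine (the dsccore provider)
$dscProcessId = Get-CimInstance -ClassName msft_providers |
    Where-Object { $_.Provider -like 'dsccore' } |
    Select-Object -ExpandProperty HostProcessIdentifier

# Stop it; WMI restarts the host on demand, which clears the cached resources
Get-Process -Id $dscProcessId | Stop-Process -Force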
Lesson 4: Overview of Windows Server
monitoring tools
Before you can begin monitoring your servers, it’s important to understand the tools that are
available and how to use them. Windows Server provides many built-in monitoring tools. However,
you can also download and install the Windows Sysinternals tools to help monitor your servers.
By completing this lesson, you’ll achieve the knowledge and skills to:

Describe tools in Windows Server for monitoring.

Use Server Manager for monitoring.
Overview of Task Manager
Task Manager provides information about processor, memory, disk, network, and GPU
performance on a computer. Additional tabs can provide significant information about your
servers, including:
• Processes. Lists the currently running processes and services, and their corresponding consumption of CPU, memory, disk, and network. For troubleshooting, you can sort apps by resource usage, such as the CPU. This would highlight an app or service that’s over-consuming CPU.
• Performance. Provides real-time readings for overall CPU, memory, network, and disk usage. Information is displayed in text and line-graph format. This tab is helpful for troubleshooting overall performance issues. For example, if the system seems slow, this tab might reveal that the CPU is running at 100 percent utilization.
• Users. Displays which users are connected and what processes each user is running. From here, you can disconnect users forcibly.
• Details. Provides advanced information about a process or service. App developers might find this information useful, and it provides the user context in which the process is running and whether User Account Control (UAC) virtualization is enabled. These settings are used primarily when troubleshooting legacy applications, such as those written for Windows XP. You can also make advanced adjustments, such as changing a process’ priority level and setting processor affinity. Changing a process’ priority makes it relatively faster or slower than other processes. The affinity of a process allows you to specify on which processors (or cores) the process runs. Don’t change priority or affinity unless you understand the implications, as you could potentially destabilize the system.
• Services. Displays services by name and lists their statuses as either running or stopped. The Services tab lists which services are running, which can be helpful, as performance issues sometimes occur when too many unnecessary services are running. Select the blue Open Services link to open the Services administrative console, in which you can reconfigure settings for services and disable unnecessary ones.
You can also terminate applications or services in Task Manager, from any tab with the individual
applications or services. This is helpful when you need to identify and disconnect services that
have stopped responding.
Overview of Performance Monitor
Performance Monitor provides real-time performance data, and you then can save that data in log
files for later analysis. By taking regular snapshots of performance counters, you can detect trends
and extrapolate them to predict and plan for potential problems.
Performance Monitor tracks the local computer’s performance, but it can also track a remote
computer’s performance over the network. This can be useful because running Performance
Monitor creates an overhead that can contaminate readings. For example, if you enable logging
in Performance Monitor and track disk usage, you’ll find that the disk usage increases because
logging requires constant writing to disk. In that scenario, you might be better off tracking disk
performance remotely from another computer that’s running Performance Monitor.
Planning for future upgrades and replacements becomes easier if you can predict how usage will increase. For example, suppose you track a web server over time. You might notice that as a website it hosts becomes more popular, CPU usage rises by 5 percent a month. By extrapolating that data, you might be able to predict that in five months, CPU usage will reach 100 percent of capacity. That information allows you to budget and plan for an upgrade or replacement of the web server before its performance becomes a problem.
Performance Monitor has three live views for tracking current activity: a line graph, a histogram graph, and a numerical report. You can switch between views by pressing Ctrl+G. Line graph is the default view. As its name indicates, it displays real-time information as a line graph. The histogram graph displays the same information as a bar chart, and the report view displays a written report of the actual values being read every second.
Objects, counters, and instances
Central to the use of Performance Monitor is the concept of objects and counters. Software and
hardware components are considered objects, and each object supports several counters that can
be measured. If there are multiple instances of an object, you can specify which ones you want to
track, or you can specify the total figures for all instances.
An example of a hardware component object is the PhysicalDisk object, which has counters such as %Disk Time that track the percentage of time a disk is busy reading or writing. You can also track individual instances of an object and counter. For example, assuming you have two physical disks installed, you could track instance 0, instance 1, or _Total. This would track the first disk, second disk, or the total values for both.
An example of a software component object is TCPv4, which tracks the IPv4 version of TCP/IP
protocol. Its counters include Segments/sec, which tracks how many Transmission Control
Protocol (TCP) segments are being transmitted or received every second. In this case, there isn’t
any choice for an instance, because there’s only one instance of the TCPv4 protocol.
Getting help within Performance Monitor is relatively easy. When you add a counter to track, you can select the Show description option. This presents the Description box with a brief explanation of what the counter measures.
Windows PowerShell
Several PowerShell cmdlets allow you to gather real-time performance information from local or remote computers. For example, you can use Get-Counter to retrieve performance counter data, or Get-WinEvent to retrieve events from the event logs. You can utilize these and other cmdlets in PowerShell scripts to automate information gathering from local or remote computers.
With the Get-Counter cmdlet, you can gather information at a specified interval or from a specific number of samples, or you can monitor continuously. For example, the following command displays CPU usage data continuously, sampling at a rate of one time per second:
Get-Counter -Counter "\Processor(_Total)\% Processor Time" -Continuous
To sample at a different frequency, use the SampleInterval parameter.
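For instance, the following sketch collects ten samples, five seconds apart; you could target a remote computer by adding the ComputerName parameter:

# Take 10 CPU samples at 5-second intervals
Get-Counter -Counter "\Processor(_Total)\% Processor Time" -SampleInterval 5 -MaxSamples 10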
Overview of Resource Monitor
Resource Monitor provides more detailed information than what’s available in Task Manager. It has a tab for each major component (CPU, Memory, Disk, and Network), with details about each running process and service. A particularly useful chart on the Memory tab displays graphically how RAM is being utilized, how much is free, and how much is used for caching. Therefore, a memory bottleneck should be more easily identifiable because the amount of free memory would be low.
The TCP connections and Listening ports sections on the Network tab aren’t necessarily related to
performance but are useful from a security perspective. TCP connections displays the computers
to which your computer is currently connected. The Listening ports section can be useful in a
security audit because it displays which incoming connections your computer will accept.
Overview of Reliability Monitor
Reliability Monitor provides a graphical representation of recent problems on your server
computer, as it generates a stability index based on its assessment of your server’s reliability.
This index is between 1 and 10, where the highest number represents the most reliable server.
You can review reliability data going back several weeks or simply focus on the last few days.
After reviewing a server’s stability, you can review details for specific days, including the number of
application failures, Windows failures, miscellaneous failures, warnings, and information events.
Beneath the Reliability details heading for the selected day, you can select each event that was
recorded by Reliability Monitor and locate additional information. Just select the context menu for
the event, and then select View technical details. Additional information is displayed in a separate
window.
There are also links in the Reliability Monitor window that enable you to:
• Save reliability history.
• View all problem reports.
Tip: You can access Reliability Monitor from the Control Panel.
Overview of Event Viewer
Windows maintains hundreds of logs that track system activities, applications, and software and
hardware components. For example, one log is dedicated to Group Policy, where you can review
all events related to Group Policy application. Additionally, error messages that appear are often
logged in the appropriate log file, and then you can use Event Viewer to work with the event logs.
Event Viewer allows you to browse logs and review details of each event that’s logged. Additionally,
you can:
• Review events from multiple logs by creating a custom view. One preconfigured custom view is Administrative Events, which combines warning and error messages from several logs.
• Trigger alerts when certain events occur. By selecting Attach task to this event from the context menu of an event, you can trigger a program such as a batch file or PowerShell script to run if that event occurs again.
• Have events forwarded to another computer, known as the collector. Using Event Viewer, you can have predefined events forwarded to another computer, which is helpful for identifying issues without having to check each server. For example, you could configure several servers to forward logon-failure events to a central collector computer. On the collector, you would need to configure a subscription, which accepts incoming events or which you can configure to pull those events at specified intervals. Event Viewer has a default log named Forwarded Events, where forwarded messages are stored on the collector computer. A minimal setup sketch follows this list.
Events
Events logged in Event Viewer are categorized in one of three ways: information, warning, or error
messages. The details of an event that’s logged in Event Viewer can include the following
information:
• A description of the message. Sometimes, a verbose description of the event displays.
However, the description could also be short and cryptic, and not particularly helpful.
• An Event ID number. If the event's description isn't very clear, it's worth searching the internet
for the applicable event ID number. You might find helpful information about what the
message means.
• Date and time of the event. Event Viewer displays the date and time an event occurred.
• The name of a user account. If a specific user account was involved, Event Viewer includes this
information.
• The name of a computer. If a specific computer account was involved, Event Viewer includes
this information.
• An online link. Event Viewer tries to find online information about the event. If it does, it
includes a link that sends the event details and enables you to find out more about the error.
The logs in Event Viewer
Event Viewer has five major logs that contain useful error information:
• Application log. Contains error or information messages that applications log.
• Security log. Displays security events only if you have set up auditing on the computer. For
example, you can set up auditing to track failed and successful sign-in events.
• Setup log. Contains error or information messages related to Windows component
installation or activities.
• System log. Contains error or information messages related to the Windows OS.
• Forwarded Events. Contains events forwarded from other computers to this computer, which is
acting as the collector.
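You can also read these logs from PowerShell with the Get-WinEvent cmdlet; a brief sketch:

# List the logs that currently contain records.
Get-WinEvent -ListLog * | Where-Object { $_.RecordCount -gt 0 }

# Retrieve the ten most recent entries from the System log.
Get-WinEvent -LogName System -MaxEvents 10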
Note: There are many other logs you can find in Event Viewer. It’s always worth reviewing
them to determine if the particular feature you’re having problems with has its own log. For
example, Event Viewer also has logs for hardware events, Group Policy, and AppLocker.
Monitor a server with Server Manager
If you have relatively few server computers, you can use Server Manager to monitor those servers.
This becomes less practical in enterprise organizations with hundreds of servers.
Note: You can monitor only up to 100 servers in Server Manager.
This is especially true of hybrid environments, where your servers might be hosted in local
datacenters and also in a cloud-provider’s datacenter.
Tip: Server Manager is installed by default on Windows Server with Desktop Experience.
If you use Server Manager to monitor your on-premises server environment, you can perform the
following tasks:
• Add remote servers to a pool of servers that you can monitor.
• Monitor both Server Core and Desktop Experience versions of Windows Server.
• Group your servers, such as by city or department, to make it easier to assign a specific
administrator for monitoring a group of servers.
• Launch tools on remote servers to perform specific tasks.
• Monitor critical events to ensure server availability.
Lesson 5: Use Performance Monitor
It’s usually the case that Windows Server computers are over-specified, which means they often
have more resources, such as memory or processor capacity, than they really need. This is likely
because IT administrators worry about underspecifying servers. Despite this, it's still
worth monitoring your servers’ performance to ensure adequate throughput.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe baseline, trends, and capacity planning.
• Describe data collector sets.
• Describe how to monitor VMs.
Overview of baseline, trends, and capacity planning
A bottleneck is the hardware or software component that limits a computer's performance. By
finding and eliminating the bottleneck, you can help improve performance. In a sense, there will
always be a bottleneck—simply defined, it's the slowest component.
However, fixing one bottleneck could expose another component as a bottleneck. This is invariably
the case because unless all components are perfectly matched, some component will be the
slowest in any given configuration.
The goal isn’t to chase bottlenecks endlessly, but rather to understand when performance is
acceptable and meeting expectations. For example, you might determine that memory is the
bottleneck. After installing and configuring extra memory, you discover that the disk has now
become the slowest component. So, should you now upgrade the disk subsystem? That decision
would be based on several factors, such as whether the server is now operating at the expected
performance level and whether performance improvements from further upgrades will be
marginal. If so, you might decide that the cost and disruption of replacing the disk subsystem
isn’t justified.
Potential hardware bottlenecks
Four major hardware components could become bottlenecks in a server:
• Processor. Server processors (or CPUs) coordinate all other components, such as the physical
disk and network interface, and process data. Application servers run powerful, heavy-duty
applications that depend on good CPU performance. Modern computers support symmetric
multiprocessing (SMP), which can utilize multiple physical CPUs on the motherboard. Even if
the motherboard doesn't support multiple CPUs, all modern computers support multi-core
CPUs. Another feature of CPUs that can impact performance is the amount of on-board cache
built into the CPUs. Generally, the larger the caches, the better the performance.
• The disk subsystem. Includes the types of controllers together with the physical disks. A disk
subsystem can impact performance if the server's role requires it to frequently read and
write to the physical disk. An obvious example is a file server, but a less obvious one is a busy
database server. Because databases are often updated in real time, and data is retrieved
concurrently from the disk, this involves a lot of disk input/output (I/O). There are several
ways of improving disk subsystem performance. Moving from mechanical hard drives to
solid-state drives (SSDs) often results in dramatic performance improvements. You can also
evaluate using better controller cards with onboard caching or upgrading the technology,
such as replacing Integrated Drive Electronics (IDE) cards and disks with Serial Advanced
Technology Attachment (SATA) controllers and disks.
• Memory. Memory must be large enough to store the applications you want to run concurrently
and the data that those applications require. For example, applications that manipulate large
images or videos would benefit from enough random access memory (RAM) to load an entire
image or video into RAM while they're being edited. RAM used to be an expensive component,
and the motherboard or system BIOS limited how much you could install. The Windows OS has,
therefore, always supported virtual memory.
Virtual memory simulates RAM by using disk space. If a
system doesn’t have enough RAM, a process called paging temporarily moves data out of
memory and saves it to the disk. Then, when that data is required, it’s paged back from the
disk into RAM. Older mechanical hard disks were many times slower than RAM, and excessive
paging could severely impact performance. The solution is to provide adequate RAM to
minimize the need for paging. Paging is somewhat less important since the advent of SSDs,
which are significantly faster than mechanical hard drives, but still slower than RAM.

• Network interface. The network subsystem refers to the number and type of network interface
cards (NICs). Finding network bottlenecks is inherently more difficult than with CPU, memory,
or disk, because network performance can depend on a variety of external factors, such
as how busy the network is, or the speed of switches, routers, firewalls, or cabling. To
troubleshoot, you might want to track performance on several computers connected to the
same switch. If they all have below-par performance, it might be that the switch is the problem,
not the network interface. However, if you determine that the bottleneck is indeed within the
server, there could be several options to improve performance, such as installing multiple NICs
and utilizing the NIC Teaming feature in Windows Server, or upgrading to faster NICs or NICs
with large on-board caches.
The server’s role determines which of these components is more important to monitor and
upgrade. For example, disk and network interfaces are the more obvious candidates for file-server
performance monitoring. A file server often fetches files from the disk and transmits them to
clients over the network. Perhaps surprisingly, additional memory could help the file server cache
frequently requested files, so memory could also be a candidate for an upgrade.
Mitigation in the case of hardware bottlenecks usually involves upgrading hardware. However, this
might not be possible in some cases. For example, upgrading the CPU to a faster one or adding
more RAM might not be supported by the motherboard or BIOS in the server. In that scenario, you
could be faced with replacing the entire server.
Establish a baseline
It’s important that you understand the specific components that you need to monitor, and how to
interpret data you gather and determine a course of action based on that interpretation. A good
place to start is to create a baseline.
Data collector sets are often used to create a baseline. A baseline measures the performance of
the various components and stores the results for later comparison. You should create a baseline early
in the deployment phase and again when the server is experiencing a normal workload. Later, if
performance issues arise, you can measure performance counters again and compare them to
the baseline. For example, assume a server had an initial baseline CPU usage of 50 percent, as
measured by the Processor object’s % Processor time counter. If the server begins experiencing
problems, you could measure that counter again and compare it to the original baseline. Using this
information, you can begin troubleshooting. If the processor use is significantly greater, that could
indicate that the processor is the problem.
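As a sketch of how you might capture such a baseline from the command line, the built-in logman tool can create a counter-based data collector set. The counter paths, sample interval, and output path below are illustrative:

# Create a data collector set that samples four key counters every 15 seconds.
logman create counter ServerBaseline -c "\Processor(_Total)\% Processor Time" "\System\Processor Queue Length" "\Memory\Pages/sec" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 00:00:15 -o C:\PerfLogs\ServerBaseline

# Start the collection, and stop it once a representative workload has run.
logman start ServerBaseline
logman stop ServerBaseline

You can open the resulting log in Performance Monitor and compare it against a later collection.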
Your baseline should include data for a number of resources, including four key components:
• Processor
• Memory
• Disk
• Network
Processor
Processor and System objects allow you to track CPU performance within a computer. Useful
counters to track are:
• Processor > % Processor Time. Microsoft describes this as the percentage of time the CPU is
busy with nonidle threads. Effectively, this means it measures what percentage of the total
CPU capacity is being utilized. A reading of 100 percent for this counter indicates that there's
no spare capacity and the CPU might be a bottleneck. With 100 percent processor time,
requests to the CPU might have to be queued. In fact, Microsoft suggests that any reading
of more than 85 percent indicates that you should take remedial action, because at 85 percent
utilization, occasional peaks of activity will push it to 100 percent.
• Processor > Interrupts/sec. Interrupts are generated when a hardware device needs the
services of the CPU. For example, your disk subsystem might generate interrupts when it needs
to read the next set of blocks from the disk. An interrupt causes the CPU to suspend
what it's currently doing and service the request.
If you create a baseline when the server is operating with acceptable performance, you
can compare the Interrupts/sec value you're currently getting with the baseline. High values for
Interrupts/sec could indicate that the network or disk subsystems are inefficient. For example,
an efficient disk subsystem can read more from the disk in one operation than a less efficient
one can. An inefficient subsystem generates more interrupts than a more efficient subsystem
when reading the same amount of data. Also note that a sudden rise in Interrupts/sec could
indicate a faulty device that's issuing spurious interrupts.
• System > Processor Queue Length. This counter measures the number of outstanding requests
to the CPU(s). Ideally, the queue length should be 0, which indicates that all requests are being
processed immediately and no queues are building. Microsoft suggests that under normal
operations, the value shouldn't be continuously more than four.
Memory
Insufficient RAM in a system can severely impact performance. As mentioned earlier, when
Windows doesn't have enough RAM, it starts to page data to and from the disk. The use of disk
space to simulate RAM is referred to as virtual memory. Disks are many times slower than RAM.
Furthermore, the extra disk activity generated can slow disk performance and impact CPU usage
because more interrupts are being generated.
Two useful counters for identifying a memory shortage are:
• Memory > Pages/sec. Measures how many times per second the system must move data out of
RAM to the disk, and how many times it must retrieve data from the disk per second. It's best
to compare this value to other similar servers. If the number is greater on one server,
investigate whether it's short of RAM.
• Memory > Committed Bytes. Measures the total amount of physical and virtual memory
committed to all running apps, processes, and services. To determine how much RAM a system
needs, load all applications you want to run concurrently and then measure the Committed
Bytes value. If Committed Bytes exceeds the amount of the system's physical RAM, virtual
memory is being used to compensate for the shortfall.
Microsoft recommends that Committed Bytes should always be less than the amount of the
system's RAM. Ideally, it should be no more than 75 percent of RAM. For example, if you
have 4 gigabytes (GB) of RAM, Committed Bytes should remain below 3 GB. If the value is
consistently higher, you need to add more RAM or reduce the server workload.
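To apply the 75 percent guideline in practice, you can compare the counter against the installed RAM; a minimal sketch (treat the approach as illustrative):

# Read current committed bytes and installed physical memory, then compare.
$committed = (Get-Counter '\Memory\Committed Bytes').CounterSamples[0].CookedValue
$ramBytes  = (Get-CimInstance -ClassName Win32_ComputerSystem).TotalPhysicalMemory
'{0:P0} of physical RAM is committed' -f ($committed / $ramBytes)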
Disk
The PhysicalDisk object also has several useful counters, similar to the counters for the CPU,
including:
• PhysicalDisk > % Disk Time. Measures the percentage of time the disk subsystem is busy
reading or writing. Microsoft recommends this value should be below 50 percent.
• PhysicalDisk > Avg. Disk Queue Length. Tracks whether any queues are building because the
disk subsystem can't keep up with the requests being made to it. Ideally, this value should be
below four.
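As a quick spot check, you can sample both counters for the _Total instance; a brief sketch:

# Sample both disk counters once per second, five times.
Get-Counter -Counter "\PhysicalDisk(_Total)\% Disk Time",
                     "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -SampleInterval 1 -MaxSamples 5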
If you determine the disk subsystem is the bottleneck in a system, there are several solutions,
including:
• Reduce the workload. Identify the disk-intensive applications and consider moving them to
another server.
• Upgrade the disk controller and physical disks to a faster system.
• Replace hard drives with SSDs. If you have mechanical hard drives, consider replacing them
with SSDs.
• Implement hardware or software RAID. Software RAID is a feature available in Windows Server.
Hardware RAID, with dedicated hardware, provides a more robust and better-performing
system, but at a greater cost. Redundant Array of Independent Disks (RAID) utilizes multiple
physical drives to improve performance, fault tolerance, or both. You can implement RAID 0
(also known as disk striping) and RAID 5 (also known as striping with parity) in both software
and hardware. Both work by striping data across multiple physical disks. RAID 5 also provides
fault tolerance, because it stores parity information and continues to function even if a hard
disk in the array fails.
To understand striping, consider a RAID 0 configuration with two physical disks. When you save
a file, half the file is written to one disk and the other half to the second disk. This occurs
concurrently, so it takes roughly half the time to write the file as it would if you didn't implement
RAID. Similar benefits are achieved when you read the file from disk.
Network
You can use counters similar to the PhysicalDisk and Processor counters to identify network
interface bottlenecks:
• Network Interface > Bytes Total/sec. Tracks how busy the network interface is by displaying
the total number of bytes transmitted and received through the NIC. Compare this value to
your baseline to judge how heavily the interface is being used.
• Network Interface > Output Queue Length. Indicates whether requests to transmit data are
being queued. However, before deciding that the network interface is the bottleneck, verify
that the problem isn't actually because the network is too slow. To test this, track several
computers connected to the same switch. If they all display a large queue length, the problem
is likely with the switch, cabling, or other external factors.
• Network Interface > Current Bandwidth. Tracks the nominal network bandwidth. Compare this
reading against the expected bandwidth. For example, most networks these days operate at
1 gigabit per second (Gbps). If the computer being investigated displays the bandwidth as
100 megabits per second (Mbps), that likely indicates a mismatch between the switch and
the computer's NIC. Either the NIC might not support a gigabit network, or it might have been
misconfigured to operate at a lower bandwidth. The problem could also be the configuration of
the switch that the server is connected to.
If you determine that the NIC is a bottleneck, you can upgrade to a faster NIC (if one is available).
You can also install multiple NICs and then team them. NIC teaming, which Windows Server
supports natively, increases throughput by sending and receiving data simultaneously through
multiple NICs. It also provides fault tolerance, because if one of the NICs fails, the team continues
to function.
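Creating a team from PowerShell is straightforward with the built-in NIC Teaming cmdlets; a minimal sketch in which the team and adapter names are hypothetical:

# Create a switch-independent team from two physical adapters.
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet 1", "Ethernet 2" -TeamingMode SwitchIndependent

# Verify the team and its members.
Get-NetLbfoTeam -Name "Team1"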
What are data collector sets?
Data collector sets enable you to create a baseline and then gather current performance data
over time so that you can compare the two.
The Performance Monitor live-view charts are useful for tracking the real-time impact of activities. For
example, you could add the suggested counters for the processor, memory, physical disk, and
network interface objects to a chart and then monitor them as you launch an application or
perform a task. This information could help you determine the real-time impact on the server when
that application is launched or when a task is completed.
However, in most instances, you'll want to capture data over longer time periods. This is when
Performance Monitor data collector sets are more useful: you can capture data and store it in a
log file for later analysis or to create a baseline.
In data collector sets, you can gather data from:
• Performance counters. These are the same objects and counters as in the real-time view.
• Event trace data. This provides detailed information about a particular Windows component.
You can choose a variety of providers, such as Active Directory: Kerberos client. The trace logs
created when you track event trace data are useful in advanced troubleshooting.
• System configuration information. This collector captures the current registry settings.
There are two preconfigured data collector sets:
• System Diagnostics. This set generates a report detailing:
  o The status of local hardware resources.
  o Processes.
  o Configuration data.
  o System response times.
  o System information.
  o Suggestions to maximize performance and streamline system operations.
• System Performance. This set generates a report detailing the status of:
  o System response times.
  o Information to identify possible causes of performance issues.
  o System processes.
Note: On domain controllers (DCs), there’s also an Active Directory Diagnostics data
collector set.
Demonstration: Review performance with Performance
Monitor
For high-level demonstration steps, refer to the Notes, which are in the accompanying Microsoft
PowerPoint presentation.
Monitor network infrastructure services
Network infrastructure server roles, such as DNS and DHCP, provide critical services on your
organization's network. If these services are unavailable or performing poorly, the impact can
be significant. You should therefore consider monitoring these server roles.
Start by capturing relevant Performance Monitor objects and counters into a baseline as described
earlier. Then, periodically, revisit the data collector sets used to create the baseline and measure
the current statistics so that you can determine relative performance. It's also worth remembering
that degradation in a service's throughput, such as DNS, might indicate underlying
problems that you should investigate further.
Monitor DNS
Name resolution using DNS is a foundational network service; without it, much of your network
infrastructure would be impaired. Use the following guidance to monitor DNS:
• Measure and review general DNS server statistics. Be sure to include the number of queries
and responses that DNS servers are handling.
• Review both TCP and User Datagram Protocol (UDP) counters, as DNS uses both these
transport layer protocols for handling name-resolution queries and responses.
• Review dynamic updates to determine the workload generated by client computers that are
updating their IP configuration with the DNS server.
• Track memory usage and memory allocation arising from the DNS server role.
• Review counters for recursive lookups. These arise when the DNS server isn't authoritative for
a requested record and needs to query other DNS servers.
• Review zone transfer traffic. Remember that zone transfers are managed by Active Directory
replication when you have AD-integrated zones.
Tip: You can accomplish much of the basic DNS monitoring by using the DNS Manager
console on a server computer with the DNS role installed.
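The DnsServer PowerShell module can also retrieve many of the statistics described above; a hedged sketch using Get-DnsServerStatistics (the property names shown are assumptions based on recent Windows Server versions):

# Retrieve overall DNS server statistics on a server with the DNS role installed.
$stats = Get-DnsServerStatistics
$stats.QueryStatistics     # queries and responses handled
$stats.MemoryStatistics    # memory usage arising from the DNS role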
Monitor DHCP
DHCP allocates IP configurations to your infrastructure's computers. To verify the performance of
the DHCP server role, use the following guidance:
• Review the Average Queue Length counter in Performance Monitor. This indicates the number
of unprocessed messages awaiting action by the DHCP role. If this number is large, it suggests
a possible performance bottleneck.
• Review the Milliseconds per packet (Avg) counter. This indicates how long (on average) the
DHCP server takes to respond. A large number, and certainly one that is rising, could indicate a
performance problem.
Tip: You can accomplish some basic DHCP monitoring by using the DHCP console.
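For a scripted snapshot, the DhcpServer module's Get-DhcpServerv4Statistics cmdlet summarizes discovers, offers, acks, and scope utilization; a brief sketch (the remote server name is hypothetical):

# Summarize DHCPv4 activity on the local DHCP server.
Get-DhcpServerv4Statistics

# Or target a remote DHCP server.
Get-DhcpServerv4Statistics -ComputerName "LON-DHCP1"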
Considerations for monitoring VMs
These days, most organizations virtualize server workloads. By combining the workloads processed
by multiple physical server computers into VMs running on a single host, you can ensure you don’t
waste server resources. However, if you combine server workloads, make sure you don’t
underspecify their resources.
Note: It’s calculated that servers are over-specified by as much as 40%.
Microsoft provides the Hyper-V Resource Metering tool to enable you to monitor resource
allocation on your VMs. By using this tool, you can monitor each VM for:
• Average CPU use.
• Average physical memory use, including:
  o Minimum memory use.
  o Maximum memory use.
• Maximum disk-space allocation.
• Incoming network traffic for a network adapter.
• Outgoing network traffic for a network adapter.
Important: Remember to monitor the host server and the guest VMs running on the host.
You can use Windows PowerShell to configure and review resource-metering statistics, using the
following cmdlets on a per-VM basis:
• Enable-VMResourceMetering. Starts collecting data.
• Disable-VMResourceMetering. Stops resource metering.
• Reset-VMResourceMetering. Resets the counters.
• Measure-VM. Displays statistics for a specific VM.
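For example, a typical sequence against a single VM might look like the following sketch, in which the VM name is hypothetical:

# Begin metering, let the VM run under a normal workload, then review the figures.
Enable-VMResourceMetering -VMName "LON-SVR1"
Measure-VM -VMName "LON-SVR1"

# Reset the counters to start a fresh measurement window.
Reset-VMResourceMetering -VMName "LON-SVR1"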
Lesson 6: Monitor Event Logs
The fundamentals of event logs were described earlier in this module. You should also know how
to use event logs to troubleshoot your Windows Server computers.
By completing this lesson, you'll achieve the knowledge and skills to:
• Describe event logs.
• Describe custom views.
• Describe event subscriptions.
Use Server Manager to review event logs
Server Manager displays a summary of event-log data on its Dashboard. For each installed role,
you can select Events to review pertinent events. By default, the Events page displays Critical
events over the last 24 hours for the selected role. However, you can change this to include
additional event-severity levels and adjust the time period to suit your requirements.
Event information for both the local server and remote servers is available in the All Servers tab. In
the EVENTS section, you can review recent critical events. In the EVENTS section, choose TASKS
and then select Configure Event Data. You can now adjust the severity levels and time period for
your events. Select OK, and then the appropriate events display.
If you prefer, you can select the appropriate role in the navigation pane, such as DHCP, and then in
the details pane, in the EVENTS section, review the related events. Again, you can select TASKS,
and then select Configure Event Data to modify the severity level or timing for your events.
What is a custom view?
Because event logs can contain a very large number of events, it’s often difficult to find the
specific data you’re interested in. However, you can use a custom view to focus on what you
need to investigate.
To create a custom view, in Event Viewer, in the Action pane, select Create Custom View. Then, on
the Filter tab, configure the following:
• Logged. Choose the time period you want to review. The default is Any time.
• Event level. Select Critical, Warning, Verbose, Error, or Information. None are selected by
default.
• By log. Select which logs you're interested in adding to the view. None are selected by default,
but you can select any or all Windows Logs, and Applications and Services Logs.
• By source. Choose the event source. You can select multiple values, and none are selected by
default.
Tip: You choose either By log or By source, not both.
• Additional options. You can also select specific event IDs by entering them in a text box, select
keywords, or choose specific users or computers.
When you’re happy with your selections, select OK. A new dialog box appears. Enter a name and
description to associate with your custom view, and then select OK. When reviewing events in a
custom view, you can also filter the custom view to pinpoint specific events that it displays.
Tip: This new custom view, and any others, are accessible from the Custom Views folder in
the navigation pane.
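If you need the same kind of filtered view in a script, the FilterHashtable parameter of Get-WinEvent gives a rough equivalent of a custom view; a sketch in which the log name, level, and time window are illustrative:

# Errors (level 2) from the System log over the last 24 hours.
Get-WinEvent -FilterHashtable @{
    LogName   = 'System'
    Level     = 2
    StartTime = (Get-Date).AddDays(-1)
}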
Demonstration: Create a custom view
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
What are event-log subscriptions?
One of the downsides of using Event Viewer is that it’s manual. You must open Event Viewer on
each server computer to review events. However, there’s a solution. You can create event-log
subscriptions.
An event-log subscription enables you to gather events, filtered if required, to a single
management computer where you can review those events in a single log.
Event-log subscriptions work by using two services:
• Windows Remote Management (WinRM). Must be enabled and running on the source
computers.
• Windows Event Collector service (Wecsvc). Runs on the collector computer.
You can configure your subscriptions to be either collector-initiated or source computer-initiated:
• Collector-initiated. Sometimes known as a pull subscription. The collector contacts each
source computer and retrieves its events, which means you must identify every source
computer in the subscription. This approach suits a small, known set of source computers.
• Source-initiated. Sometimes referred to as a push subscription. You define the subscription on
the collector, and you configure the source computers, typically through Group Policy, to send
their events to the collector. Use push subscriptions when you want to configure many
computers to forward the same types of events to a single management computer.
Enable subscriptions
You must complete a number of steps to enable subscriptions for event logs. On the source
computers, run the following command at an elevated command prompt:
winrm quickconfig
On the collector computer, run the following command at an elevated command prompt:
wecutil qc
The final step is to add the computer account of the collector computer to the Event Log Readers
security group on each of the source computers.
After you’ve completed these steps, you should be able to review the collected log data in the
Forwarded Events folder.
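That final step can also be scripted on each source computer; a minimal sketch in which the domain and collector computer names are hypothetical:

# Allow the collector's computer account to read this computer's event logs.
Add-LocalGroupMember -Group "Event Log Readers" -Member 'ADATUM\LON-SVR2$'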
Demonstration: Configure an event subscription
For high-level demonstration steps, refer to the Notes, which are in the accompanying PowerPoint
presentation.
Lab 13: Implement WSUS and deploy
updates
Please refer to our online lab to supplement your learning experience with exercises.
Knowledge check
Check your knowledge by answering these questions.
1. What does replica mode in WSUS mean?
2. Which two services are used when you configure event log subscriptions?
3. Which performance-monitoring tool offers the most complete solution to investigating Windows
Server performance issues?
4. In a server that’s running Windows Server 2022 and that shares files, which hardware
resource is most likely to be bottlenecked?
Note: To find the answers, refer to the Knowledge check slides in the accompanying
Microsoft PowerPoint presentation.
Thank you for choosing a course from Waypoint Ventures
Responsive and innovative, Waypoint creates learning products that satisfy and inspire.
Beyond the personal transformation of becoming informed, skilled, and possibly certified, your
customer develops trust in your products. We believe that trust leads them to become influencers,
drawing new learners to your products.
Your students are hungry to learn. Waypoint is ready, with our team of subject-matter experts and
instructional designers, to partner with you. We are experts in learning content design and
development, and we can deliver excellence to diverse audiences across diverse platforms. Check
out our brag reel on YouTube.
Contact us anytime
Please call on us directly to give us feedback or learn how we can help you find solutions to your
learning needs.
• Email: hello@waypoint.ws
• Phone: (415) 779-8144
• Website: http://www.waypoint.ws
• Follow us on LinkedIn: Waypoint Ventures LLC | LinkedIn
• Follow us on Twitter: @WaypointPjM