
Contents

1. S2D Micro-Cluster from Thomas-Krenn

2. Building a Hyper-Converged Solution

3. Azure Stack HCI - the Microsoft-certified solutions for Storage Spaces Direct

4. Micro-Node-Cluster from Thomas-Krenn as Azure Stack HCI certified solution

5. What about the redundancy in such a compact 2-node solution?

6. Setting up the Thomas-Krenn S2D micro-cluster

7. Configuration of Nested Resiliency and creation of Volumes

8. Manage the Thomas-Krenn Micro Node Cluster via Windows Admin Center

   Installation of the Windows Admin Center

   Addition of the Thomas-Krenn S2D micro-cluster

   Installation of the Thomas-Krenn-Extension for the S2D Micro-Cluster

   Manage the S2D Micro-Cluster via the Windows Admin Center and the Thomas-Krenn-Extension

1. S2D Micro-Cluster from Thomas-Krenn


The S2D Micro-Cluster from Thomas-Krenn is a highly available 2-node system that offers amazing performance with a very compact form factor. Despite the compact form factor, the system is fully redundant, so that even the failure of a complete node can be compensated. The S2D micro-cluster system is based on powerful hardware that has been fully certified by Thomas-Krenn. Fully certified means that Thomas-Krenn has the Azure Stack HCI certification for the S2D micro-cluster. Azure Stack HCI is Microsoft's certification program for solutions based on Windows Server 2019 and Storage Spaces Direct (S2D). The extensive and standardized certification test for Azure Stack HCI certification ensures that the certified solution meets all of Microsoft's quality criteria. Windows Server 2019 Datacenter is used as the software stack, so the powerful features of Storage Spaces Direct (S2D) and Hyper-V are available. Through the use of Windows Server 2019 Datacenter, there are unlimited virtual Windows Server usage rights on the nodes of the S2D Micro-Cluster.

Not only Windows Server 2019, but also any previous Windows Server version may be used in the virtual machines (VMs) - Windows Server also includes a downgrade right to any previous version in the OEM versions.

The new Windows Admin Center from Microsoft is used to manage the S2D Micro-Cluster. And here Thomas-Krenn.AG has a very special highlight to offer!

Thomas-Krenn.AG is the first manufacturer from Germany to offer an extension for Microsoft's Windows Admin Center in order to be able to manage the hardware of the S2D micro-cluster in addition to the pure operating system components.

2. Storage Spaces Direct in Windows Server 2019 Datacenter for setting up a Hyper-Converged Solution

Storage Spaces Direct (S2D) is included in Windows Server 2016 Datacenter and Windows Server 2019 Datacenter. The S2D micro-cluster from Thomas-Krenn is based on Windows Server 2019 Datacenter and is also certified for the use of this version. In addition, the licenses are already included with the purchase of the 2-node system. Industry standard servers with locally and directly connected drives are used for S2D in order to set up a high-availability, high-performance and scalable software-defined storage system. The cost is a fraction of the cost of a typical SAN system.

By using industry standard servers, the latest technologies such as powerful new CPUs, fast network cards with 100 Gbit/s port speed and more, NVMe SSDs and persistent memory can be implemented very quickly after market launch.

S2D offers powerful caching and tiering algorithms to optimize the performance of Storage Spaces Direct. Storage Spaces Direct can be configured with at least two servers - a maximum of 16 nodes are supported in an S2D cluster.

Illustration : Scalability of S2D clusters (source: docs.microsoft.com)

In this whitepaper we will focus on Thomas-Krenn's S2D micro-cluster with two nodes. As you can see in the figure above, S2D is extensively scalable and therefore also suitable for very large workloads and storage capacities. If your requirements are higher than those that can be realized with the S2D Micro-Cluster, Thomas-Krenn.AG can also offer you larger and more scalable server solutions for Storage Spaces Direct.

Storage Spaces Direct can be flash-only or hybrid, i.e. operated with a combination of flash drives and traditional hard drives. The S2D micro-cluster solution relies on a combination of SSD drives and hard drives - very good performance values can be achieved with cost-optimized capacity at the same time. The SSD drives in the S2D micro-cluster are automatically used as a cache.

Illustration : Cache and Capacity Drives with Storage Spaces Direct (source: docs.microsoft.com)

The advantages are obvious when you look at the performance values in relation to the costs of the different drive types. With a classic hard drive, sequential data transfer rates of 150 MB/s are possible. With random access, such an HDD achieves 150 IOs per second, and that at comparatively low costs per gigabyte of storage space. An SSD offers a possible sequential throughput of 600 MB/s - that means it has four times the performance of a classic rotating hard drive. It gets interesting with the IOs. An SSD enables up to 60,000 IOs per second - that is up to 400 times the performance of a hard disk. Especially with many parallel accesses, such as is the case with virtualization, IO performance plays a major role. However, the higher performance of the SSD has its price per gigabyte of storage space. Since Storage Spaces Direct in the S2D Micro-Cluster uses the SSD drives as a cache, the workload benefits from the performance values of these drives. A sequential workload is created to move the data from the cache to the capacity level. The framework conditions of the traditional hard disk - namely a good throughput for sequential write and read operations - are thus optimally used.

Microsoft recommends planning at least 10% of the capacity on the capacity level (corresponds to the capacity of the hard disks in the S2D Micro-Cluster) as cache. In the S2D micro-cluster from Thomas-Krenn.AG, this has already been taken into account in the configuration variants offered. Thomas-Krenn has also taken into account the other requirements for the drives for the S2D Micro-Cluster in the configurations offered and put them together optimally. In addition to the minimum requirement of 10% cache capacity compared to the usable storage capacity, the flash drives must also meet minimum service life requirements. Microsoft specifies a Drive-Writes-per-Day value of at least 3 (3 DWPD) for the cache drives.
Storage Spaces Direct as a technology offers the option of using HDDs and SSDs as well as NVMe drives and the new Persistent Memory (PMEM). This enables even higher performance values, but also at higher costs. The S2D Micro-Cluster of Thomas-Krenn.AG is optimized for the use of SSDs and HDDs - if you want to use NVMe drives or PMEM, Thomas-Krenn.AG also offers rack servers for Storage Spaces Direct, which can provide all storage technologies supported by S2D - hybrid and flash-only.

The term "cache" is often associated with volatile memory. With Storage Spaces Direct, non-volatile data center flash drives are used as cache, which even have additional buffering for their internal cache. This means that there is no loss of data in the drive's cache either. In addition, at least two cache drives are built into each server. In this way, the server can continue to work with the remaining cache drive even if one cache drive fails.

Illustration: 2-way mirror at Storage Spaces Direct

The picture above shows a 2-way mirror. The figure shows very nicely that the data redundancy is created across servers. Should a server fail completely, all data including the cache data are still available on the remaining server.

If a cache drive fails in one of the servers, all hard drives in this server are assigned to the remaining cache drive and the server continues to work in the S2D cluster.

3. Azure Stack HCI - the Microsoft-certified solutions for Storage Spaces Direct

A careful selection of the hardware components is required to use Storage Spaces Direct. There are special requirements for connecting the HDDs, SSDs and NVMe drives to the server. Through the clever selection of network cards, the performance can be increased considerably, and of course the components not only have to be individually certified, but also have to function perfectly in combination. Microsoft defines the hardware requirements very precisely. All components must be Windows Server 2019 certified and they should be listed in the Windows Server Catalog with the Software-Defined Data Center (SDDC) certification.

Illustration : S2D micro-cluster from Thomas-Krenn.AG in the Windows Server Catalog



However, the SDDC certification is a certification of the individual components such as server, NIC, HBA etc. The SDDC certification does not guarantee that the individual certified components will work together in every constellation. That is why Microsoft has certified the entire solution: Azure Stack HCI is Microsoft's certification program for solutions for the use of Storage Spaces Direct on Windows Server 2019.

Illustration : Azure, Azure Stack and Azure Stack HCI (https://www.microsoft.com/hci)

What exactly does Azure Stack HCI mean and how is it related to Microsoft Azure? Microsoft Azure is Microsoft's public cloud service. Extensive IAAS and PAAS projects can be implemented in the global Microsoft Azure data centers. Microsoft Azure Stack is the solution for customers who want to ensure the full compatibility of their services with Microsoft Azure, whereby the workload can be carried out in a customer's own data center. A so-called Azure Stack Scale Unit consists of at least four servers with infrastructure. Microsoft Azure Stack is delivered to customers as an appliance, i.e. direct access by the customer, e.g. to the Windows desktop or the Windows console of the Windows Server nodes of the Azure Stack Scale Unit, is not possible. The web-based Azure/Azure Stack portal is available for this instead. Technically, Windows Server with Storage Spaces Direct runs on the nodes of an Azure Stack Scale Unit.

Azure Stack HCI is the certified solution for Storage Spaces Direct. Many of the requirements for certification are identical for Azure Stack and Azure Stack HCI. For Azure Stack HCI certification, the server manufacturer must configure at least two of its server nodes with Windows Server 2019 and Storage Spaces Direct and subject them to a standardized stress and quality test lasting several days. Extensive tests of functionality and performance are carried out. A wide variety of error and failure scenarios are also simulated. If these tests have been carried out successfully, a listing of the system in the Azure Stack HCI Catalog can be requested from Microsoft. Thomas-Krenn.AG has certified different configurations, which can be seen in the following illustration.

Illustration: Azure Stack HCI solutions from Thomas-Krenn AG (https://www.microsoft.com/en-us/cloud-platform/azure-stack-hci-catalog?Hardware-partners=Thomas-Krenn.AG)

If you as a customer plan to use Storage Spaces Direct, the urgent recommendation is to use an Azure Stack HCI certified solution. This is the only way to ensure that the manufacturer has verified the system in productive operation. As a customer, you can therefore assume that the configuration of Storage Spaces Direct will be successful. With an Azure Stack HCI certified solution, however, the installation is carried out by the customer. You also have full access to the Windows components of the servers used. Thomas-Krenn.AG also optionally offers you a complete installation of S2D on the Azure Stack HCI certified systems.

4. Micro-Node-Cluster from Thomas-Krenn as Azure Stack HCI certified solution

As described in the previous section, the S2D micro-cluster systems from Thomas-Krenn.AG are fully Azure Stack HCI certified. Thomas-Krenn.AG clearly distinguishes itself from providers who only use SDDC-certified components. Microsoft writes the following on its website:

"For production, Microsoft recommends purchasing a validated hardware/software solution from our partners, which include deployment tools and procedures. These solutions are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly. For Windows Server 2019 solutions, visit the Azure Stack HCI solutions website." (Source: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-hardware-requirements)

The right solution for the planned workload can be selected via the Azure Stack HCI Catalog: https://www.microsoft.com/en-us/cloud-platform/azure-stack-hci-catalog?Hardware-partners=Thomas-Krenn.AG

5. What about the redundancy in such a compact 2-node solution?

Storage Spaces Direct offers different resiliency options for the volumes in Storage Spaces Direct. In an S2D cluster with three or more nodes, 3-way mirroring can be configured. When using 3-way mirroring, two complete servers can fail without data loss. Dual parity is also available from four servers in the S2D cluster. Even when using dual parity, two independent, simultaneous errors can occur without data loss occurring. Dual parity offers higher storage efficiency, but requires additional CPU power for the parity calculation. So-called multi-resiliency volumes can be created through the combination of 3-way mirror and dual parity.

With S2D under Windows Server 2016, there was only two-way mirroring as a resiliency option for 2-node S2D clusters. One or more hard drives or cache drives in a server were allowed to fail without data loss or downtime. The failure of a complete server could also be compensated. However, two simultaneous and independent errors led to downtime. In the event of a failure of just one hard drive in a server, another hard drive could not fail at the same time on the other server.

In this point in particular, Microsoft has greatly improved S2D in Windows Server 2019 for 2-node solutions. Thanks to the new nested resiliency, two independent simultaneous errors can occur without the risk of downtime or data loss.

As the figure above shows, hard drives can fail simultaneously in both servers with configured nested resiliency without data loss occurring. The failure of a complete server and a further hard drive in the remaining server can now be compensated. In the two scenarios shown in the previous figure, all volumes that have been configured with nested resiliency remain online and the workload can therefore continue to run.

Illustration: Nested two-way Mirror (source: docs.microsoft.com)

When creating volumes in the Storage Spaces Direct pool, the capacity of the volume is divided into so-called slabs. A slab is 256 MB in size. For each slab, a defined number of copies is created based on the defined redundancy level, which are distributed over the so-called fault domains. If, for example, a volume with 2-way mirroring is configured for S2D with two servers, then exactly one copy is created on the other server for each slab that was created. With the nested two-way mirror, a copy is created on the local server on a different disk for each slab that is generated. In addition, a copy of the slab is created on the second server, which has a further copy on another disk on this second server. There are ultimately a total of four copies for each slab. Ultimately, this also means that when volumes are configured with nested two-way mirror, only 25% of the available capacity can be used for data storage. The so-called footprint of a volume is four times its usable capacity.

Illustration : Nested mirror-accelerated Parity (source: docs.microsoft.com)

With nested mirror-accelerated parity, part of the capacity is created as a two-way mirror when the volume is created. The rest of the capacity is configured as parity (comparable to RAID5). All slabs of this combined volume are also mirrored on the second server. In this configuration, the data is first written from the cache to the first storage level, which is configured as a mirror. The advantage of the mirror level is that hardly any additional CPU load is generated. Within the capacity level, S2D can migrate the data from the mirror tier to the parity tier. With this migration, however, the parity has to be calculated, which leads to an additional CPU load. The higher storage efficiency of the nested mirror-accelerated parity must therefore be weighed against the higher CPU load.

In most cases, the recommendation will go in the direction of the nested two-way mirror, since drives with higher capacities are typically less expensive than the computing power required for parity calculation.

It should be noted that the nested resiliency for S2D in Windows Server 2019 can only be used for 2-node S2D configurations.

6. Setting up the Thomas-Krenn S2D micro-cluster


So far, the initial setup was only possible via PowerShell, but Microsoft is constantly developing the Windows Admin Center. The preview version of the new Cluster Creation Tool comes with an installation wizard that enables Storage Spaces Direct to be installed in a particularly short time. A configuration of Storage Spaces Direct via the Windows Admin Center is currently not offered. The administration of an existing Storage Spaces Direct cluster from Thomas-Krenn is described in the following section.

This section describes the basic configuration of a Thomas-Krenn S2D Micro-Cluster. The recommendation, however, would be to use the installation service offered by Thomas-Krenn for the initial configuration. The configuration is then carried out by a specialist with extensive know-how in the area of Storage Spaces Direct.

After installing Windows Server 2019 Datacenter on both nodes, the following roles and
features must be installed:

• Hyper-V
• Failover Clustering

The Datacenter Bridging feature is optional, as iWARP is used for SMB traffic in Thomas-Krenn's S2D Micro-Cluster (a PowerShell sketch for installing the features follows below).

After installing Hyper-V and Failover Clustering, all available Windows updates should be installed and all required reboots should be performed. Both nodes of the S2D Micro-Node Cluster must be included in a common domain. This can be the existing Active Directory domain of the company if it is running at least the Active Directory functional mode "Windows Server 2008". Alternatively, Thomas-Krenn offers an additional infrastructure server that can be used for such tasks. It is also possible to operate the domain controller on the nodes of the S2D micro-cluster itself.

First, a Windows cluster is configured with the two servers of the S2D micro-cluster. The next steps are carried out in the Failover Cluster Manager. Alternatively, all of the actions described can also be carried out using PowerShell. Before the cluster can be configured, the future cluster nodes must be validated. The cluster validation is started via the Failover Cluster Manager. After adding the cluster nodes to the cluster validation assistant, make sure that the requirements for Storage Spaces Direct are also checked - this option must be selected manually.

Illustration : Test selection in the cluster validation

After the nodes have been validated, the Windows cluster can be configured. As part of the cluster configuration, a DNS name for the cluster and an IP address must be specified. It should be noted that the IP address of the cluster can only be configured in the subnet for which a standard gateway is entered on the physical interfaces of the nodes.

After the Windows cluster has been successfully configured, a "Witness" must be configured.
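As a sketch, the validation and the cluster creation could also be carried out in PowerShell; the node names, the cluster name and the IP address are placeholders and must be adapted:

# Validate both nodes, including the test category for Storage Spaces Direct
Test-Cluster -Node "node01", "node02" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the Windows cluster with a DNS name and a static IP address, without adding storage
New-Cluster -Name "mncluster01" -Node "node01", "node02" -StaticAddress "192.168.1.50" -NoStorage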

In the case of an error, the Windows cluster is about being able to form a majority in order to avoid so-called split-brain situations. In order to form the majority of the votes described, each cluster node has one vote. The witness also has a vote. So there are a total of three votes in the two-node cluster.

Illustration: Votes in the two-node cluster (source: docs.microsoft.com)

Should a node in the two-node cluster fail, the remaining node checks whether it can reach the witness. If the witness can be reached, the remaining node and the witness have two of a total of three votes and can thus form a majority of the votes. The remaining server would handle the entire workload in the cluster.

Provision is also made for the scenario in which one of the two nodes is only isolated on the network side. If this scenario occurs, both server nodes would try to reach the witness. The node isolated on the network side will not reach the witness - this node has one of a total of three votes and therefore cannot form a majority. All workload on this isolated server would automatically end. When this happens can be configured using the appropriate parameters in PowerShell. When the second server node reaches the witness, it can form a majority with the witness (two out of three votes). In this scenario, all clustered roles and services (the VMs in the Hyper-V cluster) would be started on the second server node. Depending on the configured values, this happens either immediately or after a certain waiting time.

If the two server nodes cannot reach each other and neither of the server nodes can reach the witness, all clustered services are terminated on both cluster nodes. Three simultaneous errors cannot be compensated for in the two-node cluster.

It is often said that a witness is only needed in a cluster with an even number of nodes. However, this is not correct. A witness should also be configured in a cluster with an odd number of nodes. Using the so-called dynamic quorum, the cluster can give or withhold the witness's vote as required. In the case of a 3-node cluster with a configured witness, the witness has no vote during normal operations. If a node fails, the witness is given a vote so that the cluster can compensate for the failure of another node.

The term "quorum" is often confused with the witness. "Quorum" means "majority". As described above, the cluster is always about forming a majority (the quorum); the majority is formed from the votes of the cluster nodes and the vote of the witness.

In earlier Windows versions, the concept of majority formation in the cluster was different from the one described above - before Windows Server 2008 there was a quorum disk, which was solely responsible for forming the majority. This has changed since Windows Server 2008 and behaves as described above.

The witness can be configured using PowerShell or the Failover Cluster Manager. The wizard for configuring the "Cluster Quorum Settings" can be started in the Failover Cluster Manager by right-clicking on the name of the cluster and selecting the "Configure Cluster Quorum Settings" option.

As iSCSI or Fibre Channel targets are no longer allowed to be used on the cluster nodes after the activation of S2D, the disk witness is no longer a possible option.

Illustration: Options for witness configuration in Windows Server 2019

For S2D clusters, the recommended option would be the File Share Witness. Such a file share witness must be configured on an SMB3 target. Alternatively, it would also be possible to use a witness via Microsoft Azure. It is completely new in Windows Server 2019 to provide the witness via a USB stick shared on the network - this option can only be configured via PowerShell. The USB stick is not provided by one of the server nodes, but by a central device in the network, e.g. a router with a USB port.

The following figure shows the configuration in the Failover Cluster Manager with a file share witness on a Windows server target in the network.
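A file share witness can, for example, also be configured with a single PowerShell command; the share path and the Azure storage account in the commented alternative are placeholders:

# Configure a file share witness on an SMB share in the network
Set-ClusterQuorum -FileShareWitness "\\fileserver01\S2D-Witness"

# Alternative: cloud witness in Microsoft Azure
# Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-key>"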

After setting up the Windows cluster and configuring the file share witness, it is advisable to check the disks that are planned for use for Storage Spaces Direct. This is possible, for example, in the Server Manager under "Server Manager\File and Storage Services\Volumes\Storage Pools". Select "Primordial" for the respective node to see the disks for this server under "Physical Disks".
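The same check can be done from PowerShell; a short sketch that lists the disks that are eligible for the S2D pool:

# List all physical disks with media type, size and pool eligibility (CanPool)
Get-PhysicalDisk |
    Sort-Object MediaType |
    Select-Object FriendlyName, SerialNumber, MediaType, CanPool, @{n="SizeGB";e={[math]::Round($_.Size/1GB)}} |
    Format-Table -AutoSize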

Illustration : Physical Disks in the Server Manager



In the Azure Stack HCI certified S2D Micro-Cluster systems, the disk layout always corresponds to that shown above - the capacities of the drives can, however, vary. The smallest SSD in the system is for the operating system - this is already installed there. The two larger SSDs are used as cache drives after the configuration of S2D. The four hard disks (HDD) form the capacity storage tier after S2D has been configured. If the configuration shows fewer drives, it should be checked that all drives are correctly connected to the server. If a drive with an unknown Media Type is listed, this indicates a problem with the firmware. Since the S2D micro-cluster of Thomas-Krenn.AG has passed the extensive Azure Stack HCI certification, it can be assumed that no problems will arise in this direction.

After the drives have been checked on both servers, S2D can be configured. A PowerShell session with administrative rights is required for this. I recommend starting the PowerShell ISE with administrative rights.

Illustration: Enable-ClusterS2D in the PowerShell ISE

"Enable-ClusterS2D" activates Storage Spaces Direct and automatic configuration, it is essential to take a look at
configures the storage pool for S2D. The S2D pool the specified log file. It is important that no errors are
contains all disks that are suitable for S2D - with the listed there and that the number and type of discs are
exception of the OS disks. In addition, the storage tiers correct.
are created automatically. After completing the
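A minimal sketch of the activation and a subsequent check of the automatically created pool (the exact parameters used in the screenshot above may differ):

# Activate Storage Spaces Direct on the cluster (run in an elevated PowerShell session)
Enable-ClusterS2D -Confirm:$false

# Check the automatically created S2D pool and the claimed disks
Get-StoragePool -FriendlyName "S2D*" | Select-Object FriendlyName, Size, AllocatedSize, HealthStatus
Get-StoragePool -FriendlyName "S2D*" | Get-PhysicalDisk | Group-Object MediaType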

Illustration : Disks claimed summary in Enable-ClusterS2D Log



The number of disks in the S2D Micro-Cluster must correspond to the above list; the "Friendly Names" may differ, as there are different configurations of the S2D Micro-Cluster with different capacities. Two drives per server are listed as "Disks used for cache -> True" and four of the drives as "Disks used for cache -> False", which means that these four drives are used for the capacity storage level.

A check of the disks in the Server Manager under "Server Manager\File and Storage Services\Volumes\Storage Pools" shows a comparable result when the storage pool "S2D on CLUSTERNAME" is selected.

Illustration : Disks in the S2D pool in the Server Manager

It is important that there is only one pool with Storage Spaces Direct! No further storage pools may be created in the S2D cluster! All disks that are used for S2D must be in the one central storage pool. The capacities are divided up using volumes. Several volumes with different resiliency options can be created - more on this in the following sections.

7. Configuration of Nested Resiliency and creation of Volumes


In the 2-node S2D cluster, only two-way mirroring was available under Windows Server 2016. With Windows Server 2019 there is a new nested resiliency option. Using nested resiliency, two independent and simultaneous errors can occur without data loss or downtime occurring - e.g. one hard disk fails in each of the two servers at the same time. Even if a complete server fails, an additional hard disk may fail on the remaining server. It should be noted, however, that two-way mirroring is also active as standard in Windows Server 2019. The nested resiliency currently has to be configured manually.

First of all, the currently available storage tiers should be queried via PowerShell and the Get-StorageTier command.
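A minimal query, for example:

# Query the storage tiers that were created automatically by Enable-ClusterS2D
Get-StorageTier |
    Select-Object FriendlyName, ResiliencySettingName, MediaType, NumberOfDataCopies, PhysicalDiskRedundancy |
    Format-Table -AutoSize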

In the two-node cluster under Windows Server 2019, the output looks like this:

Illustration: List of default storage tiers in the S2D micro-cluster

In the two-node S2D cluster under Windows Server 2019, two storage tiers are automatically created - "Capacity" and "MirrorOnHDD". Under Windows Server 2016 there was only one storage tier with the name "Capacity" in the two-node S2D cluster. Under Windows Server 2019, Microsoft has switched to specifying the resiliency type and the drive type in the name of the storage tier. This improves the clarity. "MirrorOnHDD" means that there is a storage tier with two-way mirror on the hard drives. The SSDs are used entirely as a cache and are therefore not part of the capacity tier. The "Capacity" storage tier was created for downward compatibility. This means that the two storage tiers in the screenshot above offer identical resiliency and use the same physical storage. In the case of S2D clusters with more than two nodes, there are additional storage tiers that are not considered here.

As described in the chapter on the redundancy options in the 2-node cluster, it is recommended to select the nested two-way mirror. With the nested two-way mirror, a two-way mirror is created locally and again across the servers. The footprint of a volume is four times the usable capacity of the volume, but the advantages of double redundancy with a simultaneously lower load on the CPU outweigh this. In order to be able to use the nested two-way mirror, a new storage tier must first be created. This is done via PowerShell, as shown in the screenshot below.

Illustration: Creation of the new storage level for nested two-way mirror

For an easier transfer to the PowerShell ISE, the syntax for creating the new storage tier is reproduced here in full:

# Create Storage Tier for Nested Mirror
New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedMirror `
-ResiliencySettingName Mirror -MediaType HDD -NumberOfDataCopies 4

The value "4" for "NumberOfDataCopies" means that a local copy and two copies on the second server are created for each slab. Hard disks may fail on both servers at the same time, and if one server fails or is serviced, one hard disk may fail in the remaining server.

The result after querying Get-StorageTier now looks like this:

Illustration: Storage tiers after creating the additional storage level for nested two-way mirror.

Volumes can now be created with the new resiliency level nested two-way mirror. The following chapter describes the administration of the S2D Micro-Cluster with the Windows Admin Center. In the current version 1904 of the Windows Admin Center, volumes with nested resiliency can be managed, but not created. PowerShell must therefore be used to create volumes with nested resiliency.

The following command creates a new volume with the name "Volume01" and nested two-way mirror as the resiliency level:

Illustration : PowerShell ISE to create a new volume

The following is the syntax for creating a new volume with a nested two-way mirror so that it can be copied and
pasted directly:
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01 `
-StorageTierFriendlyNames NestedMirror -StorageTierSizes 1000GB

Microsoft recommends creating a number of volumes that is an integer multiple of the number of S2D cluster nodes. Two or four volumes should therefore be created in the 2-node S2D cluster. Although all nodes always have read and write access to all volumes, the aforementioned configuration in the 2-node cluster leads to better possible performance than with one or three volumes.

The syntax for creating another volume is the same as the previous one - only a different name is selected as the
volume label:
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume02 `
-StorageTierFriendlyNames NestedMirror -StorageTierSizes 1000GB

The values for "StorageTierSizes" (corresponds to the capacity of the volume that is being created) must be adapted
to the respective existing capacities and the existing requirements.
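To determine how large a volume on the new NestedMirror tier can be made with the existing pool capacity, the supported tier sizes can be queried beforehand - a sketch:

# Query the minimum and maximum supported size of the NestedMirror tier
Get-StorageTierSupportedSize -FriendlyName NestedMirror |
    Select-Object @{n="MinGB";e={[math]::Round($_.TierSizeMin/1GB)}},
                  @{n="MaxGB";e={[math]::Round($_.TierSizeMax/1GB)}}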

8. Manage the Thomas-Krenn Micro Node Cluster via Windows Admin Center

Windows Admin Center is the new, web-based administration interface for Windows Server, which is offered free of charge by Microsoft. The Windows Admin Center is typically installed on a management or infrastructure server and accessed via a compatible web browser. Such an infrastructure server is offered by Thomas-Krenn as part of the S2D Micro-Cluster ordering process. Alternatively, the Windows Admin Center can also be installed on a Windows 10 client PC from version 1803. The Windows Admin Center page is then called up via the local web browser on the Windows 10 client PC.

In this section we are working with the current Windows Admin Center version 1904 and the extension specially developed for the S2D Micro-Cluster.

Installation of the Windows Admin Center


The Windows Admin Center can be downloaded free of charge from Microsoft under the following link:

https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/understand/windows-admin-center

Installation on a management server is recommended. In the following steps it is assumed that the Windows Admin Center is installed on a virtual or physical Windows Server 2019, which is used as a management server. As described above, there are further installation options for the Windows Admin Center, which are described in detail at Microsoft: https://docs.microsoft.com/en-us/windows-server/manage/windows-admin-center/deploy/install

After downloading the Windows Admin Center, it can be installed on the planned management server. The option "Allow Windows Admin Center to change the settings of this computer's trusted hosts" should be selected. If this checkbox is not activated, each server that is to be managed via the Windows Admin Center must be added manually to the list of trusted hosts.

Illustration: Configuration of the trusted Hosts in the Windows Admin Center setup

The settings for secure communication are another important step in the
Windows Admin Center setup.

Illustration: Settings for Certificates in the Windows Admin Center setup

The recommendation is to use an SSL certificate installed on the computer. This certificate can come from an internal or external trustworthy certification authority. The certificate must be installed in advance via the certificate management on the computer on which the Windows Admin Center is installed. A self-signed SSL certificate can be created as part of the setup for test purposes. The self-signed certificate created by the setup is valid for 60 days. The certificate used by the Windows Admin Center can be exchanged in the Control Panel in the "Software" area by calling up the change setup for the Windows Admin Center.

After the installation of the Windows Admin Center is complete, the setup displays the URL via which the Windows Admin Center website can be accessed.

Illustration: Access URL in Windows Admin Center setup



Addition of the Thomas-Krenn S2D micro-cluster


After opening the Windows Admin Center website for the first time, the Thomas-Krenn S2D Micro-Cluster must be added in order to be able to manage it. Clicking the "+" on the Windows Admin Center start page starts the wizard for adding new connections.

Illustration: Windows Admin Center start page in the web browser

"Hyperconverged cluster" must be selected on the specified. In addition, it is recommended to select the
right side of the screen. In addition, the DNS name of option "Also add servers in the cluster" to add the two
the S2D micro-cluster in the domain network must be S2D micro-cluster nodes to the list of connections.

After successful addition, both the S2D Micro-Cluster itself and the two server nodes are displayed in the
Connections list.

Installation of the Thomas-Krenn-Extension for the S2D Micro-Cluster

One of the many highlights of the S2D micro-cluster developed by Thomas-Krenn.AG is the unique extension for the Windows Admin Center. In addition to the basic functionalities of the Windows Admin Center, extended functions can be used specifically for the S2D micro-cluster.

Illustration: Access to the settings of the Windows Admin Center

To install the Thomas-Krenn extension for the Windows Admin Center to manage the S2D Micro-Cluster, the Windows Admin Center settings page is opened via the gear icon on the Windows Admin Center page. All officially available extensions for the Windows Admin Center are displayed under "Extensions". The list of extensions is published by Microsoft. The Thomas-Krenn extension can also be found in the list and can be installed at no additional cost.

Illustration: Selection and installation of the Thomas-Krenn-Extension for the S2D Micro-Node Cluster

The Thomas-Krenn.AG extension is selected for installation. Clicking on "Install" will automatically install it. If you accidentally click on "Installed Extensions", a list of the already installed extensions will be displayed. In this case, you can easily switch back to "Available extensions" in order to install the Thomas-Krenn.AG extension.

Manage the S2D Micro-Cluster via the Windows Admin Center and the Thomas-Krenn-Extension

To manage the S2D Micro-Cluster from Thomas-Krenn.AG via the Windows Admin Center, it can simply be clicked on in the "All connections" list. In the following illustration, this is done by clicking on "mncluster01.mhdemolab.de".

Illustration: Access to the S2D Micro-Node Cluster via the Windows Admin Center

Initially, the "Dashboard" view of the Hyper-Converged Cluster Manager is opened.

Illustration: Windows Admin Center Dashboard view for the S2D Micro-Node Cluster
Information on current alerts is displayed there. There is also an overview of the servers, physical drives, virtual machines and volumes. In the lower area of the previous screenshot, the performance values for CPU usage and memory usage are shown.

By clicking on "Drives" in the list of tools, the overview page of the drives is opened first. The following illustration shows a 2-node S2D micro-node cluster with a total of 12 drives.

Illustration: List of drives for the S2D Micro-Cluster

There are no alerts in the previous illustration. However, warnings and critical states would also be displayed on this page if there were problems with the drives. The information in the "Capacity" area is interesting. You can read there how much of the physical storage is occupied by volumes - this view does not show the occupancy of the volumes themselves. On the right-hand side of the "Capacity" bar there is a hatched area - referred to as "Reserve" in the legend - in the above case with 7.28 TB.

It is very important to properly understand and properly account for the reserve capacity in Storage Spaces Direct. Storage Spaces Direct does not recognize hot spare drives. By definition, a hot spare drive is a drive which "steps in" if another drive fails in order to restore full redundancy. A good quality in itself, but quite inefficient, because the hot spare drive must be kept permanently available and cannot be used for the productive workload. Storage Spaces Direct, on the other hand, has a more efficient approach. All drives in the S2D pool are actively used. In order to be able to restore full redundancy in the event of a drive failure, it is possible to keep a certain reserve area free for rebuild activities on all drives. This can be illustrated very well with the following calculation example: with a configuration with two S2D micro-node cluster nodes and four capacity drives with 4 TB each per server, a reserve of approx. 4 TB would be created per server, which corresponds to 1 TB per hard disk (in this simplified view, it is assumed that the usable capacity of a 4 TB hard drive is actually 4 TB). If one of these hard disks should fail, this has no effect on the operation and data availability, since a corresponding redundancy was configured when the volumes were created (the failure of further hard disks or a complete server also has no effect on data availability, as the volumes were configured with the nested two-way mirror in one of the previous steps). Since only 3 TB were occupied by volumes on the failed hard drive, full redundancy can be restored with the three remaining drives and their reserve capacity of 1 TB each. And since this is done via a so-called parallel rebuild, this is also done very quickly.

It is very important to understand that keeping reserve capacity free is a recommendation. The storage space is not blocked by the system as a fixed reserve. The display of the reserve capacity in the Windows Admin Center helps to keep this reserve free when creating volumes. If no reserve is required for a local and parallel rebuild, this capacity can also be used to create volumes - which is not recommended, however!
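The calculation example above can be restated as a small sketch (simplified, with assumed example values):

# Simplified reserve calculation for a 2-node S2D Micro-Cluster with 4 x 4 TB capacity drives per node
$nodes = 2; $capacityDrivesPerNode = 4; $driveSizeTB = 4
$rawTB     = $nodes * $capacityDrivesPerNode * $driveSizeTB   # 32 TB raw capacity
$reserveTB = $nodes * $driveSizeTB                            # keep one drive's capacity free per node: 8 TB
$poolTB    = $rawTB - $reserveTB                              # 24 TB that should be used for volumes
$usableTB  = $poolTB / 4                                      # nested two-way mirror: footprint = 4x usable capacity
"$rawTB TB raw, $reserveTB TB reserve, $poolTB TB for volumes, $usableTB TB usable (nested two-way mirror)"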

Illustration : View detailed information on drives in the S2D Micro-Node Cluster

By clicking on "Inventory" a list with detailed information on the drives can be called up. These can be grouped
according to various criteria using the "Group" symbol on the right above the list. For example, using the servers, as
shown in the following screenshot.

Illustration : Grouping drives by server in Windows Admin Center

The list of drives shows detailed information such as serial number, model, size, usage, used capacity and status. The "Inventory" view of the drives is particularly important when it comes to identifying a failed drive. The Thomas-Krenn-Extension for the Windows Admin Center can provide even better support here. The Thomas-Krenn-Extension is explained in more detail on the following pages.

By selecting "Volumes" in the tools list, an overview of the existing volumes is displayed. By clicking on "Inventory", detailed information on the volumes is called up. The following screenshot shows the two previously created volumes "Volume01" and "Volume02" as well as the "ClusterPerformanceHistory" volume.

Illustration: Access to volumes in the Windows Admin Center

The "ClusterPerformanceHistory" volume is created automatically when Storage Spaces Direct is activated and saves information on the individual S2D components and the associated performance values. Thus, for example, in the dashboard views of the Windows Admin Center, not only the current activity, but also a history with a daily, weekly or monthly resolution can be displayed. A click on e.g. "Volume01" shows detailed information. The following screenshot shows the previously configured resiliency as a nested two-way mirror. It should be noted that this is displayed correctly in the Windows Admin Center version 1904, but new volumes with nested two-way mirror can only be created using PowerShell, as already described in one of the previous paragraphs. When creating volumes via the Windows Admin Center, it currently only allows two-way mirroring to be set up in a 2-node cluster. Since the nested two-way mirror has considerable advantages in terms of redundancy, the current recommendation would be to create volumes in a 2-node cluster with nested resiliency using PowerShell. The subsequent administration can then take place via the Windows Admin Center.

Illustration: Manage volumes in the Windows Admin Center

The volume details page also shows the size and the associated footprint. Here you can see very nicely that with the nested two-way mirror, four times the usable storage capacity is occupied by the volume due to the double redundancy. Advanced options such as the integrity checksum or data deduplication can also be configured here. Data deduplication can also be activated for volumes with the ReFS file system under Windows Server 2019 if the data deduplication role component is installed on all server nodes of the S2D Micro-Cluster.
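As a sketch, the role component and deduplication for one of the volumes could be enabled as follows; the volume path is an example:

# Install the deduplication role component (required on every node of the S2D Micro-Cluster)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication for a CSV volume with the Hyper-V usage type
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume01" -UsageType HyperV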
is not only a technical list of the components, but also a
At the beginning of the section on Windows Admin graphic representation and visualization of the
Center, the installation of the Thomas-Krenn Extension components corresponding to the original system.
for the S2D Micro-Cluster for the Windows Admin Center
was described. This extension now provides us with

Illustration: Thomas-Krenn-Extension for the S2D Micro-Nodes in the Windows Admin Center

The components shown are "active", which means, for object. The active components can also change colour in
example, that the drives or the network connections can order to provide a visual alert if there is a problem with
be clicked to call up further information on the respective one of the components.

Illustration: Cluster details in the Thomas Krenn Extension for the Windows Admin Center

If, for example, the "Cluster" component is clicked, further information on the cluster can be called up in the area under the S2D Micro-Cluster nodes.

In the screenshot below, a network error was simulated. The "Network" component is displayed in red, and since it is a simulated error with the network interface of the server on the left side of the screen, this is also displayed in red.

Illustration: Network with simulated error in the Thomas-Krenn-Extension for the Windows Admin Center

Drive errors and the failure of other system components such as fans are visualized in this way in order to simplify troubleshooting. The Windows Admin Center Extension from Thomas-Krenn.AG makes the S2D Micro-Cluster an easy-to-manage and uniquely compact and powerful system.
