STORAGE SPACES
For most companies, data is their lifeblood and service has to be available 24 hours a day, every day.
Hardware fails and can cause both loss of service and data loss. Server clustering can protect your
service from loss of a server, and hardware RAID, explained here, can protect you from disk failure.
Microsoft do server clustering, but they do not sell hardware, so they brought out Storage Spaces,
which is a bit like RAID, but implemented in software. You usually need at least three physical drives,
which you group together into a storage pool and then create virtual volumes called Storage Spaces.
These Storage Spaces contain extra copies of your data, so if one of the physical drives fails, you still have enough copies to read all of your data. As a bonus, your data is striped over
several disks, which improves performance.
One important restriction of Storage Spaces is that you cannot host your Windows operating system
on it.

There are three major ways to use Storage Spaces:

On a Windows 10 PC
On a single stand-alone server
On a clustered server using Storage Spaces Direct

The following is a list of new Storage Spaces Direct features that appeared with Windows Server
2019.

Deduplication and compression for ReFS volumes
Native support for persistent memory
Nested resiliency for two-node hyper-converged infrastructure at the edge
Two-server clusters using a USB flash drive as a witness
Windows Admin Center support
Performance history
Scale up to 4 PB per cluster
Mirror-accelerated parity is 2X faster
Drive latency outlier detection
Manually delimit the allocation of volumes to increase fault tolerance

Resilience Levels

Storage Spaces has three basic levels of resilience, or the ability to cope with hardware failure.

Simple Resilience, equivalent to RAID0, just stripes data across physical disks. It requires at least one physical disk, and can improve performance but has no resilience. Yes, that's right: if you lose a disk you lose all your data. Simple Resilience has no capacity overhead. It is basically only suitable for temporary data that you can afford to lose, data that can be recreated easily if lost, or for applications that provide their own resilience.
Mirror Resilience, equivalent to RAID1, stores either two or three copies of the data across multiple physical disks. A two-way mirror implementation needs at least 2 physical disks, and you can lose one disk and still have the data intact on the other one. You will also lose at least half of your raw disk capacity. As the data is striped between the disks, there will be a performance benefit.
A three-way mirror needs at least five physical disks but it will protect from two simultaneous disk failures.
Mirror Resilience uses dirty region tracking (DRT) to track data updates to the disks in the pool. If the system crashes, then the mirrors might not be consistent. When the spaces are brought back online, DRT is used to synchronise the disks again.
Mirror Resilience is the most widely used Storage Spaces implementation.

Parity Resilience, equivalent to RAID5, stripes data and parity information across physical disks. It
uses less capacity than Mirror Resilience, but still has enough redundant data to survive a disk crash.
Parity Resilience uses journaling to prevent data corruption if an unplanned shutdown occurs.
It requires at least three physical disks to protect from single disk failure.
It is best used for workloads that are highly sequential, such as archive or backup.
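If you prefer the command line, you can list the resiliency settings that a storage pool supports, together with their defaults, using the Storage PowerShell module. A minimal sketch, assuming a pool called 'Pool01'; substitute your own pool name:

# List the resiliency settings (Simple, Mirror and Parity) available in the pool,
# together with their default settings such as the number of data copies
Get-StoragePool -FriendlyName "Pool01" | Get-ResiliencySetting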

Storage Spaces on a Windows 10 PC

To configure Storage Spaces on PC you need at least two extra physical drives, apart from the drive
where Windows is installed. These drives can be SAS, SATA, or even USB drives (but it seems to me
to be intuitively bad to use removable storage for resilience!), and can be hard drives or solid state
drives. Once you have your drives installed, go to

Control Panel > System and Security > Storage Spaces

Select 'Create a new pool and storage space'.


Select the drives you want to add to the new storage space, and then select 'Create pool'.
Give the drive a name and letter, and then choose a layout. See above for the meaning of Simple,
Two-way mirror, Three-way mirror, and Parity.
Enter the maximum size the storage space can reach, and then select 'Create storage space'.

Once you have Storage Spaces configured and working, you can add extra drives for more capacity. When you add a drive to your storage pool, you will see a check box, 'Optimize'. Make sure that box is checked, as that will optimise your existing data by spreading some of it over the new drive.
If you want to remove an existing drive from a storage pool, first you need to make sure you have enough spare capacity in the pool to hold its data. If there is enough capacity, go to the Storage
Spaces screen above, then select

Change settings > Physical drives

which will list all the drives in your pool. Pick out the one you want to remove then select

Prepare for removal > Prepare for removal

Windows will now move all the data off the drive, which could take a long time, several hours if you
are removing a big drive. If you are running from a laptop, leave it plugged in, and change the Power
and Sleep settings so the PC does not go to sleep when plugged in. Once all the data is migrated, the
drive status will change to 'Ready to remove' so select

Remove > Remove drive
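The same housekeeping can be scripted with the Storage cmdlets, which is handy if you look after several machines. A minimal sketch, assuming a pool called 'MyPool' and an illustrative drive name; substitute your own:

# Add any new, unpooled drives to the existing pool
$newDrives = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "MyPool" -PhysicalDisks $newDrives

# Rebalance existing data over all the drives in the pool (the GUI 'Optimize' option)
Optimize-StoragePool -FriendlyName "MyPool"

# Retire a drive, move its data onto the remaining drives, then remove it from the pool
# (equivalent to the GUI 'Prepare for removal' and 'Remove drive' steps)
$oldDrive = Get-PhysicalDisk -FriendlyName "ATA OldDrive01"
Set-PhysicalDisk -InputObject $oldDrive -Usage Retired
Get-VirtualDisk | Repair-VirtualDisk
Remove-PhysicalDisk -StoragePoolFriendlyName "MyPool" -PhysicalDisks $oldDrive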


Storage Spaces On a single stand-alone server

To create a storage space, you must first create a storage pool, then a virtual disk, then a volume.

CREATE A STORAGE POOL

A storage pool is a collection of physical disks. There are restrictions on what type of disks and disk
adaptors can be used. Check with Microsoft for up to date requirements, but in summary the disks
need to be:

Disk bus types can be SAS, SATA and also USB drives, but USB is not a good idea in a server
environment. iSCSI and Fibre Channel controllers are not supported.
Physical disks must be at least 4 GB and must be blank and not formatted into volumes.
HBAs must be in non-RAID mode with all RAID functionality disabled

The minimum number of physical disks needed depends on what type of resilience you want. See the
resilience section above for details. Once you have your physical disks connected, go to

Server Manager > File and Storage Services > Storage Pools

This will show a list of available storage pools, one of which should be the Primordial pool. If you can't
see a primordial pool, this means that your disks do not meet the requirements for Storage Spaces. If
you select the Primordial storage pool, you will see a listing of the available physical disks.
Now select STORAGE POOLS > TASKS > New Storage Pool, which will open the 'New Storage Pool'
wizard.
Follow the wizard instructions, inputting the storage pool name, the group of available physical disks
that you want to use, and then select the check box next to each physical disk that you want to
include in the storage pool. You can also designate one or more disks as hot spares here.
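The same pool can be created from PowerShell rather than the wizard. A minimal sketch, assuming the pool name 'Pool01' and that every poolable disk should be included; the spare disk name is illustrative:

# Find the storage subsystem and all disks that are eligible for pooling
$subSystem = Get-StorageSubSystem -FriendlyName "Windows Storage*"
$poolDisks = Get-PhysicalDisk -CanPool $true

# Create the pool from those disks
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName $subSystem.FriendlyName `
    -PhysicalDisks $poolDisks

# Optionally designate a disk as a hot spare
Get-PhysicalDisk -FriendlyName "ATA SpareDisk01" | Set-PhysicalDisk -Usage HotSpare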

CREATE A VIRTUAL DISK

Next, you need to create one or more virtual disks. These virtual disks are also referred to as storage
spaces and just look like ordinary disks to the Windows operating system.

Go to Server Manager > File and Storage Services > Storage Pools > VIRTUAL DISKS > TASKS list
> New Virtual Disk and the New Virtual Disk Wizard will open. Follow the dialog, selecting the storage
pool, then entering a name for the new virtual disk. Next you select the storage layout. This is where
you configure the resiliency type (simple, mirror, or parity) and on the 'Specify the provisioning type'
page you can also pick the provisioning type, which can be thin or fixed.
Thin provisioning uses storage more efficiently as it is only allocated as it is used. This lets you
allocate bigger virtual disks than you have real storage, but you need to keep a close eye on the
actual space they are using on an on-going basis, otherwise your physical disks will run out of space.
With fixed provisioning you allocate all the storage capacity at the time a virtual disk is created, so the
actual space used is always the same as the virtual disk size.
You can create both thin and fixed provisioned virtual disks in the same storage pool.

The next step is to specify how big you want the virtual disk to be, which you do on the 'Specify the size of the virtual disk' page. The thing is, unless you are using a Simple storage layout, your virtual disk will need more physical space than the size of the disk, as there is an overhead for resilience. You have a choice here:

You can work out how much space your virtual disk will actually use, and see if it will fit into the storage pool.
You can select the 'Create the largest virtual disk possible' option, so that if the disk size you pick is too large, Windows will reduce the size so it fits.
You can select the 'Maximum size' option, which will create a virtual disk that uses the maximum capacity of the storage pool.
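The equivalent PowerShell is the New-VirtualDisk cmdlet. A sketch, assuming the pool 'Pool01'; the disk names and size are illustrative:

# A thin provisioned, two-way mirror virtual disk of 500 GB
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "MirrorDisk01" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
    -ProvisioningType Thin -Size 500GB

# A fixed provisioned parity virtual disk that uses all the remaining pool capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityDisk01" `
    -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize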

CREATE A VOLUME

You can create several volumes on each virtual disk. To create a new volume, right-click the virtual
disk that you want to create the volume on from the VIRTUAL DISKS screen. Now select 'New
Volume' which will open the 'New Volume' wizard.

Follow the Wizard dialog, picking out the server and the virtual disk on which you want to create the volume. Enter your volume size, and assign the volume either a drive letter or a folder path. You also select the kind of file system you want, either NTFS or ReFS, then optionally the allocation unit size and a volume label.

Once the volume is created, you should be able to see it in Windows Explorer.
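If you would rather script this step, a single PowerShell pipeline initialises the underlying disk, partitions it and formats the volume. A sketch, assuming the virtual disk 'MirrorDisk01' from the previous step:

# Bring the virtual disk online as a GPT disk, create a partition over all of its space,
# assign the next free drive letter and format it as NTFS
Get-VirtualDisk -FriendlyName "MirrorDisk01" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data01" -Confirm:$false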

Storage Spaces Direct on a Windows Cluster

Clusters, Servers and Network

Storage Spaces Direct uses one or more clusters of Windows servers to host the storage. As is
standard in a Windows Cluster, if one node in the cluster fails, then all processing is swapped over to
another node in the cluster. The individual servers communicate over Ethernet using the SMB3
protocol, including SMB Direct and SMB Multichannel. Microsoft recommends that you use 10+ GbE
with remote-direct memory access (RDMA) to provide direct access from the memory of one server to
the memory of another server without involving either server's operating system.
The individual servers use the Windows ReFS filesystem as it is optimised for virtualization. ReFS
means that Storage Spaces can automatically move data in real time between faster and slower
storage, based on its current usage.
The storage on the individual servers is pulled together with a Cluster Shared Volumes file system,
which unifies all the ReFS volumes into a single namespace. This means that every server in the
cluster can access all the ReFS volumes in the cluster, as though they were mounted locally.
If you are using a converged deployment, then you need a Scale-Out File Server layer to provide remote file access using the SMB3 protocol to clients, such as another cluster running Hyper-V, over the network.

Storage

The physical storage is attached to, and distributed over, all the servers in the cluster. Each server must have at least 2 NVMe attached solid-state drives and at least 4 slower drives, which can be SSDs or spinning drives that are SATA or SAS connected, but must be behind a host-bus adapter (HBA)
and SAS expander. All these physical drives are in JBOD or non-RAID format. All the drives are
collected together into a storage pool, which is created automatically as the correct type of drives are
discovered. Microsoft recommends that you take the default settings and just have one Storage Pool
per cluster.

Although the physical disks are not configured in any RAID format, Storage Spaces itself provides
fault tolerance by duplicating data between the different servers in the cluster in a similar way to
RAID. The different duplication options are 'mirroring' and 'parity'.

Mirroring is similar to RAID-1 with complete copies of the data stored on different drives that are
hosted on different servers. It can be implemented as 2-way mirroring or 3-way mirroring, which requires twice as much, or three times as much, physical hardware to store the data. However, the data is not simply replicated onto another server. Storage Spaces splits the data up into 256 MB 'slabs', then writes 2 or 3 copies of each slab to different disks on different servers. A large file
in a 2-way mirror will not be written to 2 volumes, but will be spread over every volume in the pool,
with each pair of 'mirrored' slabs being on separate disks hosted on separate servers. The advantage of this is that a large file can be read in parallel from multiple volumes, and if one volume is lost, it can
be quickly reconstructed by reading and rebuilding the missing data from all the other volumes in the
cluster.
Storage Spaces Mirroring does not use dedicated or 'hot' spare drives to rebuild a failed drive. As the capacity is spread over all the drives in the pool, the spare capacity for a rebuild must also be spread over all the drives. If you are using 2 TB drives, then you have to maintain at least 2 TB of spare
capacity in your pool, so a rebuild can take place.

A Parity configuration comes in two flavours, Single Parity and Dual Parity, which can be considered equivalent to RAID-5 and RAID-6. You need some expertise in maths to fully understand how these work, but in simple terms, for single parity, the data is split up into chunks and then some chunks are combined together to create a parity chunk. All these chunks are then written out to different disks. If you then lose one chunk, it can be recreated by manipulating the remaining chunks and the parity chunk.
Single parity can only tolerate one failure at a time and needs at least three servers with associated
disks (called a Hardware Fault Domain). The extra space overhead is similar to three-way mirroring,
which provides more fault tolerance, so while single parity is supported, it would be better to use
three-way mirroring.
Dual parity can recover from up to two failures at once, but with better storage efficiency than a three
way mirror. It needs at least four servers, and with four servers you just need to double up on the amount of allocated storage, so you get the resilience benefits of three-way mirroring for the storage overhead of two-way mirroring. The minimum storage efficiency of dual parity is 50%, so to store 2 TB
of data, you need 4 TB of physical storage capacity. However, as you add more hardware fault
domains, or servers with storage, the storage efficiency increases, up to a maximum of 80%. For
example, with seven servers, the storage efficiency is 66.7%, so to store 4 TB of data, you need just 6
TB of physical storage capacity.
An advanced technique called 'local reconstruction codes' or LRC was introduced in Storage Spaces
Direct, where for large disks, dual parity uses LRC to split its encoding/decoding into a few smaller
groups, to reduce the overhead required to make writes or recover from failures.

The final piece in the Storage jigsaw is the Software Storage Bus. This is a software-defined storage
fabric that connects all the servers together so they can see all of each other's local drives, a bit like a
Software SAN. The Software Storage Bus is essential for caching, as described next.

Cache

What Microsoft calls a server-side cache is essentially a top-most disk tier, usually consisting of
NVMe connected SSD drives. When you enable Storage Spaces Direct, it goes out and discovers all
the available drives, then automatically selects the faster drives as the 'cache' or top tier. The lower
tier is called the 'capacity' tier. Caching has a storage overhead which will reduce your usable
capacity.
The different drive type options are:

All NVMe SSD; the best for performance, and if the drives are all NVMe then there is no cache. NVMe is a fast SSD protocol where the drives are attached directly to the PCIe bus.
NVMe + SSD; the NVMe drives are used as cache, and the SSD drives as capacity. Writes are staged to cache, but reads will be served from the SSDs unless the data has not been destaged yet.
All SAS/SATA attached SSD; there is no automatically configured cache but you can decide to configure one manually. If you run without a cache then you get more usable capacity.
NVMe + HDD; both reads and writes are cached for performance and data is destaged to the HDD capacity drives as it ages.
SSD + HDD; as above, both reads and writes are cached for performance. If you have a requirement for large capacity archive data, then you can use this option with a small number of SSDs and a lot of HDDs. This gives you adequate performance at a reasonable price.

When Storage Spaces Direct de-stages data, it uses an algorithm to de-randomise the data, so that
the IO pattern looks to be sequential even if the original writes were random. The idea is that this
improves the write performance to the HDDs.
It is possible to have a configuration with all three types of drive, NVMe, SSD and HDD. If you implement this, then the NVMe drives become a cache for both the SSDs and the HDDs. The system will only cache writes for the SSDs, but will cache both reads and writes for the HDDs.
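Once Storage Spaces Direct is enabled (see the deployment steps below), you can inspect the cache configuration and, if necessary, override it. A hedged sketch; the override value shown is only an example and should be changed with care:

# Show the current cache configuration, including the cache state and the cache
# modes chosen for SSD and HDD capacity drives
Get-ClusterStorageSpacesDirect

# Example override: cache both reads and writes for SSD capacity drives
Set-ClusterStorageSpacesDirect -CacheModeSSD ReadWrite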

Deploying Storage Spaces Direct

There are two different ways to implement Storage Spaces Direct, called 'Converged' or
'Disaggregated', and 'Hyper-Converged'. I don't think there is an agreed definition of these terms, but
one is:
A converged infrastructure runs with a pool of virtualized servers, storage and networking capacity,
and this infrastructure can then be shared efficiently by lots of applications and different business
areas. However, you can separate the individual components out if you wish, and run compute and
storage components on different clusters.
A Hyper-Converged option just uses a single cluster for compute and storage and networking.
Applications like Hyper-V virtual machines or SQL Server databases run directly on the servers
providing the storage. These components can’t be separated as the software-defined elements are
implemented virtually and integrated into the hypervisor environment.

Before you start implementing, you need to work out a good naming standard for your servers, decide which domain you will use, and choose the type of network. The network must be RDMA, either iWARP or RoCE. If you use RoCE you will need the version number and the model of your top-of-rack switch.

Deploy the Windows Servers

INSTALL THE SERVER CODE

The first step is to install Windows Server 2016 Datacenter Edition on every server that will be in the
cluster. You would typically deploy and manage these servers from a separate management server,
which must be in the same domain as the cluster, and have RSAT and PowerShell modules installed
for Hyper-V and Failover Clustering.
The individual servers are initially configured with the local administrator account, \Administrator. To
manage Storage Spaces Direct, you'll need to join the servers to a domain and use an Active
Directory Domain Services domain account that is in the Administrators group on every server.
To do this, log onto your management server and open a PowerShell console with Administrator
privileges. Use Enter-PSSession to connect to each server and run the following cmdlet, but
substitute your own computer name, domain name, and domain credentials:

Add-Computer -NewName "Server01" -DomainName "domain.com" -Credential "DOMAIN\User" -Restart -Force

If your company's policies allow it, add your storage administrator account to the local Administrators group on each node with the following command, substituting your own domain account:

Net localgroup Administrators <Domain\Account> /add

INSTALL SERVER FEATURES

Now you need to install various server features on all the servers:

 Failover Clustering
 Hyper-V
 RSAT-Clustering-PowerShell
 Hyper-V-PowerShell

If you want to host any file shares, such as for a converged deployment, you also need to install File
Server, and you will need Data-Center-Bridging if you're using RoCEv2 instead of iWARP network
adapters.
To install these using PowerShell, use this command:

Install-WindowsFeature -Name "Hyper-V", "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell", "FS-FileServer"

Or you could use this little script to install the features on all the servers in the cluster at the same time. You will need to modify the list of variables at the start of the script to fit your servers and features.

# Fill in these variables with your values
$ServerList = "Server01", "Server02", "Server03", "Server04"
$FeatureList = "Hyper-V", "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "Hyper-V-PowerShell", "FS-FileServer"

# This part runs the Install-WindowsFeature cmdlet on all servers in $ServerList,
# passing the list of features into the scriptblock with the "Using" scope modifier
# so you don't have to hard-code them here.
Invoke-Command ($ServerList) {
    Install-WindowsFeature -Name $Using:FeatureList
}

Configure the network

Full details are not given here, as this will depend on exactly what type of network you use. Microsoft recommends a speed of at least 10 GbE and using remote direct memory access (RDMA). RDMA can be either iWARP or RoCE but must be Windows Server 2016 certified. You may also need to do some configuration of the top-of-rack switch.
Microsoft recommends that you use Switch-embedded teaming (SET) with Storage Spaces Direct.
SET was introduced with Windows Server 2016, within the Hyper-V virtual switch. SET reduces the
number of physical NIC ports required, because it allows the same physical NIC ports to be used for
all network traffic while using RDMA.
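As an illustration, a SET team with RDMA-enabled host virtual NICs can be built with commands along these lines; the switch and adapter names are assumptions, and a RoCE network would also need DCB/PFC configuration on the hosts and switches:

# Create a Hyper-V virtual switch with switch-embedded teaming over two physical NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1", "NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add host virtual NICs for storage traffic and enable RDMA on them
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB1"
Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name "SMB2"
Enable-NetAdapterRDMA -Name "vEthernet (SMB1)", "vEthernet (SMB2)"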

Configure Storage Spaces Direct

You must run these steps in a local PowerShell session on the management system, with
administrative permissions. Do not run them remotely.

EMPTY THE CLUSTER DRIVES

First you need to make sure the drives you are going to configure are clean and empty. Microsoft
provides a script to do this, which is reproduced below. Obviously, wiping disks is a high risk process,
so make sure you understand this process, and do not delete data that you need. It would be a good
idea to make sure you have a current backup before you begin.

Microsoft warning - This script will permanently remove any data on any drives other than the
operating system boot drive!
# Fill in these variables with your values
$ServerList = "Server01", "Server02", "Server03", "Server04"

Invoke-Command ($ServerList) {
    Update-StorageProviderCache
    Get-StoragePool | ? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
    Get-StoragePool | ? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
    Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue
    Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -ne RAW | % {
        $_ | Set-Disk -isoffline:$false
        $_ | Set-Disk -isreadonly:$false
        $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
        $_ | Set-Disk -isreadonly:$true
        $_ | Set-Disk -isoffline:$true
    }
    Get-Disk | Where Number -Ne $Null | Where IsBoot -Ne $True | Where IsSystem -Ne $True | Where PartitionStyle -Eq RAW | Group -NoElement -Property FriendlyName
} | Sort -Property PsComputerName, Count
 
The output will look like this, where Count is the number of drives of each model
in each server:
 
Count Name PSComputerName
----- ---- --------------
4     ATA SSDSC2BA800G4n     Server01
10    ATA ST4000NM0033       Server01
4     ATA SSDSC2BA800G4n     Server02
10    ATA ST4000NM0033       Server02
4     ATA SSDSC2BA800G4n     Server03
10    ATA ST4000NM0033       Server03
4     ATA SSDSC2BA800G4n     Server04
10    ATA ST4000NM0033       Server04

VALIDATE THE CLUSTER

Now run the 'Test-Cluster' validation tool to make sure that the server nodes are configured correctly
and able to create a cluster using Storage Spaces Direct. This is a PowerShell command which you
run with an '-Include' parameter to run tests specific to a Storage Spaces Direct cluster. As always,
you need to substitute your own server names.

Test-Cluster -Node Server01, Server02, Server03, Server04 -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

CREATE THE CLUSTER

Assuming the test above ran successfully, you can now create your cluster using the following
PowerShell cmdlet.

New-Cluster -Name ClusterName -Node Server01,Server02,Server03,Server04 -NoStorage

If your servers are using static IP addresses you will need to add an extra parameter to the command:
-StaticAddress xxx.xxx.xxx.xxx.
The ClusterName parameter should be a NetBIOS name that is unique and 15 characters or less.
When creating the cluster, you'll get a warning that states - "There were issues while creating the
clustered role that may prevent it from starting. For more information, view the report file below." You
can safely ignore this warning. It's due to no disks being available for the cluster quorum.
Once the cluster is created, it can take quite a while for the DNS entry for the cluster name to be
replicated, depending on your environment and DNS replication configuration.

You should now configure a witness for the cluster. If your cluster has just 2 servers, then without a witness, if either server goes offline the other server will become unavailable as well. With a witness, clusters with three or more servers can withstand two servers failing or being offline. You can use
either a file share as a witness, or a cloud witness.
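A sketch of setting up either type of witness with PowerShell; the share path, storage account name and access key are placeholders:

# Option 1: a file share witness on a server outside the cluster
Set-ClusterQuorum -Cluster ClusterName -FileShareWitness "\\WitnessServer\WitnessShare"

# Option 2: a cloud witness in an Azure storage account
Set-ClusterQuorum -Cluster ClusterName -CloudWitness `
    -AccountName "yourstorageaccount" -AccessKey "yourstorageaccountkey"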

Now put the storage system into the Storage Spaces Direct mode with the following command, which
you should run as a PowerShell command with Administrator privileges from the management
system.

Enable-ClusterStorageSpacesDirect -CimSession ClusterName

If this command is run locally on one of the nodes, the -CimSession parameter is not necessary; otherwise use the cluster name you specified above. Using a node name may be more reliable, due to the DNS replication delays that can occur with a newly created cluster name. This command will also:

Create a single large pool.
If you have installed more than one drive type, configure the faster drives as the Storage Spaces Direct cache, usually for both reads and writes.
Create two default tiers, one called 'Capacity' and the other called 'Performance'. The cmdlet will analyse the available devices and configure each tier with a suitable combination of device types and resiliency.
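You can check what the cmdlet has built, again from the management system, with something like the following, where ClusterName is the cluster name used above:

# The Storage Spaces Direct pool (the primordial pools are excluded)
Get-StoragePool -CimSession ClusterName -IsPrimordial $false

# The default storage tiers the cmdlet created
Get-StorageTier -CimSession ClusterName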

CREATE THE CLUSTER VOLUMES

When this command is finished, which may take several minutes, the system will be ready for volumes to be created. You need to decide what type of volumes you want: the protection (mirroring or parity) and the cache behaviour, as described above. The easiest way to define the volumes is then to use the Microsoft New-Volume cmdlet. This single cmdlet automatically creates the virtual disk, partitions and formats it, creates the volume with a matching name, and adds it to cluster shared volumes. Check the Microsoft help pages for details, as how you use this command will depend on what types of volumes you have available and how you want them to be configured.
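As an illustration, a single call of the form below creates a 1 TB cluster shared volume, formatted with ReFS, in the Storage Spaces Direct pool; the volume name and size are placeholders, and the resiliency used depends on the tiers created in your pool:

# Create a 1 TB cluster shared volume in the Storage Spaces Direct pool
New-Volume -CimSession ClusterName -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 1TB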

INSTALL CSV CACHE

An optional extra is to enable the cluster shared volume (CSV) cache, which uses system RAM as a write-through block-level cache. This will cache read operations if they are not already cached by the Windows cache manager, and it can improve performance for Hyper-V and Scale-Out File Server. The downside is that the CSV cache takes system memory away from other applications, such as VMs on a hyper-converged cluster, so you need to get the right balance between storage performance and application requirements.
You enable CSV cache with a PowerShell command on the management system, opened with an
account that has administrator permissions on the storage cluster. You will need to substitute your
own ClusterName and your appropriate cache size.

(Get-Cluster YourClusterName).BlockCacheSize = YourCacheSize #Size in MB

INSTALL VMS
If you are installing a hyper-converged cluster, you need to provision virtual machines on the Storage Spaces Direct cluster. You should store the VMs in the system's CSV namespace, for example C:\ClusterStorage\Volume1, just like clustered VMs on failover clusters.
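A minimal sketch of creating one such VM and making it highly available; the VM name, sizes and virtual switch name are assumptions:

# Create a VM whose configuration and virtual hard disk both live on the cluster shared volume
New-VM -Name "VM01" -Path "C:\ClusterStorage\Volume1" -Generation 2 `
    -MemoryStartupBytes 4GB -SwitchName "SETswitch" `
    -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB

# Register the VM with the failover cluster so it can move between nodes
Add-ClusterVirtualMachineRole -VirtualMachine "VM01"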

Create the Scale-Out File Server role

You only need to do this if you are deploying a converged solution; if you're deploying a hyper-converged cluster, ignore this section.

Open a PowerShell session that's connected to the file server cluster, and run the following
commands to create the Scale-Out File Server role. You will need to put in your own cluster name,
and come up with your own name for the Scale-Out File Server role.

Add-ClusterScaleOutFileServerRole -Name YourSOFSName -Cluster YourClusterName

Now create file shares, one file share per CSV per virtual disk. Microsoft provides a Windows PowerShell script to partially automate the deployment; check the Microsoft website for details.
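As an illustration, one such share could be created manually as below; the folder, share name and the accounts being given access (the Hyper-V administrators and the Hyper-V hosts' computer accounts) are placeholders:

# Create a folder on a cluster shared volume and share it through the Scale-Out File Server
New-Item -Path "C:\ClusterStorage\Volume1\Shares\VMShare01" -ItemType Directory
New-SmbShare -Name "VMShare01" -Path "C:\ClusterStorage\Volume1\Shares\VMShare01" `
    -FullAccess "DOMAIN\HVAdmin", "DOMAIN\HVHost01$", "DOMAIN\HVHost02$"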


