Section 01 Introduction To Servers Course Guide
SERVER CONCEPTS:
SECTION 01
INTRODUCTION TO
SERVERS
COURSE GUIDE
Dell PowerEdge Server Concepts: Section 01 Introduction to Servers
Server OS
Overview
Operating Systems
Hardware Compatibility List (HCL)
Installation and Deployment Methods
Virtual Memory
Virtual Media
Resources
Supporting Resources: Introduction to Servers
Certification Journey Map
At the end of this course guide, the learner should be able to:
List and describe the different types of servers, including form factors.
Identify storage servers and how they function, including the services they
provide.
Explain key RAID concepts, such as the different RAID levels and how to
manage each.
Set up and configure servers to function within an IT infrastructure.
Server Introduction
Server Overview
Clients A and B each send a request to the server, and the server returns a response.
What is a server?
The server receives a request from a client for data processing. The server then
sends the requested information back to the client that made the request.
All these features and more are covered in greater detail throughout this course.
The term server form factor describes the size, shape, and packaging of a
hardware device. The form factors of the Dell PowerEdge server portfolio range
from standard performance to high performance.
1: Tower servers: Tower servers contain multiple disk drive bays and expansion
card slots. The advantage of a tower server lies in its compact shape. Tower
servers are designed to be installed in a standard office space instead of a data
center. The tower server's simplicity and robustness make it an ideal choice for a
small company.
Modular servers: Modular servers are inserted into a specially designed chassis
slot. Modular servers reduce the amount of rack space consumed in a data center.
Examples of modular servers are the MX740c, MX840c, and MX750c. Each can be
inserted into the MX7000 chassis platform.
Dell PowerEdge rack servers have a capital R (for rack) in the name. Examples of
PowerEdge rack servers are the Dell PowerEdge R650 and Dell PowerEdge
R750xa (accelerated rack server). Rack servers come in different sizes and are
measured in rack units (U). One rack unit (1U) is 1.75 inches (4.445 cm) in height,
and servers are sold in 1U, 2U, 3U, and 4U heights. An example of a 1U server is
the R650; the R750xa is a 2U server.
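Because every rack unit is a fixed 1.75 inches (4.445 cm), the physical height a server occupies follows directly from its U size. A minimal illustration of the arithmetic (not Dell tooling):

```python
# Convert rack units (U) to physical height.
# 1U = 1.75 inches = 4.445 cm.
INCHES_PER_U = 1.75
CM_PER_U = 4.445

def rack_height(units: int) -> tuple[float, float]:
    """Return (inches, centimeters) for a server of the given U size."""
    return units * INCHES_PER_U, units * CM_PER_U

# A 1U server such as the R650, and a 2U server such as the R750xa:
print(rack_height(1))  # (1.75, 4.445)
print(rack_height(2))  # (3.5, 8.89)
```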
Dell PowerEdge MX7000 with MX750c sleds on the left and an MX840c sled on the right.
Dell PowerEdge modular solution enclosures (chassis) offer a flexible platform for
managing physical, virtual, and logical infrastructures. The modular servers work
inside the modular chassis and help to eliminate resource silos and optimize data
center operations.
One example of a PowerEdge modular chassis is the MX7000, which can hold 1U
modular servers (sleds) like the MX750c or 2U servers like the MX840c. The
lowercase c in the name indicates a compute server (sled).
Data Center Scalable Solutions (DSS) systems meet the specific needs of
customers. The customers can include web tech, telecommunication services
providers, hosting companies, research organizations, and oil and gas
organizations.
The Dell DSS systems focus on delivering tailored infrastructure to customers with
cloud-based architecture needs.
Rugged servers are industrial-grade OEM PowerEdge servers that can withstand
the extreme heat, dust, shock and vibration of factory floors, construction sites,
mobile command centers, Edge computing sites, and other extreme environments.
PowerEdge AX Ready Nodes: software defined, with remote connection to Azure.
Dell PowerEdge Ready Nodes, built on Dell PowerEdge servers, enable easy
deployment with factory-installed, pre-configured, and pre-tested configurations
that add up to a solution that scales quickly to meet growing needs.
For example, the Dell Microsoft Storage Spaces Direct Ready Nodes (PowerEdge
AX servers) are used for the Dell Integrated System for Microsoft Azure Stack HCI
(hyperconverged infrastructure) solution. Dell Integrated System for Microsoft
Azure Stack HCI encompasses a wide range of Hyper-Converged Infrastructure
configurations built on Dell Microsoft Storage Spaces Direct (S2D). The virtualized
storage pool abstracted from the PowerEdge servers becomes the software-
defined server storage pool used by the Azure Stack HCI solution.
Another example is the Dell vSAN Ready Nodes used in VxRail or PowerFlex HCI
solutions. Dell vSAN Ready Nodes are built on Dell EMC PowerEdge servers that
have been pre-configured, tested and certified to run VMware vSAN. Each Ready
Node includes just the right amount of CPU, memory, network I/O controllers,
HDDs, and SSDs for VMware vSAN. Much like the Azure Stack HCI S2D solution,
VMware vSAN is Software Defined Storage (SDS) that leverages a distributed
control plane abstraction to create a pool of storage from disparate server-based
disk hardware. That abstraction is also comparable to the way the vSphere ESXi
hypervisor converts a cluster of server hardware into a pool of compute resources
(VMs).
Server Storage
Server storage options: native disk storage, a storage array, and extended storage as Direct Attached Storage (DAS), Network Attached Storage (NAS), or a Storage Area Network (SAN).
Storage solutions can be the native server storage capacity through the server
subsystem components such as HDDs, SSDs, and other components. Storage
solutions can also be the extended storage in the form of Direct Attached Storage
(DAS), Network Attached Storage (NAS), Storage Area Networks (SAN), and
Cloud storage (Storage as a Service).
Storage servers used in large-scale storage solutions differ from other servers in
cost, performance, size, and storage space, based on what the organization needs.
The large-capacity storage for storage servers ranges from hundreds of terabytes
to petabytes of data.
Both the native server storage and extended storage solutions are explored in
more detail in this section.
Extended storage solutions are required to store, protect, and save large amounts
of raw data and information.
Direct Attached Storage (DAS) is a storage solution where the server or computer
is directly attached to the storage. DAS attaches to a server host bus adapter
(HBA) that has a direct connection to a storage device. The storage device uses a
storage controller to connect to the HBA. The HBA connection to the storage or
storage enclosure uses a block-level protocol.
Clients connect to a database server that attaches directly to storage through a storage controller connection using the SAS, iSCSI, or FC protocol.
Advantages
• Minimal hardware cost
• Easy setup
• Easy management for small environments
Disadvantages
• Limited scalability
• Potential single points of failure at each server
• Difficult to manage for larger environments
Network Attached Storage (NAS) servers attach directly to a LAN. NAS devices are
also called appliances because they enable data sharing among network clients. A
NAS device provides file-level storage.
Clients and workstations access application and file/print servers on the LAN; the NAS appliance shares files using the NFS or CIFS protocol.
Advantages
• Efficient file storage and management
• Added to existing LANs and can co-exist with SANs
• Simplified management
Disadvantages
• Not well suited to applications that require block-level storage
• Limited scalability
• Relies on TCP/IP networks
Clients and workstations reach the application, database, and file/print servers over the Local Area Network (LAN); the servers connect to a storage array through a separate Storage Area Network (SAN) using the FC or iSCSI protocol.
Advantages
• Exceptional performance
• Extremely fault tolerant and highly reliable
• Highly scalable
• Centralized management
• Utilizes separate network for storage - can reduce load on LAN
• Shared access to storage pools, backup, restore, and Disaster Recovery (DR)
services
Disadvantages
• Higher initial cost
• More complex to deploy
• Vendor-specific technology
• Fibre Channel technology skills are required
Cloud Storage
Client workstations with internet connections store, retrieve, and share data on a cloud-based storage server over Internet Protocol.
Cloud storage (Storage as a Service) uses remote storage servers that provide a
cloud-based storage service to client workstations. The storage service is available
to any workstation in any geo-location within the organization, as long as the client
workstation has internet access rights to the remote storage server.
Client workstations can store, share, and retrieve media files (data) through the
cloud, either through an on-prem cloud solution or through a public cloud service
like Microsoft OneDrive.
Components in high-end storage devices vary, depending on the use case of the
device.
SAS cable
Serial Attached SCSI (SAS) is a protocol used for accessing the system storage
device. SAS transfers data digitally over the cable, 1 bit at a time. SAS 4.0 doubles
the data transfer rate for SAS devices, up to 22.5 Gb/s.
SATA connectors: host receptacle connector, interface connector, power connector, and a Molex-to-SATA adapter cable.
Features and benefits:
• SATA 3 has a top transfer speed of 6 Gb/s.
• Cable management is easy, with support for extra cable length.
• SATA has one drive per cable connector.
A solid-state drive (SSD) is a nonvolatile storage device that stores data in solid-
state flash memory. Solid-state drives are not traditional hard drives, as there are
no moving parts. An SSD is also known as a solid-state disk because it does not
use magnetic or optical storage media; it contains an array of semiconductor
memory organized as a disk drive.
A system that can continue operating when a failure or fault occurs in some system
components has the property of fault tolerance. SSDs are known for their fault-
tolerant architecture.
A client workstation sends data to and retrieves data from storage servers, each with 12x NVMe drives.
NVMe offers higher performance in comparison to legacy SAS and SATA. NVMe
not only accelerates existing applications but also enables new applications and
capabilities for real-time workload processing in the datacenter.
BOSS card with its PCI connection to the server system board.
Users can use the Boot Optimized Storage Solution (BOSS) for virtualization and
HCI solutions. BOSS uses one or two read-intensive (Boot Class) 80 mm M.2
SATA Solid-State Devices (SSDs). BOSS can run a single device in pass-through
mode or two devices in hardware RAID 1 (mirroring).
NVDIMM architecture: the NVDIMM controller, DRAM, NAND flash, a multiplexer, and a backup power source. The NVDIMM is inserted into the server system board memory slot.
When a server powers off, the information within the NVDIMM is automatically
secured to the flash chip.
• NVDIMM offers persistence and reliability with the addition of the nonvolatile
feature to DRAM.
• NVDIMM multiplexers (MUX) isolate the host controller from the DRAM memory
during 'save' and 'restore' operations.
• The operating system uses NVDIMMs as a data storage device for speed.
• NVDIMM supports non-RAID migration.
Starting in 15G, Dell PowerEdge servers use Intel® Optane™ Persistent Memory
(Barlow Pass) technology. The Intel® Optane™ DC Persistent Memory (Barlow
Pass) technology allows assigned applications to retain data during a power loss,
system shutdown, or system errors. Barlow Pass (BPS) uses persistent memory as
storage, rather than traditional memory. Also, Barlow Pass has a massive memory
capacity that allows more data to pass through the memory bus. Ultimately, Dell
PowerEdge integrates the Intel® Optane™ Datacenter (DC) Persistent Memory
Module (DCPMM) to bridge functionality between the traditional memory and
storage.
Performance pyramid showing where DCPMM sits between NVMe and DRAM: DRAM on the memory bus at the top, then Intel Optane DC Persistent Memory, then Intel Optane DC SSDs and NVMe SSDs on the PCIe bus, with SAS/SATA HDDs at the base.
DCPMM is a system acceleration solution for the 7th generation and 8th generation
Intel® Core™ Processor platforms. The DCPMM solution comes in a module
(hardware) format. By placing this new memory media between the processor and
a slower SATA-based storage device (HDD, SSHD, or SATA SSD), administrators
can store commonly used data and programs closer to the processor, allowing the
system to access this information more quickly and improving overall system
performance.
• Persistence capabilities
• Memory capacity
• Interface speed
DCPMM positions itself as a unique intermediate layer between DRAM and NVMe
on the performance pyramid.
To learn more about Intel® Optane™, visit the Intel® Optane™ Technology page
on www.intel.com or go to the www.dell.com/support site and search for the
Optane Memory Module - Frequently Asked Questions knowledge-based article.
Interface speeds are nearly as swift as DRAM, with only a slight performance reduction.
The IDSDM provides two SD card slots that are dedicated to an embedded
hypervisor.
The IDSDM provides redundancy for the hypervisor SD card. If two new SD cards
are installed on the IDSDM, one of the cards is active (SD1) while the other card is
on standby (SD2). The data is written to both cards, but the data is read from SD1.
If SD1 fails or is removed, then SD2 automatically becomes active. This feature is
available on Dell PowerEdge models such as the R740 or R940.
IDSDM card with PCIe connector and write-protection DIP switches.
Redundancy means that a second SD card mirrors the content of the first SD card.
Either of the two SD cards can be considered the master card.
• High performance
• Provides redundancy
• High data protection
External PowerVault standalone Linear Tape Open front and rear views: the front panel has an LED display, tape drives, and a power-on button; the rear panel has SAS connectors, a power connector, and a fan enclosure.
Linear Tape Open (LTO) is an optional storage solution for long-term and archival
data storage. Dell PowerVault LTO is a magnetic tape drive, which provides
business continuity and disaster recovery. Dell PowerVault Tape drives can be
internal (tape drive) or external (tape library or media library) to a PowerEdge
server.
The image on the right is an external Dell PowerVault LTO standalone dual tape
drive rack mount.
Use cases:
• Healthcare imaging
• Media and entertainment
• Video surveillance
• Geophysical (oil and gas) data
• Computational analysis, such as genome mapping and event simulations
PowerVault LTO is covered in more detail in the Server Backup topic in this course.
A storage administrator uses storage resource management and capacity planning software across five storage arrays for current management and future planning.
RAID Concepts
What is RAID?
A virtual disk of data blocks (Block 1, Block 2, Block 3) abstracted from the server's physical disk (Disk 1).
The RAID disk group appears to the host system either as a single storage unit or
multiple logical units of data blocks. The data blocks are abstracted from the
server's physical disks. RAID disks provide high performance by increasing the
number of disk drives that are used for saving and accessing data. A RAID disk
subsystem improves I/O performance and data availability. RAID data throughput
improves server performance because several disks are accessed simultaneously.
RAID systems also improve data storage availability and fault tolerance. Data loss
that is caused by a hard drive failure can be recovered by rebuilding missing data
from the remaining physical disks containing data or parity.
Dell PERC H740P Mini integrated RAID controller with heat sinks and battery; it connects to the system board directly or through the expansion riser (adapter).
A Dell PowerEdge RAID Controller (Dell PERC) is a series of RAID disk array
controllers made by Dell for PowerEdge servers. The controllers support SAS and
SATA hard disk drives (HDDs), solid-state drives (SSDs), and nonvolatile memory
express (NVMe) SSDs. A PERC can be integrated into the PowerEdge server
system board or connected to the system board through a PCIe expansion riser
slot.
• RAID 0: striping
• RAID 1: mirroring
• RAID 5: striping with parity
• RAID 6: striping with dual parity
• RAID 10: multiple mirrored drives with striping
• RAID 50: multiple RAID 5 sets with striping
• RAID 60: combination of RAID 6 and RAID 0 with striping
A RAID level refers to how the data is distributed across the drives depending on
the required level of redundancy and performance. The different schemas, or data
distribution layouts, are named with the word RAID followed by a number, for
example, RAID 0 or RAID 1.
Each schema, or RAID level, provides a different balance to the key goals of
reliability, availability, performance, and capacity. RAID levels greater than RAID 0
provide protection against unrecoverable sector read errors and against failures of
whole physical drives.
Important Definitions
RAID levels perform different functions based on how they are used. It is necessary
to understand how each function works.
Disk Striping
Disk striping divides physical disk data into data blocks and places the data blocks
across multiple virtualized storage disks. A stripe unit is the slice of data blocks
that comes from one physical disk drive.
RAID 0 applies disk striping. RAID 5 applies disk striping with parity.
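As a rough sketch (not Dell code), striping can be pictured as dealing data blocks round-robin across the disks of the array:

```python
# Illustrative sketch of disk striping: data blocks are distributed
# round-robin across the disks in the array (RAID 0 style, no parity).
def stripe(blocks: list[str], num_disks: int) -> list[list[str]]:
    """Return a per-disk list showing where each block lands."""
    disks: list[list[str]] = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)  # block i goes to disk i mod n
    return disks

layout = stripe(["B1", "B2", "B3", "B4", "B5", "B6"], 3)
print(layout)  # [['B1', 'B4'], ['B2', 'B5'], ['B3', 'B6']]
```

Each inner list is one disk's stripe units; reads and writes can hit all three disks in parallel.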
Disk Mirroring
The disk data is virtualized and data block 1 is mirrored onto drive 2: Disk 1 and Disk 2 each hold Block 1, providing redundancy.
In data storage terms, disk mirroring is the duplication of data onto separate drives
to protect a system from data loss due to failures. In RAID, the same concept is
used to mirror virtualized data into different drives.
Parity
D0    D1    D2    P0-2
D3    D4    P3-5  D5
D6    P6-8  D7    D8
P9-11 D9    D10   D11
RAID 5 example - parity data blocks striped across four disks from 12 data block sources (D0-D11).
A dedicated parity drive is associated with RAID 3 or 4. Parity data provides fault
tolerance: parity is calculated from the data on at least two disks and stored on an
additional disk. Should any of the disks fail, the lost data can be recalculated from
the data that remains on the other disks.
When a system continues operating even when system components fail, the
system is known as fault tolerant. RAID participates in system data fault tolerance
through implemented parity storage.
RAID levels that use parity are RAID 5 (single distributed parity), RAID 6 (dual
distributed parity), RAID 50 (striped and single parity), and RAID 60 (striped and
dual parity).
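The parity calculation can be demonstrated with XOR, the operation used for distributed parity: the parity block is the XOR of the data blocks, so any one lost block can be recomputed from the survivors. A minimal sketch:

```python
# Parity with XOR: parity = d0 ^ d1 ^ d2.
# If any one block is lost, XOR-ing the parity with the surviving
# blocks reproduces the missing data.
def xor_blocks(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])      # stored on another disk

# The disk holding d1 fails; rebuild it from the rest plus parity:
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)  # True
```

This is why a RAID 5 rebuild must read every surviving disk: each missing block is recomputed from all its peers plus the parity block.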
RAID 0
Pros:
Cons:
3: Usable space:
RAID 0 has no fault tolerance or redundancy so all the drive space can be used.
RAID 0 is the only RAID level with zero overhead. Overhead is a term that refers to
the cost of implementing RAID measured in usable space. RAID 0 does not
dedicate any space to redundancy so its overhead is zero.
RAID 0 requires a minimum of two drives, and all space is usable storage. The
capacity is the sum of the drive sizes, with "n" being the drive size. Simple
example: 400 GB + 400 GB = 800 GB of usable space.
4: Use case:
Striping is used for applications that require high-speed data access and maximum
storage capacity but do not require data redundancy.
Examples include audio, video streaming and editing, web servers, gaming, and
graphic design. The user of the system views the combination of drives as only one
drive. The amount of usable drive space is the equivalent of the combined space of
all the drives in the array.
RAID 1
1: RAID 1 is also known as disk mirroring. It is the simplest form of replicating the
data into two or more disks. If one drive fails, data requests are directed to the
disk's counterpart, allowing normal access to data without interruption. After the
defective disk is replaced, data from the surviving member of the mirror is rebuilt
onto the new disk.
Pros:
Cons:
3: The drawback of RAID 1 is the overhead: only 50% of the disk space is usable,
as the other half is used for mirroring.
4: Mirroring is used with applications that require redundancy and faster read rates
or with entry-level systems that require redundancy but only have two drives
available.
RAID 5
1: RAID 5 provides data redundancy by using data striping with parity information.
Rather than dedicating a drive to parity, the parity information instead is striped
across all disks in the array. RAID 5 requires a minimum of three drives to
implement. This RAID level delivers a high read rate when data is accessed in
large chunks.
Pros:
Cons:
3: Usable space:
Example: Using the minimum three disk drives configuration in RAID 5, if each
drive is a 400GB disk drive, then the single logical RAID disk has a total usable
storage capacity of 800GB storage. Adding more physical drives does not provide
more allowance for drive failures. Additional drives results in higher capacity but
does not add additional redundancy.
A RAID 5 with 20 drives can only sustain the loss of one drive the same as RAID 5
with three drives. However, adding more drives reduces overhead cost.
4: Use case:
RAID 5 is the most common RAID configuration in use. RAID 5's versatility makes
it useful for general-purpose multi-user systems. RAID 5 provides solid efficiency,
versatility, and cost balance.
RAID 6
1: RAID 6 provides data redundancy by using data striping with dual parity
information. RAID 6 dedicates two drives' worth of space to parity. In addition,
RAID 6 introduces a new state: partially degraded. When a single drive fails within
the virtual disk, there is still redundancy, so the virtual disk is considered partially
degraded.
Pros:
Cons:
• Loss of two drives requires two rebuilds, one for each drive lost and replaced.
• The lost drives cannot be rebuilt simultaneously.
• RAID 6 requires a minimum of four drives to create.
Example: Combining four 400 GB drives creates a single logical drive with a total
usable capacity of 800 GB. Adding more drives does not provide more allowance
for drive failures, but it does improve overhead cost. Additional drives result in
higher capacity but do not add redundancy.
4: RAID 6 is recommended when using large-capacity drives, due to the risk of a
second drive failure in RAID 5. RAID 6 is similar to RAID 5 but adds a second
distributed parity set, compared to RAID 5's single distributed parity. However, the
second parity calculation results in a decrease in write performance. RAID 6
provides a higher level of fault tolerance than RAID 5, at the cost of an additional
disk.
RAID 10
Some RAID levels are combined to produce a two-digit RAID level. RAID 10 is a
combination of levels 1 (mirroring) and 0 (striping). RAID 10 is also identified as
RAID 1 + 0.
Data is first striped across multiple drives, then the complete array of drives is
mirrored onto another set of drives. RAID 10 can be considered striped mirrors.
Pros:
• RAID 10 has the same level of fault tolerance as RAID 1 in each mirrored set.
• RAID 10 can stay operational even with multiple failures, as long as no mirrored
set loses more than one drive.
Cons:
3: RAID 10 configuration requires a minimum of four hard drives - two drives per
mirrored set, for any number of mirrored sets. The overhead for RAID 10 is equal to
that of RAID 1. This RAID level can sustain multiple failed drives, as long as both
drives of a mirrored set do not fail simultaneously. While RAID 10 can recover from
a single failure in each set - if both drives in one set fail, the array cannot be
recovered. Note: Additional drives must be added in pairs; each pair creates one
additional RAID 1 element.
4: RAID 10 is often used when the importance of having additional copies of critical
data outweighs the cost of additional drives. RAID 10 is not only used for additional
copies, but it also allows for more than one disk failure simultaneously - provided
each disk failure occurs in a different mirrored set.
Example: If a RAID 10 array was configured using four 200GB hard drives, the
total available space would be the total drive space (800GB) minus the capacity of
two complete drives (400GB) for a total of 400GB available space.
RAID 50
1: RAID 50 combines multiple RAID 5 sets with data striping (RAID 0). This
increases reliability and performance over standard RAID 5, and the array can
accommodate multiple drive failures (one per RAID 5 set).
RAID 50 requires at least three drives per RAID 5 set. One drive failure per RAID 5
set can be tolerated.
However, if two drives fail in the same RAID 5 subset, the array cannot be
recovered.
Pros:
Cons:
3: Usable space:
RAID 50 requires the capacity of one drive per RAID 5 subset for parity information.
Theoretically RAID 50 can be expanded. However, it must be done in increments of
RAID 5.
Example: In a RAID 50 array of six 400GB hard drives, the total available space
would be 1600GB (800GB in each RAID 5 set).
4: With RAID 50, the RAID levels are nested within one another to provide the
benefits of both. RAID 50 is typically used to provide a balance between
performance, reliability, and cost. It is faster than a typical RAID 5, but expands
fault tolerance.
Added to this balance is the additional storage space. With RAID 10, the
configuration gives up half of all available space for the mirror.
RAID 50 helps to balance out efficient use of the available space. Like RAID 5,
space efficiency increases as more drives, beyond the minimum, are added.
RAID 60
1: Definition:
RAID 60 combines multiple RAID 6 sets with data striping (RAID 0). The array can
sustain the loss of two drives per RAID 6 set. RAID 60 provides a higher degree of
fault tolerance than RAID 50, since two disks per subset may fail without data loss.
However, if three drives fail in the same RAID 6 subset, the array cannot be
recovered.
Pros:
Cons:
3: Usable space:
RAID 60 requires the capacity of two drives per RAID 6 subset for parity
information. Theoretically RAID 60 can be expanded. However, it must be done in
increments of RAID 6.
Example: In a RAID 60 array of eight 400GB hard drives, the total available space
would be a total of 1600GB. As with RAID 50, the more drives added to each RAID
6 set, the better the overhead cost ratio.
4: Use case:
RAID 60 takes the same nested approach as RAID 50. Like RAID 50, RAID 60 is
used for a balanced approach to performance, fault tolerance, and storage space.
Identifying the best RAID level configuration includes considering the main purpose
of the server and deciding which factors are essential:
• Performance
• Reliability
• Speed
• Capacity
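The usable-space rules described for each level can be collected into one small calculator. This is an illustrative sketch, not a Dell tool; it assumes identical drives and uses the drive counts and sizes from the examples above:

```python
# Usable capacity per RAID level, for n identical drives of a given size.
# Overhead: RAID 0 none; RAID 1/10 half the space; RAID 5 one drive;
# RAID 6 two drives; RAID 50/60 one or two drives per subset.
def usable_gb(level: str, drives: int, size_gb: int, subsets: int = 2) -> int:
    if level == "0":
        return drives * size_gb
    if level in ("1", "10"):
        return drives * size_gb // 2
    if level == "5":
        return (drives - 1) * size_gb
    if level == "6":
        return (drives - 2) * size_gb
    if level == "50":  # one parity drive per RAID 5 subset
        return (drives - subsets) * size_gb
    if level == "60":  # two parity drives per RAID 6 subset
        return (drives - 2 * subsets) * size_gb
    raise ValueError(f"unknown RAID level: {level}")

print(usable_gb("0", 2, 400))    # 800
print(usable_gb("5", 3, 400))    # 800
print(usable_gb("6", 4, 400))    # 800
print(usable_gb("10", 4, 200))   # 400
print(usable_gb("50", 6, 400))   # 1600
print(usable_gb("60", 8, 400))   # 1600
```

The printed values match the worked examples in each RAID section: striping uses every drive, mirroring halves the space, and parity costs one or two drives per set or subset.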
Software RAID
Software RAID uses the built-in functionality of an operating system and does not
require any additional equipment to connect to different devices. Software RAID
depends on system resources (processor and memory), the operating system, and
the RAID application.
Examples of software RAID are the Dell PERC S140 and S150 controllers. The
S140 and S150 controllers support up to 30 nonvolatile memory express (NVMe)
PCIe SSDs, SATA SSDs, and SATA HDDs depending on the system backplane
configuration.
• For more information about the PowerEdge RAID Controller S140 supported
OS and management applications, go to www.dell.com/support and search for
"Dell PowerEdge RAID Controller S140 User’s Guide".
• For more information about the PowerEdge RAID Controller S150 supported OS
and management applications, go to www.dell.com/support and search for "Dell
PowerEdge RAID Controller S150 User's Guide".
Hardware RAID
Example of the hardware storage configuration of a RAID controller using the Lifecycle Controller.
Hardware RAID is a method where the drives are connected to a hardware RAID
controller that is built into a system board, a different server, or a separate RAID
card. The Hardware RAID is independent of system resources and operating
system.
Hardware RAID vs. software RAID:
• A hardware RAID solution has its own processor and memory, improving the
system's overall I/O performance; software RAID runs the RAID task on the
system's CPU.
• The hardware RAID component does not have any data integrity issues; software
RAID has data integrity issues due to system crashes.
• Hardware RAID is more expensive than software RAID; software RAID is low
cost - the only cost is the additional disk drives.
Configuring RAID
To manage RAID in a server, administrators configure the RAID levels and virtual
disks for the storage controllers. Dell PERC controllers work with the different Dell
PowerEdge servers. A Dell PERC is either integrated into the server system board
or added as an extension card adapter. To accomplish the RAID level and virtual
disk configuration for the PERC controller, administrators use different applications.
The server administrator boots the server hardware and then presses F2 to access
the System Setup Utility. The Setup interface provides a System Setup Main Menu.
Now the administrator is ready to configure RAID for the server.
Example of a R740 server with a PERC H740P adapter ready for configuration. The screen displays
the selected RAID level.
1. Select the desired PERC controller from the Device Settings screen. In the case
of the screen capture example, the PERC controller in slot 6, PERC H740P is
selected.
2. Select Configuration Management in the RAID Controller Main Menu.
3. Select Create Virtual Disk from the main menu.
4. Select the RAID level, in the case of the screen capture, the administrator
selects RAID 5.
5. Select the Physical Disks for the controller.
6. Create the Virtual Disks.
Lifecycle Controller launch from iDRAC or from the LCC login screen.
The Integrated Dell Remote Access Controller (iDRAC) and the Lifecycle Controller
(LCC) work together to provide Out-of-Band server management. To configure
RAID with the LCC, an administrator either logs into the LCC independently or
launches the LCC from the iDRAC Virtual Console. The LCC Home screen menu
provides the Configure RAID option.
The Lifecycle Controller OS deployment screen with the message to configure RAID first.
The Lifecycle Controller provides OS deployment for a server. However, the first
step in the five-step process to deploy an OS is to configure RAID. If the RAID is
already configured, the first step can be skipped.
Using iDRAC, administrators can perform most of the functions that are available in
OpenManage Storage Management (OMSA) including real-time (no reboot)
configuration commands.
Configure iDRAC
• System Setup 11
• Dell Lifecycle Controller 12
• Boot Manager13
• Preboot Execution Environment (PXE) 14
• System BIOS 15
• iDRAC Settings 16
• Device Settings 17
iDRAC settings are set up and configured using the Unified Extensible Firmware
Interface (UEFI).
17 Enables administrators to configure devices settings such as an integrated
There are different ways to access the System Setup for a PowerEdge server.
1:
Press <F2> to directly access the System Setup or press <F11> to launch the Boot
Manager. On the Boot Manager screen, select Boot Manager -> Launch System
Setup.
2:
For remote iDRAC users, System Setup is initiated on the next reboot by selecting
it from the Next Boot drop-down list in the virtual console.
3:
Users launch the System Setup by selecting the System Setup tab of the Lifecycle
Controller.
BIOS
Diagram: the BIOS boot flow, from power-on through the BIOS boot code and the
Master Boot Record to the operating system kernel.
Basic Input Output System (BIOS) is a set of system instructions that resides on a
Read-Only Memory (ROM) chip on the system board.
CMOS
The BIOS code starts and reads all the contents of CMOS to understand the
system configuration.
Because the CMOS is a RAM chip, its settings would be erased from memory when
the system is shut down. A CMOS battery (typically a CR2032 coin cell battery)
therefore provides constant power to the chip so that the settings are retained.
UEFI
The Unified Extensible Firmware Interface (UEFI) is the evolution of BIOS and
meets modern computing needs.
Users set up the boot mode in a typical server by selecting the BIOS or the UEFI
boot mode.
The table below shows the differences between BIOS and UEFI boot modes.
BIOS:
• Offers 32-bit addressing and 512-byte blocks.
• The MBR scheme limits the addressable storage in the boot media to 2 TB.
UEFI:
• The GPT scheme uses 64-bit addressing.
• The boot media can be larger than 2 TB. UEFI does not use the MBR scheme.
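The 2 TB MBR limit follows directly from the addressing arithmetic: MBR stores sector addresses as 32-bit values, and legacy disks use 512-byte sectors. A quick check of the math:

```python
# MBR holds 32-bit Logical Block Addresses (LBAs) over 512-byte sectors.
max_sectors = 2 ** 32             # sector count addressable with 32 bits
sector_size = 512                 # bytes per legacy sector
max_bytes = max_sectors * sector_size

print(max_bytes)             # 2199023255552 bytes
print(max_bytes / 2 ** 40)   # 2.0, i.e., exactly 2 TiB
```

GPT's 64-bit addressing removes this ceiling, which is why UEFI boot media can exceed 2 TB.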
The Boot Mode setting enables the system to boot in the traditional BIOS mode or
in UEFI mode.
Step 1
Launch System Setup. Click System BIOS -> Boot Settings and set Boot Mode
to UEFI. Click the back button and then select Finish to reboot the server.
Navigate back to the System Setup.
Step 2
When the system Boot Mode is set to UEFI, the BIOS provides a list of available
UEFI boot options. Click UEFI Boot Settings, then select UEFI Boot Sequence to
edit the boot order.
Step 3
In UEFI boot mode, PXE settings are configured by navigating to System Setup ->
System BIOS -> Network Settings -> PXE Device Settings. The Network
Settings option in the System BIOS menu is available only in UEFI boot mode.
Step 4
Configure the UEFI HTTP Device settings by navigating to System Setup ->
System BIOS -> Network Settings -> HTTP Device Settings. The settings are
similar to PXE settings with the addition of the URI setting, which specifies the
location of the bootstrap program.
Step 5
Navigate to System Setup -> System BIOS -> Network Settings-> iSCSI Device
Settings to access the Connection settings.
Step 6
Set up the UEFI iSCSI boot configuration under System Setup -> System BIOS ->
Network Settings -> iSCSI Device Settings -> Connection Settings to complete
the UEFI configuration.
iDRAC Configuration
To review the tasks and steps involved in completing the iDRAC Configuration
simulation job aid, download the job aid document from the on-demand resources
section. Or click the iDRAC Configuration Job Aid link to review the tasks and steps
online.
Device Settings
To review the tasks and steps involved in completing the Device Settings
simulation job aid, download the job aid document from the on-demand resources
section. Or click the Device Settings Job Aid link to review the tasks and steps
online.
Server OS
Overview
A server is typically a more powerful system than an average desktop. Servers are
built to handle heavier workloads and more applications, taking advantage of
server-specific hardware to increase productivity and reduce downtime. A
well-planned operating system installation provides a seamless deployment and
results in strong customer satisfaction.
Uses
• Does the user need email services? Examples include Exchange or Postfix.
• Will the server provide file and print services? Example: a file/print server.
• Does the user need domain services such as Active Directory or LDAP for
Linux?
Sizing
The key to sizing is workload. Gather metrics such as CPU utilization, RAM
usage, estimated disk space growth, backup methods, and backup media. Without
these elements, you cannot adequately size the server. When building servers,
users should consider the following. Examples:
• Information
• Router
• Gateway
• RAID
• Management network
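One of these metrics, estimated disk space growth, can be projected with simple compound-growth arithmetic. A minimal sketch (the growth rate and time horizon below are illustrative assumptions, not recommendations):

```python
def projected_storage_gb(current_gb, annual_growth_rate, years):
    """Project future storage need assuming steady compound growth."""
    return current_gb * (1 + annual_growth_rate) ** years

# 500 GB of data today, growing 20% per year, sized for a 3-year lifespan:
print(round(projected_storage_gb(500, 0.20, 3)))  # 864
```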
Operating Systems
An operating system is software that manages the hardware resources that are
associated with your desktop or laptop. The operating system manages the
communication between your software and your hardware.
Depending on the planned usage of the server, administrators can also install the
Microsoft Hyper-V role on the Microsoft Server OS. Hyper-V allows the user to
create and run a software version of a system, called a virtual machine (VM). Each
VM acts like a complete system, including an operating system and programs. VMs
provide more flexibility, help save time and money, and use hardware more
efficiently than running one operating system on physical hardware.
Hyper-V runs each VM in its own isolated space, which means you can run multiple
VMs on the same hardware simultaneously. This isolation prevents problems such
as a crash in one VM affecting other workloads, and lets different people, groups,
or services access different systems.
VMware ESXi
By consolidating multiple servers onto fewer physical devices, VMware ESXi
reduces space, power, and IT administrative requirements while driving high-speed
performance. ESXi manages current and legacy applications. With a maximum of
768 virtual CPUs and 24 TB of RAM, ESXi provides capability for any size of
workload.
Red Hat Enterprise Linux (RHEL) is an operating system based on Linux. RHEL is
an enterprise product that is certified both on hundreds of clouds and with
thousands of vendors. A Linux operating system also powers Android systems.
The latest release of Red Hat Enterprise Linux adds support for Microsoft SQL
Server, virtual private networks (VPNs), and email through Postfix.
SUSE Linux Enterprise Server (SLES) is an adaptable Linux server platform that is
easy to manage. SLES server platforms deploy business-critical workloads
on-premises, in the cloud, and at the edge.
Ubuntu
Ubuntu is a Linux operating system that is freely available to the IT community,
with professional support available. The Ubuntu user community adheres to the
ideas in the Ubuntu manifesto.
One major difference between the desktop edition and the server edition is the
graphical environment. The desktop edition provides graphical applications,
utilities, and the GNOME desktop. The server edition was created for the data
center and has an entirely text-based interface.
• Windows Server
− https://docs.microsoft.com/en-us/windows-
hardware/drivers/dashboard/windows-certified-products-list
• VMware ESXi
− https://www.vmware.com/resources/compatibility/search.php
• Red Hat Enterprise Linux
− https://catalog.redhat.com/hardware
• SUSE
− https://www.suse.com/yessearch/
• Ubuntu/Canonical
− https://ubuntu.com/certified
Lifecycle Controller
The first common method is using the Lifecycle Controller for the supported
operating system.
1. Press F10 during boot to enter the Lifecycle Controller (LCC).
Manual Installation
1. Insert the Operating system installation media and press F11 for the BIOS
Boot Manager.
2. If using the iDRAC Virtual Media, select the Virtual Optical drive.
4. Once the operating system installation is complete, check for missing drivers.
Driver Installation
Correct drivers are required for the server to function. After a manual installation,
administrators must install the appropriate drivers. The commonly used steps are:
2. Click Install.
Virtual Memory
Diagram: virtual memory management on a PowerEdge server. The operating
system translates virtual addresses to physical addresses, and unused files are
transferred from RAM to the virtual memory on disk.
Virtual memory provides storage for files that are not currently in use, freeing
RAM for the server's active workloads. However, the entire storage capacity is not
available for use as virtual memory. The size of virtual storage is limited by the
amount of secondary memory available, which can run to gigabytes.
Virtual memory also maps virtual addresses to physical addresses and plays a
role in the server's OS installation and deployment.
Virtual memory provides:
• Intrinsic memory protection: the operating system controls the access rights to
the memory that is not allocated for process use.
• Code reuse through shared libraries: reliable existing code is implemented for
new software.
• Reduced program load time: the system moves the program into the system
memory for processing.
• Zero-copy OS operations: the CPU does not copy data from one memory area to
disk.
• Connection of process address spaces to pages: pages are the data files moved
between the RAM and the hard disk.
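The virtual-to-physical address mapping described above operates on fixed-size pages. A minimal sketch of the translation, assuming 4 KiB pages and a toy single-level page table (real memory management units use multi-level tables and hardware lookaside buffers):

```python
PAGE_SIZE = 4096  # 4 KiB pages; the offset occupies the low 12 bits

def translate(virtual_addr, page_table):
    """Map a virtual address to a physical address via a page table."""
    page_number = virtual_addr // PAGE_SIZE   # which virtual page
    offset = virtual_addr % PAGE_SIZE         # position within the page
    frame = page_table[page_number]           # physical frame for that page
    return frame * PAGE_SIZE + offset

# Toy page table: virtual page 0 maps to frame 5, page 1 to frame 2.
table = {0: 5, 1: 2}
print(translate(4100, table))  # virtual page 1, offset 4 -> 2*4096 + 4 = 8196
```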
Virtual Media
To review the tasks and steps involved in completing the Virtual Media simulation
job aid, download the job aid document from the on-demand resources section. Or
click the Virtual Media Job Aid link to review the tasks and steps online.
The training topics below support the concepts and features that are discussed in
this training. Click the provided links for more information.
• Dell EMC PowerEdge RAID Controller S140 User’s Guide
− The Dell EMC PowerEdge RAID Controller (PERC) S140 is a software RAID
solution for the Dell EMC PowerEdge systems.
• Dell EMC PowerEdge RAID Controller S150 User’s Guide
− The Dell EMC PowerEdge RAID Controller (PERC) S150 is a Software
RAID solution for the Dell EMC PowerEdge systems.
• Dell EMC Enterprise Operating Systems
− Dell EMC collaborates extensively with Microsoft and Linux vendors to ensure
the consistent, reliable performance of Microsoft and Linux (Red Hat, SUSE,
and Ubuntu) operating systems running on Dell EMC PowerEdge servers.
PowerEdge
(C) - Classroom
(VC) - Virtual Classroom
(ODC) - On Demand Course
2S
Two socket form factor. Used to identify the family of servers. PowerEdge servers
can have 1S, 2S, or 4S. See the PowerEdge rack server portfolio page for details.
AI
Artificial Intelligence (AI) is the designing and building of intelligent agents that
receive percepts from the environment and act to affect that environment.
BOSS
Dell Technologies Boot Optimized Storage Solution. A RAID solution card that is
designed for booting a server's operating system.
DIMM
Dual In-line Memory Module. DIMMs are available in varying capacities. All
DIMMs in a channel must have the same capacity.
DL
Deep Learning (DL) is a form of Machine Learning which uses Artificial Neural
Networks.
DRAM
x4, x8, and x16 DIMMs refers to the width of the DRAM components on a memory
module. x4 DIMMs use DRAM components that have a 4-bit data width. x8 DIMMs
use components with an 8-bit data width. x16 DIMMs use components with a 16-bit
data width.
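The component width determines how many DRAM chips are needed to fill one 64-bit rank. A quick sketch (covering only the 64-bit data path; ECC DIMMs add extra bits and therefore extra components):

```python
DATA_BUS_BITS = 64  # data width of one rank on a standard (non-ECC) DIMM

def components_per_rank(dram_width_bits, bus_bits=DATA_BUS_BITS):
    """Number of DRAM components needed to fill one rank of the data bus."""
    return bus_bits // dram_width_bits

for width in (4, 8, 16):
    print(f"x{width}: {components_per_rank(width)} DRAM components per rank")
```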
HCI
Hyper Converged infrastructure (HCI) combines compute, virtualization, storage,
and networking in a single cluster.
HPC
High performance computing (HPC) is the ability to process data and perform
complex calculations at high speeds.
HW RAID
Form of RAID. The motherboard or a separate RAID card handles the processing.
iDRAC
The Integrated Dell Remote Access Controller (iDRAC) is designed for secure local
and remote server management and helps IT administrators deploy, update, and
monitor PowerEdge servers.
IDSDM
Redundant SD-card module for embedded hypervisors. PowerEdge servers can
boot to the hypervisor out-of-the-box. The embedded hypervisor is mirrored across
dual SD cards using an integrated hardware controller.
IEEE 802.3
The Institute of Electrical and Electronics Engineers (IEEE) 802.3 is a collection of
IEEE standards, set by the working group that defines the physical layer and the
Media Access Control (MAC) portion of the Data Link Layer for Ethernet.
IoT
The Internet of things (IoT) describes the network of physical objects such as
sensors, software, and other technologies for the purpose of connecting and
exchanging data with other devices and systems over the Internet. (Wikipedia)
LRDIMM
Load-Reduced DIMM. Has higher densities than RDIMMs. Uses a memory buffer
chip to reduce the load on the server memory bus.
ML
Machine Learning (ML) is an application of AI where systems use data to learn how
to respond, rather than being explicitly programmed.
MT/s
Mega-Transfers per Second (MT/s). Measurement of bus and channel speed in
millions of cycles per second.
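MT/s converts to peak bandwidth by multiplying the transfer rate by the bytes moved per transfer. A quick illustration (the DDR4-3200 figures are an assumed example):

```python
def peak_bandwidth_mb_s(mt_per_s, bus_width_bits=64):
    """Peak transfer rate: mega-transfers per second times bytes per transfer."""
    return mt_per_s * (bus_width_bits // 8)

# A DDR4-3200 DIMM performs 3200 mega-transfers/s over a 64-bit (8-byte) bus:
print(peak_bandwidth_mb_s(3200))  # 25600 MB/s
```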
Multicasting
Multicasting involves sending the same message to many endpoints such as in a
video conferencing facility.
NAND
NAND is a non-volatile memory designed to retain stored data even when powered
off.
NVDIMM
Non-Volatile DIMM
NVMe
Non-Volatile Memory Express (NVMe). Communications interface for PCIe-based
SSDs. Used to increase efficiency and performance.
OCP
Open Compute Project (OCP) is an organization that shares designs of data center
products and best practices among companies. OCP designs and projects include
server designs, data storage, rack designs, and open networking switches. Read
more information about the organization by going to www.opencompute.org.
PCH
Platform controller hub (PCH) controls certain data paths and support functions
used in conjunction with Intel CPUs.
PCIe
Peripheral component interconnect express (PCIe) is an interface standard for
connecting high-speed components.
PERC
PowerEdge RAID Controller (PERC). A family of controllers that enhances
performance, increases reliability, adds fault tolerance, and simplifies
management.
RAID
Redundant Array of Independent Disks (RAID). RAID controllers combine multiple
physical hard drives into one or more virtual drives to improve data efficiency and
protection.
SAS
SAS (serial-attached SCSI) is a type of SCSI that uses serial signals to transfer
data, instructions, and information. SAS drives are dual ported.
SATA
SATA (Serial Advanced Technology Attachment) uses serial signals to transfer
data, instructions, and information. SATA drives have only a single port.
SDS
Storage data services such as APEX Data Storage Services. APEX is an as-a-
Service portfolio of scalable and elastic storage resources. The storage as-a-
Service model simplifies the storage process.
SNAP I/O
Balances I/O performance. CPUs share one adapter, which prevents data from
traversing the inter-processor link when accessing remote memory.
SP
A service provider (SP) is a company that provides its subscribers access to the
internet.
STP cable
Shielded Twisted Pair (STP) Ethernet cable that is commonly used for high-speed
networks. A metallic shield protects the cable, and an additional metal foil wraps
each set of twisted wire pairs.
UEFI boot
Unified Extensible Firmware Interface (UEFI). UEFI secure boot prevents systems
from booting from unsigned or unauthorized preboot device firmware, applications,
and operating system boot loaders. Without secure boot enabled, systems are
vulnerable to malware corrupting the startup process. UEFI is a firmware interface
that connects the firmware to the operating system. UEFI initializes the hardware
components and starts the operating system.
UTP cable
Unshielded Twisted Pair (UTP) Ethernet cable that is commonly used between a
system and wall. It is also used for desktop communication applications.
VM
A Virtual Machine (VM) is a software-defined computer system that emulates an
actual computer, including operating system and applications.
vSAN
According to VMware: Virtual Storage Area Network (vSAN) is a software-defined,
enterprise storage solution that integrates virtual machines (VMs) and containers
to support hyper-converged infrastructure (HCI) systems.