Solution Architecture
Table Of Contents
Overview
Logical architecture
Network architecture
Greenplum architecture
Solution architecture
Overview
This section provides an overview of the components that are involved in this
solution from a physical and logical perspective. In this document, the Greenplum
solution is deployed on a Dell PowerFlex rack. First, a PowerFlex system with a
disaggregated (two-layer) architecture is deployed: Compute Only (CO) nodes
running the ESXi hypervisor provide compute and networking, and Storage Only
(SO) nodes run Red Hat Enterprise Linux 7.9. After PowerFlex is installed and
validated, Greenplum is installed on top of PowerFlex.
Logical architecture
The following figure illustrates the logical view of Greenplum on PowerFlex with
ten Storage Only (SO) nodes and 12 Compute Only (CO) nodes. Greenplum is
deployed on the CO nodes with one master instance and ten segment instances.
Figure 2. Logical architecture of Greenplum on PowerFlex
Each SO node is fully populated with ten 7.68 TB SAS SSD drives. From a
PowerFlex storage layout perspective, a single PowerFlex cluster with two protection
domains is used. Since each PowerFlex node is fully populated with ten disks,
these 100 disks are used to create four storage pools from which the various
volumes are created. The 12 CO nodes, which are the ESXi hosts, run the SDC
component that presents the PowerFlex volumes as ESXi datastores. Once the
datastores are available, the Greenplum VMs are created in vCenter with a single
master and ten segments.
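The storage arithmetic above can be sanity-checked with a short sketch. This is illustrative only: the even split of disks across the four storage pools is an assumption, and the usable-capacity figure ignores spare capacity and metadata overhead that a real PowerFlex cluster reserves.

```python
# Sanity-check the storage layout described above: 10 SO nodes, each with
# ten 7.68 TB SAS SSDs, forming four storage pools across two protection
# domains. PowerFlex keeps two copies of each volume (mesh mirroring), so
# usable capacity is roughly half of raw (overheads ignored here).

SO_NODES = 10
DRIVES_PER_NODE = 10
DRIVE_TB = 7.68
STORAGE_POOLS = 4

total_drives = SO_NODES * DRIVES_PER_NODE        # 100 disks in the cluster
raw_tb = total_drives * DRIVE_TB                 # raw capacity
usable_tb = raw_tb / 2                           # two-copy mirroring
drives_per_pool = total_drives // STORAGE_POOLS  # even split assumed

print(f"Drives: {total_drives}, raw: {raw_tb:.1f} TB, "
      f"usable (mirrored): {usable_tb:.1f} TB, "
      f"drives per pool: {drives_per_pool}")
```

With the figures from this document, this yields 768 TB raw and roughly 384 TB usable across 25 drives per pool.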
For more information about the detailed configuration of the SO nodes, CO nodes,
and Greenplum master and segment VMs, see Greenplum configuration.
Network architecture
The following figure shows the two-layer network architecture that is based on
PowerFlex best practices:
• Two Z9100 switches are configured with VLT to provide fault tolerance and
enable connectivity with other switches.
• Three dual-port 25 Gb Mellanox NICs on each server provide 6 x 25 Gb ports.
• On compute nodes, 2 x 25 Gb ports are NIC-teamed to provide high
availability, and another 2 x 25 Gb ports are used for the Greenplum
interconnect.
• A dedicated VLAN is configured to provide connectivity with the customer
network, a similar VLAN is dedicated to vMotion, and VLAN 105 is dedicated to
hypervisor (ESXi) management.
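The per-node port layout in the list above can be modeled as a quick consistency check. This is a hypothetical sketch: the document names the HA team and the Greenplum interconnect pair explicitly, while the role assigned here to the remaining pair (PowerFlex data traffic) is an assumption, as are the label names.

```python
# Model the compute-node port layout described above: three dual-port
# 25 Gb NICs give six ports per server. Two ports are NIC-teamed for high
# availability and two carry the Greenplum interconnect; the remaining
# pair is assumed (not stated in the document) to carry PowerFlex data.

NICS = 3
PORTS_PER_NIC = 2
PORT_GBPS = 25

ports = NICS * PORTS_PER_NIC  # 6 x 25 Gb ports per server

# Hypothetical allocation of the six ports on one compute node:
allocation = {
    "ha-team": 2,          # NIC-teamed pair for high availability
    "gp-interconnect": 2,  # Greenplum interconnect pair
    "powerflex-data": 2,   # remaining pair (assumed role)
}

assert sum(allocation.values()) == ports  # every port accounted for
interconnect_gbps = allocation["gp-interconnect"] * PORT_GBPS
print(f"{ports} ports/node, interconnect bandwidth: {interconnect_gbps} Gb/s")
```

The check confirms the six ports are fully allocated and that the interconnect pair provides 50 Gb of aggregate bandwidth per node.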
Figure 3. PowerFlex network architecture
Greenplum architecture
domain 01 and the other half are using protection domain 02. Their corresponding
mirrors are placed in the opposite protection domain. These mirrors act as an
additional level of data protection in case an entire protection domain goes down
in PowerFlex.
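The placement rule described above can be sketched as follows. This is an illustrative model of the idea, not PowerFlex internals: the volume names and the alternating assignment are hypothetical, and only the principle (primary in one protection domain, mirror in the other) comes from the document.

```python
# Sketch of the mirror placement described above: each volume's primary
# copy lives in one protection domain and its mirror in the other, so the
# loss of an entire domain still leaves one full copy of every volume.

DOMAINS = ("PD-01", "PD-02")

def place(volumes):
    """Alternate primaries across the two domains; mirrors go opposite."""
    layout = {}
    for i, vol in enumerate(volumes):
        primary = DOMAINS[i % 2]
        mirror = DOMAINS[(i + 1) % 2]
        layout[vol] = (primary, mirror)
    return layout

# Hypothetical volume names for the ten Greenplum segments:
layout = place([f"gp-seg-{n:02d}" for n in range(1, 11)])

# If PD-01 fails entirely, every volume still has a copy in PD-02:
survivors = {v for v, copies in layout.items() if "PD-02" in copies}
assert survivors == set(layout)
```

The final assertion demonstrates the failure case the document calls out: even with one protection domain completely down, no volume loses both of its copies.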