Installation Checklist for the ORACLE® PRIVATE CLOUD APPLIANCE
(PCA) X9-2 / 3.0.1
Revision History
1.1  02 March 2022  Updated the CN Field Installation Checklist as well as the Rack Rules in Appendix A.
1.2  15 March 2022  Added workaround for NTP server problem, MySQL server credential update, spelling fixes.
1.4  16 June 2022   Updated Site Requirements and Network Requirements to reflect feedback from the field. Reordered the section on CN provisioning to better support patching.
1.5  15 July 2022   Added link to the MOS article for those who don't have Confluence access. Added Appendix G: Firewall Ports.
Customer:
Task Number:
Technician:
Date:
Overview
This document is provided as guidance to Oracle Field Personnel who will be installing the Private Cloud Appliance version 3.0.1. The EIS checklist provides a framework for referencing the Private Cloud Appliance 3.0.1 Installation Guide, but also includes important internal-only content taken from various engineering specifications. Please be aware that while we strongly recommend installation services, the PCA X9-2 3.0.1 is customer installable. The customer should reference the PCA X9-2 3.0.1 Installation Guide. Please do NOT hand this checklist over to the customer.
Be sure to check the EIS web page for the latest versions prior to commencing the installation.
https://eis.us.oracle.com/checklists/
The purpose of this checklist is to help the installer achieve a "good" installation.
Installers must have attended the appropriate training classes. EIS checklists are not a replacement for proper training.
Use of a laptop is required.
Feedback on issues with EIS content or product quality is welcome. Oracle staff should enter comments in the comments section of the following Confluence page:
Partners should contact the PartnerHelp Portal for assistance and feedback.
Table of Contents
Installation Checklist for the ORACLE® PRIVATE CLOUD APPLIANCE (PCA) X9-2 / 3.0.1
Overview
Table of Contents
Opening an SR and getting support
Glossary
Preparation Before Going on Site
Site Requirements
MOS Requirements
Patch Requirements
Network Requirements
Connected Services Requirements
Plan for installing CN's in the field
Install Rack
Unpack and move into place
Install field installable CN's
Re-route PDU cables if necessary
Move Rack into place
Connect to Customer Networking Infrastructure
Power On for the First Time
Verify ZS Appliance is available and healthy
Verify Management Node ILOM Configuration
Boot and Verify the Management Node Cluster
Day 0 Configuration Prechecks
Day 0 Configuration
First Time Access to the Service Enclave
Connecting to ASR
Verify Health
Provision CN's
Software Patch/Upgrade
Verify local yum repository
Prepare PCA for Patching/Upgrade
Assess the Patches to be installed
Initiate patch/upgrade process
Verify patch/upgrade completed successfully
Install and Power on Field Installable Compute Nodes
Connect to Platinum Services
Change Your default passwords on all components
Install Complete
Appendix A: Rack Constraints
Rack Rules
Rack Elevations
Appendix B: Data Switch Cabling Reference
Cable Type and Part #'s
Data Switch Connection Reference
Appendix C: Management Switch Cabling Reference
Cable Type and Part #'s
Management Switch Connection Reference
Appendix D: ZS Appliance Cluster Cabling Reference
Appendix E: Power Scheme Reference
15KVA (Single and Three Phase)
Storage Enclosure Power Cabling
Compute Node and Switch Power Cabling
22KVA Single Phase
Storage Enclosure Power Cabling
Compute Node and Switch Power Cabling
24KVA Three Phase
Storage Enclosure Power Cabling
Compute Node and Switch Power Cabling
Appendix F: Default Logins and Passwords
Appendix G: Firewall Ports
Opening an SR and getting support
1. Contact the HUB referencing the Installation Service Request and ask to have a new Technical Service Request created.
   a) GCH Handling Callbacks on Existing Technical SRs : GCSGCH (Doc ID 1803749.1)
   b) If the FE is running into problems, they can ask for the Oncall Duty Manager for assistance.
2. Regarding the process, the IC will need to create a technical SR and not a collab (to avoid the chance of routing to the wrong group).
3. Recommend the ICs follow the same process the HUB engineer would follow in Doc ID 1803749.1, starting at task 55, and then relate the Install SR with the Technical SR as a backup solution.
Glossary
CN: Compute Node.
MN: Management Node. One of three servers configured in a cluster (pcamn01, pcamn02, pcamn03).
Flex Bay: A grouping of 4 RU slots that are configurable by the customer as Storage or Compute. A PCA supports 4 Flex Bays.
Rule of Three: The number of CN's must be a multiple of three.
Day 0: The wizard that walks the user through the initial setup.
Preparation Before Going on Site
Site Requirements
Work through the following checklists from the Install Guide:
7.1 System Components Checklist
7.2 Data Center Room Checklist
7.3 Data Center Environmental Checklist
7.4 Access Route Checklist
7.5 Facility Power Checklist
7.6 Safety Checklist
7.7 Logistics Checklist
Prior to going on site, it may be useful to download or print out any required reference material such as the Installation Guide, KM documents, run books, etc. that are referenced throughout this document. Reference: [PCA 3.0.x] Day0 Pre_Checks and Post_Checks (Doc ID 2859427.1)
How is power delivered to the rack in the data center? From a trough above the rack or below through the floor?
NOTE: Currently, by default, PDU cables are routed down through the bottom of the rack. If the PDU cables need to
be routed up through the top of the rack, due to the density of the rack, please plan on an hour of work to re-route
the cables. We advise this should be done with two people.
Installing a Bastion Server in the PCA rack is supported by exception only, and is therefore not covered in the EIS PCA engineering documentation.
MOS Requirements
Confirm customer MOS access and account settings. Reference: Doc ID 1329200.1 (2 hours)
The asset must be in a CSI and the customer must have administrative access to the asset. This will be needed for both the local YUM repository and ASR activation. Reference: CSI Administration
If the customer already has a local yum server configured and running, they can simply add the CSI to their account at linux.oracle.com and add the PCA channels to that local yum server. If not, they will need to create an OL instance, either on bare metal or a VM, external to the PCA. Follow the standard documentation on setting up a local yum server. OL7 is recommended, as the uln-yum-mirror RPM for OL8 is not functional at the time this is written. Reference: Setting up a local ULN mirror
Dark or secure sites typically use a system that bridges between a secure internal network and the internet; as above, the server can be disconnected from the internet (shut down the external interface, bring up the internal interface) when client PCA(s) on the secure network need to be updated. (1 day)
Note: There are also other methods to get patching done, such as manually copying the contents of the patching directory to a system on the network, running a simple HTTP server (e.g. "python -m SimpleHTTPServer 8000") from a directory with the correct permissions, and then pointing the client PCA to it.
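A minimal sketch of that approach (the directory path is hypothetical; on systems with only Python 3, the equivalent built-in server is "python3 -m http.server 8000"):

cd /srv/pca-patches                  # hypothetical directory holding the copied patch content
python -m SimpleHTTPServer 8000      # Python 2 built-in HTTP server, serves the current directory

Then point the client PCA's repository configuration at http://<server>:8000/.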
There are likely to be five channels for the purpose of patching the PCA:
PCA 3.0.1 MN
During the install, connectivity and access are REQUIRED to be from an FE laptop. (10 min)
An in-rack bastion is supported by exception only; Engineering will need to assess power usage. Reference: PCA Engineering Exception process
Will the customer connect to Platinum Services? TBD: contact Product Management. Reference: Platinum Support
The factory only installs CN's based on the rule of 3. Any additional CN's are to be installed in the field. Reference: CN Field Install Checklist.xlsx (link to download from MOS) (1 hour)
PDU type dictates RU usage and the number of supported components. References: Appendix E: Power Scheme Reference; Appendix C: Management Switch Cabling Reference
Install Rack
Unpack and move into place
Note: Please ensure any local, state, federal, and country rules and regulations are followed in an appropriate manner.
Please follow the instructions printed on either end of the cardboard carton for removing the shrink wrap, banding, and cardboard. Note and follow the seven steps for removing the rack from the pallet shown on labels on either ramp attached to the pallet. (30 min)
Install field installable CN's
If power and network cabling is required, install the cabling and server BEFORE moving the rack into the data center.
NOTE: Do NOT connect power cables at this time. Power cables will be connected after Day 0 configuration is complete.
If power and network cabling is NOT required, the Compute Node can be added to the Flex Bay at any time.
Re-route PDU cables if necessary (1 hour for 2 people)
At the top of the PDU, remove the two outside torx screws from the PDU end plate.
At the bottom, unscrew the lower torx screws that connect the bottom bracket to the rack rail so it is free from the rack.
Remove the PDU cables from the shipping brackets mid rack.
Feed the cables (roughly 6 feet of length) through the side rails so that the entire cable is outside the right side of the rack.
Release the velcro straps anchoring the PDU cables to the lower half of the PDU's.
Velcro the PDU cables to the anchor points on the upper half of the PDU's.
Connect a Laptop for Initial Access to the PCA (15 min)
To gain initial access to the Oracle Private Cloud Appliance Dashboard, you must connect directly to the Cisco Nexus 9348GC-FXP management switch. FE laptops should connect to Port 1. Customer Bastion or workstations should connect to Port 2.
Connect an FE laptop to the Ethernet cable in Port 1 with the following IP address and netmask: 100.96.3.253/22.
NOTE: This differs from the Install Guide in that it allows access to the subnet ranges 100.96.0.0 - 100.96.1.255 and 100.96.2.0 - 100.96.3.255, which provides access to the component ILOM ports. The X9 Install Guide does not address use of Port 1 (the Service Port), nor does it address setting up the correct IP address and netmask to be able to access the ILOM's.
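On a Linux FE laptop this can be set with iproute2 (a sketch; the interface name eth0 is an assumption, substitute your wired interface):

ip addr add 100.96.3.253/22 dev eth0    # assign the service IP and /22 netmask
ip link set eth0 up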
To connect a customer provided Bastion or workstation to Port 2, use the following IP address and netmask: 100.96.3.254/23. Reference: Install Guide Section 5.3 (15 min)
Note: this only provides access to the subnet range 100.96.2.0 - 100.96.3.255. To access ILOM ports you will need to log into the Management Node, then ssh to the ILOM ports.
Log in = root/Welcome1
Caution
You will likely see the host names of the storage heads show up as 'sn01AKxxxxxxxx' and 'sn02AKxxxxxxxx'. While this is inconsistent depending on which interface you're viewing, it is expected.
Run a basic status command to verify general health. The output is not important. We are only looking to see that the command is
responsive. This will show that the ZFSSA management software (akd) is alive and responsive.
ssh root@100.96.2.4
Password:
Warning: Permanently added '100.96.2.4' (ECDSA) to the list of known hosts.
Last login: Thu Dec 16 05:13:22 2021 from 100.96.2.34
Children:
resources => Configure resources
sn0XXXXXXXXXXX:configuration cluster>
If the cluster status is something other than the status above, issue a failback. The failback should take about 30 to 60 seconds and then return the prompt. Check the status again to verify the CLUSTERED / CLUSTERED status. If the verification is correct, then ZFS verification is complete.
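A sketch of the failback sequence from the ZFSSA CLI (the prompt shows your appliance serial; failback is issued from the cluster context):

sn0XXXXXXXXXXX:> configuration cluster
sn0XXXXXXXXXXX:configuration cluster> failback
sn0XXXXXXXXXXX:configuration cluster> show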
sn0XXXXXXXXXXX:> exit
Log in = root/Welcome1
100.96.0.33 ilom-pcamn01
100.96.0.34 ilom-pcamn02
100.96.0.35 ilom-pcamn03
Verify each Management Node's Key Identity Properties using "show /System". The critical attributes are shown in the sample output below.
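From each Management Node's ILOM:

-> show /System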
/System
Targets:
Open_Problems (0)
Processors
Memory
Power
Cooling
Storage
Networking
PCI_Devices
Firmware
BIOS
Log
Properties:
health = OK
health_details = -
open_problems_count = 0
type = Rack Mount
model = PCA X9-2 Base
qpart_id = Q13719
part_number = 7603900
serial_number = AK00842951
rfid_serial_number = 341A583DE58000000007E354
component_model = ORACLE SERVER X9-2
component_part_number = 8209083 PCA X9-2 MN
component_serial_number = 2139XLD01B
chassis_model = ORACLE SERVER X9-2
chassis_part_number = 8209083
chassis_serial_number = 2139XLD01B
system_identifier = Oracle Private Cloud Appliance X9-2 AKxxxxxxxx
system_fw_version = 5.0.2.20.a
primary_operating_system = Not Available
primary_operating_system_detail = Comprehensive System monitoring is not
available. Ensure the host is running
with the Hardware Management Pack. For
details go to
http://www.oracle.com/goto/ilom-redirect
/hmp
host_primary_mac_address = a8:69:8c:0a:2a:30
ilom_address = 100.96.0.33
ilom_mac_address = A8:69:8C:0A:2A:33
locator_indicator = Off
power_state = On
actual_power_consumption = 357 watts
action = (Cannot show property)
Commands:
cd
reset
set
show
start
stop
Verify and adjust each management node's ILOM time.
Caution
At this point there is no NTP or automated time sync process. Please check the time in each Management Node's ILOM. The best practice is to get the times as close as possible. Once Day 0 is complete, the time will resync based on the NTP server in the customer's network.
Where MMDDhhmmYYYY is the month, day, hour, and minute, each as two digits, and the year as four digits. timezone is the 3 or 4 character alphanumeric string representing the time zone; use UTC.
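A sketch of the corresponding ILOM commands (standard /SP/clock syntax; the datetime value shown is only an example):

-> set /SP/clock datetime=020919292022
-> set /SP/clock timezone=UTC
-> show /SP/clock
/SP/clock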
Properties:
datetime = Wed Feb 9 19:29:42 2022
timezone = GMT (GMT)
uptime = 38 days, 15:13:03
usentpserver = enabled
Commands:
cd
set
show
->
Be aware that the three management nodes boot into a cluster. The best practice is to boot "pcamn01" first, wait a minute or two, then power on both "pcamn02" and "pcamn03". The Management Nodes can be brought up in two ways:
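For example, one way is from each node's ILOM (a sketch using the standard ILOM CLI):

-> start /System
Are you sure you want to start /System (y/n)? y
Starting /System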
Please wait for System Login Prompt for all three nodes before continuing.
pcamn01 login:
From the Service laptop / workstation connected to the Cisco Switch, log into the MN VIP (100.96.2.32) and verify Management
Node / Cluster Health:
login/password = root/Welcome1
Verify there are 3 nodes configured, all are online and all 'Resource Groups' are 'Started'.
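The cluster state can be dumped from the pacemaker CLI (a sketch; the corosync/pacemaker/pcsd daemons shown below indicate a pcs-managed cluster):

[root@pcamn01 ~]# pcs status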
3 nodes configured
11 resource instances configured (1 DISABLED)
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
Verify each management node (by IP address) is a member of the cluster with a status of 'joined'.
This step is needed to allow the components of the PCA to use the NTP servers on the management nodes. If this
step is skipped, the management nodes will not allow NTP requests despite the clients being properly configured.
[root@pcamn02 ~]# for f in pcamn01 pcamn02 pcamn03; do echo $f; ssh $f "grep -qxF 'allow 100.96.0.0/22' /etc/chrony.conf || echo 'allow 100.96.0.0/22' >> /etc/chrony.conf; systemctl restart chronyd.service"; done
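To spot-check that a client can now sync against the management nodes (a sketch, assuming chrony on the client side as well):

[root@pcacn001 ~]# chronyc sources    # the MN NTP servers should be listed and reachable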
Caution
The Primary MN of the cluster will have capabilities the other MN's do not. Be sure you are connected to the Primary MN unless instructed otherwise. Reference: Connecting to the MN VIP
Caution
Following the checks in the above KM is a critical step to make sure the installation is successful.
ilom-pcacn001
ilom-pcacn002
ilom-pcacn003
pcacn001
pcacn002
pcacn003
The status of the compute nodes will transition as the discovery process progresses, until they get to 'ok', 'Ready_to_provision'.
Note
It can take a significant amount of time for the discovery process to complete, depending on the number of compute nodes in the rack being installed. The general rules are:
The discovery process and the Day 0 process can run in parallel. Therefore, move on to the Day 0 process; do NOT wait for the CN discovery process to complete.
[root@pcamn01 ~]# python3 /usr/lib/python3.6/site-packages/pca_foundation/server/api/test_hardware.py 253.255.0.31 list
Querying: https://253.255.0.31:8000/hardware?action=list
list :
[1, 'A8:69:8C:0B:6D:D3', '100.96.0.2', 'ilom-pcasn01', '', 'zfs-ilom',
'5.0.2.23', 'root', 1, 'ilom-AK00842951', 'OK', 'ignore', '3', None, None, None,
None, '100.96.2.2']
[2, 'A8:69:8C:0A:4D:0B', '100.96.0.3', 'ilom-pcasn02', '', 'zfs-ilom',
'5.0.2.23', 'root', -1, 'ilom-AK00842951', 'OK', 'ignore', '4', None, None, None,
None, '100.96.2.3']
[3, '3c:fd:fe:87:72:ca', '100.96.2.2', 'pcasn01', '', 'zfs', 'ak/SUNW,
maguroZ9@2013.06.05.8.40,1-2.40.4958.2', 'root', 1, 'AK00842951', 'On', 'Ready',
'1', '1', None, None, None, '100.96.0.2']
[4, '3c:fd:fe:92:27:72', '100.96.2.3', 'pcasn02', '', 'zfs', 'ak/SUNW,
maguroZ9@2013.06.05.8.40,1-2.40.4958.2', 'root', 1, 'AK00842951', 'On', 'Ready',
'2', 'None', None, None, None, '100.96.0.3']
[5, 'A8:69:8C:0A:2A:33', '100.96.0.33', 'ilom-pcamn01', '', 'mgmt-ilom', '',
'root', 1, 'ilom-AK00842951', 'OK', 'ignore', '13', None, None, None, None,
'100.96.2.33']
[6, 'A8:69:8C:0A:B8:7B', '100.96.0.34', 'ilom-pcamn02', '', 'mgmt-ilom', '',
'root', 1, 'ilom-AK00842951', 'OK', 'ignore', '14', None, None, None, None,
'100.96.2.34']
[7, 'A8:69:8C:15:81:5F', '100.96.0.35', 'ilom-pcamn03', '', 'mgmt-ilom', '',
'root', 1, 'ilom-AK00842951', 'OK', 'ignore', '15', None, None, None, None,
'100.96.2.35']
[8, '54:9f:c6:0d:df:a7', '100.96.2.1', 'pcaswmn01', '', 'switch-mgmt', '9.3(2)',
'admin', 1, 'FDO24451GK3', 'On', 'Ready', None, '26', None, None, None, None]
[9, 'bc:d2:95:a6:cb:74', '100.96.2.20', 'pcaswsp01', '', 'switch-spine', '9.3
(2)', 'admin', 1, 'FLM251507N9', 'On', 'Ready', None, '31', None, None, None, None]
[10, '34:73:2d:03:32:08', '100.96.2.21', 'pcaswsp02', '', 'switch-spine', '9.3
(2)', 'admin', 1, 'FLM251507N1', 'On', 'Ready', None, '32', None, None, None, None]
[11, '4c:5d:3c:40:bf:20', '100.96.2.22', 'pcaswlf01', '', 'switch-leaf', '9.3
(2)', 'admin', 1, 'FLM251503A7', 'On', 'Ready', None, '24', None, None, None, None]
[12, '34:73:2d:03:35:b0', '100.96.2.23', 'pcaswlf02', '', 'switch-leaf', '9.3
(2)', 'admin', 1, 'FLM251507MN', 'On', 'Ready', None, '25', None, None, None, None]
[13, 'a8:69:8c:0a:2a:30', '100.96.2.33', 'pcamn01', '', 'mgmt', '3.0.1', 'root',
1, 'AK00842951', 'On', 'ignore', '5', '5', None, None, None, '100.96.0.33']
[14, 'a8:69:8c:0a:b8:78', '100.96.2.34', 'pcamn02', '', 'mgmt', '3.0.1', 'root',
1, 'AK00842951', 'On', 'ignore', '6', '6', None, None, None, '100.96.0.34']
[15, 'a8:69:8c:15:81:5c', '100.96.2.35', 'pcamn03', '', 'mgmt', '3.0.1', 'root',
1, 'AK00842951', 'On', 'ignore', '7', '7', None, None, None, '100.96.0.35']
[16, 'a8:69:8c:15:61:2f', '100.96.0.64', 'ilom-pcacn001', '', 'compute-ilom',
'5.0.2.20.a', 'root', 1, 'ilom-2139XLD01R', 'OK', 'Ready_to_provision', '21', None,
None, None, None, '100.96.2.64']
[17, 'a8:69:8c:15:82:2f', '100.96.0.65', 'ilom-pcacn002', '', 'compute-ilom',
'5.0.2.20.a', 'root', 1, 'ilom-2139XLD01K', 'OK', 'Ready_to_provision', '22', None,
None, None, None, '100.96.2.65']
[18, '00:0b:38:be:22:34', '100.96.1.243', '', '', '', '', '', '', '', '',
'ignore', None, None, None, None, None, None]
[19, '00:0b:38:be:22:35', '100.96.1.244', '', '', '', '', '', '', '', '',
'ignore', None, None, None, None, None, None]
[20, 'a8:69:8c:15:82:3b', '100.96.0.66', 'ilom-pcacn003', '', 'compute-ilom',
'5.0.2.20.a', 'root', 1, 'ilom-2139XLD01P', 'OK', 'Ready_to_provision', '23', None,
None, None, None, '100.96.2.66']
[21, 'a8:69:8c:15:61:2c', '100.96.2.64', 'pcacn001', '', 'compute', 'PCA
Hypervisor:3.0.1-b526', 'root', 1, '2139XLD01R', 'On', 'ignore', '16', '10', 'b8:ce:
f6:96:ea:0c', '64', '1024', '100.96.0.64']
[22, 'a8:69:8c:15:82:2c', '100.96.2.65', 'pcacn002', '', 'compute', 'PCA
Hypervisor:3.0.1-b526', 'root', 1, '2139XLD01K', 'On', 'ignore', '17', '9', 'b8:ce:
f6:96:ea:7c', '64', '1024', '100.96.0.65']
[23, 'a8:69:8c:15:82:38', '100.96.2.66', 'pcacn003', '', 'compute', 'PCA
Hypervisor:3.0.1-b526', 'root', 1, '2139XLD01P', 'On', 'ignore', '20', '8', 'b8:ce:
f6:3e:68:7e', '64', '1024', '100.96.0.66']
The following data points are immutable values that cannot be edited or corrected after the Day 0 process
is committed. Please ensure these values are exactly correct according to customer expectations:
availability_domain
domain_name
System Name
fault_domain
realm
region
routing type (dynamic or static)
Public IP's - You may add to the list, but you cannot modify entries already committed.
At this point you will need the completed PCA-X9-Network-Configuration Worksheet that was filled out by the Customer. Reference: Install Guide, Section 5.1
From the Service laptop / workstation connected to the Cisco Switch, using a web browser, connect to the MN VIP to launch the Day 0 wizard. The Wizard will guide you through various interactive screens. Reference: Install Guide, Section 5.2
Note
This will bring you to the "Private Cloud Appliance First Boot" Screen where you will enter the customer admin
account credentials.
Warning
System name and Domain are immutable parameters. Once they are committed they cannot be changed.
Availability Domain
System Name
Domain
Rack Name
Description
Enter Routing information (15 minutes)
Warning
The "routing type" field is an immutable parameter. One it is committed it cannot be changed.
Static*
Uplink gateway IP Address*
Spine virtual IP* (comma-separated values if using the 4 port dynamic mesh topology)
Uplink VLAN
Uplink HSRP Group
Dynamic*
Peer1 IP*
Peer1 ASN*
Peer2 IP
Peer2 ASN
Uplink Gateway
Oracle ASN
BGP Topology
BGP KeepAlive Timer
BGP HoldDown Timer
Enable MD5 Authentication
Caution
Configure the customer's public IP address ranges ("public" meaning the customer's enterprise access IPs to the PCA system, not the internet).
https://adminconsole.pcasys1.example.com
Note
During the first login to the Service Enclave UI, you will be presented with the ASR Configuration screen. At this time,
it is recommended to configure ASR. If you choose to opt out of configuring ASR at this time you will be able to
configure ASR any time in the future.
To log into the Service Enclave CLI using the administrative account created in the Day 0 wizard, ssh to the PCA-ADMIN> shell.
Caution
Please be aware there are two pca-admin shells. The Service Enclave Administrator Account "PCA-ADMIN>" and
the root "(pca-admin)" shell. Each provides unique functionality.
Note
Username*: Enter your Oracle Single Sign On (SSO) credentials, which can be obtained from My Oracle
Support.
Password*: Enter the password for your SSO account.
Proxy Username: To use a proxy host, enter a username to access that host.
Proxy Password: To use a proxy host, enter the password to access that host.
Proxy Host: To use a proxy host, enter the name of that host.
Proxy Port: To use a proxy host, enter the port used to access the host.
Endpoint: Destination endpoint to which ASR telemetry will be sent. This will be either a "Direct Connection" or an "ASR Relay" such as a Platinum OASG or a standalone ASR Manager.
Alternatively, you may use the Service Enclave CLI to configure ASR:
PCA-ADMIN> showallcustomcmds
Operation Name: <Related Object(s)>
-----------------------------------
abort: Job
asrClientDisable: ASRPhonehome
asrClientEnable: ASRPhonehome
asrClientRegister: ASRPhonehome
asrClientSendTestMsg: ASRPhonehome
asrClientUnregister: ASRPhonehome
[...]
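For example, once ASR is registered, a test event can be sent with one of the commands listed above (a sketch; see the Admin Guide for any required parameters):

PCA-ADMIN> asrClientSendTestMsg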
Verify:
State = "ON"
Note
pcasn02 will show as 'not available', but it is healthy if the state is "on".
Log into Grafana by selecting "Monitoring" in the upper left of the Service Enclave UI
admin/Welcome1
In the Welcome to Grafana screen, select: Dashboards -> Manage -> PCA 3.0 Service Advisor -> Platform Health Check
To see the logs for "not healthy" services: Explore -> Loki -> Log Labels -> Jobs -> <select the service presented as not healthy in Platform Health Check>
https://grafana.pcasys1.us.example.com/
As an alternative method to verify health from the CLI, log into the management node VIP (root/Welcome1).
Note
Multiple commands are needed to capture data similar to that shown in the SEUI Rack Units screen. Please keep in mind the data is pulled from the identical underlying structures.
For example, https://adminconsole.pcasys1.example.com where pcasys1 is the name of your Oracle Private Cloud
Appliance and example.com is your domain.
3 min
Select "Rack Units" per CN
Select "Actions" button for CN you want to provision
Select "Provision"
NOTE: Allow all CN's to complete the provisioning process before moving on.
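Software Patch/Upgrade
Verify local yum repository
A quick check from a management node that the local repository is reachable and populated (a sketch; the package count varies by release):

[root@pcamn01 ~]# yum repolist

The last line of the output reports the total package count, for example: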
repolist: 1,674
Prepare PCA for Patching/Upgrade. Reference: Patching Guide, Chapter 2, Step 4
Configure the management nodes to receive yum updates from the local YUM repository. Reference: Patching Guide, Chapter 3, all steps
Verify you have permissions to perform patching operations, RPM's are available, and the PCA is ready to Patch/Upgrade. Reference: Patch Guide
Assess the Patches to be installed
In the event multiple patches are released, it is required to apply the patches in the following specific order:
1. Host
2. MySQL Cluster
3. Vault and ETCD (these can be done in either order but must be done consecutively)
4. Kubernetes Cluster
5. Platform
6. Compute
7. Any Firmware
For example, https://adminconsole.pcasys1.example.com where pcasys1 is the name of your Oracle Private
Cloud Appliance and example.com is your domain.
Caution
Do not change passwords on individual components. For example, do not change an ILOM password by logging into
the ILOM of a component. Always use the pca-admin shell.
To access the root pca-admin shell, type "pca-admin" at the root prompt. No password is required:
Documented commands (use 'help -v' for verbose/'help <topic>' for details):
===========================================================================
alias exit help macro quit
(pca-admin)
Change the password for the MySQL database by running the following from the active management node.
Install Complete
Ensure customer has access to all necessary resources https://www.oracle.com/assets/services-ovca-ds-1990356.pdf
Service Enclave
Customer Enclave
CLI access
Flex Bays:
Any Flex Bay can accommodate four compute nodes, two DE3-24P, or one DE3-24C
Unused Flex Bays are not cabled.
Components are installed in the following order, working from the bottom up: CN's, DE3-24P, then DE3-24C.
If one CN or one 24P is installed in a Flex Bay, the Flex Bay is committed and cabled for that component type (component types cannot be mixed).
The number of CN's installed in MFG is restricted to 3, 6, 9, 12, 15, 18, and 21 (the "rule of 3"). If the number of CN's ordered is different, the remaining CN's are installed in the field:
1) When the number of CN's ordered is 13, RU20 will need to be wired in the field
2) When the number of CN's ordered is 14, RU20 and RU21 will need to be wired in the field
3) When the number of CN's ordered is 17, RU34 will need to be wired in the field
Rack Elevations
Appendix B: Data Switch Cabling Reference
Cable Type and Part #'s
Data Switch Connection Reference
Appendix C: Management Switch Cabling Reference
Cable Type and Part #'s