

It’s finally here: ODA patch 19.6 with EL7 and Oracle DB 19c (19.6.0.0.200114). In this blog post I will describe the steps I took and the issues I encountered, as this is a big change from EL6 → EL7 and a minor one from 18c to 19c.
1. Resources used
- Oracle Database Appliance X7-2S, X7-2M, and X7-2-HA Patches (patches for other hardware models can be found on the same page)
- Patching Instruction Oracle Database Appliance
- OS patch (mandatory first step); note that the documented command /opt/oracle/dcs/bin/odacli update-server -c os --local should be /opt/oracle/dcs/bin/odacli update-server -v 19.6.0.0.0 -c os --local
- server, storage, grid, db patch
- Known Issues with Oracle Database Appliance in This Release
- Read through the known issues to anticipate the changes you need to make. Lucky for me, we only migrated to ODA X7-2 HA 18.7 this year and applied the 18.8 patch recently, which is a prerequisite.
- ODA (Oracle Database Appliance): ODABR a System Backup/Restore Utility (Doc ID 2466177.1)
- ODABR is mandatory for the EL6 → EL7 upgrade, yet it doesn’t seem to be installed by default, and it only seems to be a requirement for bare metal (BM) deployments.
- Analyzing the Pre-Checks Report for Operating System Upgrades
- Reading through this before actually doing the OS pre-patch saves your life (well, your time).
- Errors when running ORAchk or the odacli create-prepatchreport command
- They added orachk to all the pre-patch reports, and needless to say there were a fair number of issues, but as described in the link above most of them can be ignored.
- /opt/oracle/dcs/log/dcs-agent.log (very useful, as odacli describe-job is pretty picky about the information it shows)
- How to Download and Run Oracle’s Database Pre-Upgrade Utility (Doc ID 884522.1)
2. Cleanup
The first step I can recommend is to make sure you have enough space on /opt (where patches and clones end up) and / (/tmp on the second node is used to stage the zip files before they are placed in the repository). I used the following commands to do this:
/opt
odacli cleanup-patchrepo -cl   # I don't need any of the clones anymore, nor any patches. Everything is applied and set up as it should be.
odacli cleanup-patchrepo -comp GI,DB -v 18.6.0.0.0
odacli cleanup-patchrepo -comp GI,DB -v 18.7.0.0.0
odacli cleanup-patchrepo -comp GI,DB -v 18.8.0.0.0
/ (/tmp)
find / -xdev -type f -size +100M -exec ls -lhtr {} \;
# Seems that /root/Extras is the main culprit in my case, as this is used by the ODA patch process and is not cleaned up.
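Before and after the cleanup I also like a quick look at the filesystems the patch actually touches. A trivial sketch (the mount points match my ODA layout; adjust to yours):

# /opt holds the unpacked patches and clones, /u01 the homes, / catches /tmp and /root
df -h / /opt /u01
du -sh /root/Extras 2>/dev/null   # leftover from a previous patch run, not cleaned up automatically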
3. Start of upgrade
3.1 OS Upgrade
# download the zipfiles from the note above and store them somewhere on your ODA (I used an NFS share because I have multiple ODAs to patch)
wget <zip 1 through 4>
unzip p31010832_196000_Linux-x86-64_1of4.zip
unzip p31010832_196000_Linux-x86-64_2of4.zip
unzip p31010832_196000_Linux-x86-64_3of4.zip
unzip p31010832_196000_Linux-x86-64_4of4.zip
💡 Tip: Before running /opt/oracle/dcs/bin/odacli create-prepatchreport -v 19.6.0.0.0 -os, I really advise going through Analyzing the Pre-Checks Report for Operating System Upgrades and ODA (Oracle Database Appliance): ODABR a System Backup/Restore Utility (Doc ID 2466177.1). This will save you a failed run: for one, I didn’t know about the scheduled jobs on the ODA, so they kind of threw me off, but they are easy to disable as mentioned in the document. As for the ODABR requirement, I would have expected it to be installed already if it was mandatory, but this was not the case, so I had to install it manually.
## upgrade OS EL6 to EL7
# connect to the ilom of node1
start /SP/console

/opt/oracle/dcs/bin/odacli update-repository -f /data/dump/oda-sm-19.6.0.0.0-200420-server1of4.zip,/data/dump/oda-sm-19.6.0.0.0-200420-server2of4.zip,/data/dump/oda-sm-19.6.0.0.0-200420-server3of4.zip,/data/dump/oda-sm-19.6.0.0.0-200420-server4of4.zip

odacli describe-job -i e5268ad1-3f0b-4ffe-9451-44abec1934cb

Job details
----------------------------------------------------------------
            ID:  e5268ad1-3f0b-4ffe-9451-44abec1934cb
   Description:  Repository Update
        Status:  Success
       Created:  May 1, 2020 11:15:40 PM CEST
       Message:  /data/dump/oda-sm-19.6.0.0.0-200420-server1of4.zip,/data/dump/oda-sm-19.6.0.0.0-200420-server2of4.zip,/data/dump/oda-sm-19.6.0.0.0-200420-server3of4.zip,/data/dump/oda-sm-19.6.0.0.0-200420-server4of4.zip

Task Name                    Start Time                     End Time                       Status
---------------------------  -----------------------------  -----------------------------  -------
Check AvailableSpace         May 1, 2020 11:15:40 PM CEST   May 1, 2020 11:15:40 PM CEST   Success
Setting up ssh equivalance   May 1, 2020 11:15:41 PM CEST   May 1, 2020 11:15:41 PM CEST   Success
Copy BundleFile              May 1, 2020 11:15:41 PM CEST   May 1, 2020 11:20:56 PM CEST   Success
Validating CopiedFile        May 1, 2020 11:20:57 PM CEST   May 1, 2020 11:21:35 PM CEST   Success
Unzip bundle                 May 1, 2020 11:21:35 PM CEST   May 1, 2020 11:43:26 PM CEST   Success
Unzip bundle                 May 1, 2020 11:43:26 PM CEST   May 1, 2020 11:45:34 PM CEST   Success
Delete PatchBundles          May 1, 2020 11:45:35 PM CEST   May 1, 2020 11:45:38 PM CEST   Success
Removing ssh keys            May 1, 2020 11:45:38 PM CEST   May 1, 2020 11:45:39 PM CEST   Success

/opt/oracle/dcs/bin/odacli update-dcsagent -v 19.6.0.0.0
odacli describe-job -i "ea84ceed-b744-4e71-b1e7-a6ac784b01de"

/opt/oracle/dcs/bin/odacli create-prepatchreport -v 19.6.0.0.0 -os
odacli describe-prepatchreport -i 5f26fb4d-8d34-44cf-b6db-c96b050065c1

# resolve issues with the prepatch report
vi /root/odaUpgrade_precheck2020..
# in my case I had some 3rd-party rpms to back up OS files, plus fuse-libs for dbfs (fuse-libs is included again in EL7)
yum remove fuse-libs ... <w/e is in the list>

# disable the schedules (there should be 4 of them to disable; they'll be re-enabled automatically after the upgrade)
odacli list-schedules
odacli update-schedule -d -i a230b958-652b-467c-9054-2a34078095a8
...

# install odabr on both nodes
# ODA (Oracle Database Appliance): ODABR a System Backup/Restore Utility (Doc ID 2466177.1)
[root@by3aaa ~]# rpm -Uvh /data/dump/odabr-2.0.1-55.noarch.rpm
warning: /data/dump/odabr-2.0.1-55.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e7004b4d: NOKEY
Preparing...                ########################################### [100%]
   1:odabr                  ########################################### [100%]
ODABR has been installed on /opt/odabr succesfully!

[root@by3aaa ~]# odacli describe-prepatchreport -i 3bb145de-57f3-480f-9d34-d0c6e583d6a4

Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  3bb145de-57f3-480f-9d34-d0c6e583d6a4
            Description:  Patch pre-checks for [OS]
                 Status:  SUCCESS
                Created:  May 2, 2020 2:05:25 AM CEST
                 Result:  All pre-checks succeeded

Node Name
---------------
by3aaa

Pre-Check                       Status   Comments
------------------------------  -------  --------------------------------------
__OS__
Validate supported versions     Success  Validated minimum supported versions.
Validate patching tag           Success  Validated patching tag: 19.6.0.0.0.
Is patch location available     Success  Patch location is available.
Validate if ODABR is installed  Success  Validated ODABR is installed
Validate if ODABR snapshots     Success  No ODABR snaps found on the node.
exist
Validate LVM free space         Success  LVM free space: 232(GB) on node:by3aaa
Space checks for OS upgrade     Success  Validated space checks.
Install OS upgrade software     Success  Extracted OS upgrade patches into
                                         /root/oda-upgrade. Do not remove this
                                         directory untill OS upgrade completes.
Verify OS upgrade by running    Success  Results stored in:
preupgrade checks                        '/root/preupgrade-results/preupg_results-200502021242.tar.gz'.
                                         Read complete report file
                                         '/root/preupgrade/result.html' before
                                         attempting OS upgrade.
Validate custom rpms installed  Success  No additional RPMs found installed on
                                         node:by3aaa.
Scheduled jobs check            Success  Verified scheduled jobs

Node Name
---------------
by3aab

Pre-Check                       Status   Comments
------------------------------  -------  --------------------------------------
__OS__
Validate supported versions     Success  Validated minimum supported versions.
Validate patching tag           Success  Validated patching tag: 19.6.0.0.0.
Is patch location available     Success  Patch location is available.
Validate if ODABR is installed  Success  Validated ODABR is installed
Validate if ODABR snapshots     Success  No ODABR snaps found on the node
exist
Validate LVM free space         Success  LVM free space: 232(GB) on node:by3aab
Space checks for OS upgrade     Success  Validated space checks.
Install OS upgrade software     Success  Extracted OS upgrade patches into
                                         /root/oda-upgrade. Do not remove this
                                         directory untill OS upgrade completes.
Verify OS upgrade by running    Success  Results stored in:
preupgrade checks                        '/root/preupgrade-results/preupg_results-200502021946.tar.gz'.
                                         Read complete report file
                                         '/root/preupgrade/result.html' before
                                         attempting OS upgrade.
Validate custom rpms installed  Success  No additional RPMs found installed on
                                         node:by3aab.
Scheduled jobs check            Success  Verified scheduled jobs

[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-server -v 19.6.0.0.0 -c os --local
# progress can be followed on the ilom screen; logfiles are /root/odaUpgrade2020...log and /root/debug_upgrade.log
# my upgrade took about 30 minutes, but the update-server command warns it can take up to 60 minutes depending on the platform:
****************************************************************************
* Depending on the hardware platform, the upgrade operation, including    *
* node reboot, may take 30-60 minutes to complete. Individual steps in    *
* the operation may not show progress messages for a while. Do not abort  *
* upgrade using ctrl-c or by rebooting the system.                        *
****************************************************************************

[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-server-postcheck -v 19.6.0.0.0

Upgrade post-check report
-------------------------

Node Name
---------------
by3aaa

Comp  Pre-Check                      Status   Comments
----  -----------------------------  -------  --------------------------------------
OS    OS upgrade check               SUCCESS  OS has been upgraded to OL7
GI    GI upgrade check               INFO     GI home needs to update to 19.6.0.0.200114
GI    GI status check                SUCCESS  Clusterware is running on the node
OS    ODABR snapshot                 WARNING  ODABR snapshot found. Run 'odabr delsnap' to delete.
RPM   Extra RPM check                SUCCESS  No extra RPMs found when OS was at OL6

Node Name
---------------
by3aab

Comp  Pre-Check                      Status   Comments
----  -----------------------------  -------  --------------------------------------
OS    OS upgrade check               ERROR    OS has not been upgraded to OL7

[root@by3aaa ~]# /opt/odabr/odabr delsnap
INFO: 2020-05-02 03:03:06: Please check the logfile '/opt/odabr/out/log/odabr_83304.log' for more details
INFO: 2020-05-02 03:03:06: Removing LVM snapshots
INFO: 2020-05-02 03:03:06: ...removing LVM snapshot for 'opt'
SUCCESS: 2020-05-02 03:03:06: ...snapshot for 'opt' removed successfully
INFO: 2020-05-02 03:03:06: ...removing LVM snapshot for 'u01'
SUCCESS: 2020-05-02 03:03:06: ...snapshot for 'u01' removed successfully
INFO: 2020-05-02 03:03:06: ...removing LVM snapshot for 'root'
SUCCESS: 2020-05-02 03:03:06: ...snapshot for 'root' removed successfully
SUCCESS: 2020-05-02 03:03:06: Remove LVM snapshots done successfully

[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-server-postcheck -v 19.6.0.0.0

Upgrade post-check report
-------------------------

Node Name
---------------
by3aaa

Comp  Pre-Check                      Status   Comments
----  -----------------------------  -------  --------------------------------------
OS    OS upgrade check               SUCCESS  OS has been upgraded to OL7
GI    GI upgrade check               INFO     GI home needs to update to 19.6.0.0.200114
GI    GI status check                SUCCESS  Clusterware is running on the node
OS    ODABR snapshot                 SUCCESS  No ODABR snapshots found
RPM   Extra RPM check                SUCCESS  No extra RPMs found when OS was at OL6

# now execute the same steps on node 2 (connect to the node 2 ilom)
[root@by3aab ~]# /opt/oracle/dcs/bin/odacli update-server -v 19.6.0.0.0 -c os --local
# progress can be followed on the ilom screen; logfiles are /root/odaUpgrade2020...log and /root/debug_upgrade.log
[root@by3aab ~]# /opt/oracle/dcs/bin/odacli update-server-postcheck -v 19.6.0.0.0
[root@by3aab ~]# /opt/odabr/odabr delsnap
[root@by3aab ~]# /opt/oracle/dcs/bin/odacli update-server-postcheck -v 19.6.0.0.0

Upgrade post-check report
-------------------------

Node Name
---------------
by3aaa

Comp  Pre-Check                      Status   Comments
----  -----------------------------  -------  --------------------------------------
OS    OS upgrade check               SUCCESS  OS has been upgraded to OL7
GI    GI upgrade check               INFO     GI home needs to update to 19.6.0.0.200114
GI    GI status check                SUCCESS  Clusterware is running on the node
OS    ODABR snapshot                 SUCCESS  No ODABR snapshots found
RPM   Extra RPM check                SUCCESS  No extra RPMs found when OS was at OL6

Node Name
---------------
by3aab

Comp  Pre-Check                      Status   Comments
----  -----------------------------  -------  --------------------------------------
OS    OS upgrade check               SUCCESS  OS has been upgraded to OL7
GI    GI upgrade check               INFO     GI home needs to update to 19.6.0.0.200114
GI    GI status check                SUCCESS  Clusterware is running on the node
OS    ODABR snapshot                 SUCCESS  No ODABR snapshots found
RPM   Extra RPM check                SUCCESS  No extra RPMs found when OS was at OL6
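Before moving on to the server patch, a few quick sanity checks are worth running on each node. A minimal sketch (the exact component output differs per platform, and I'm assuming the 19c grid path created by the patch):

cat /etc/oracle-release     # should now report an Oracle Linux 7 release
uname -r                    # should show the UEK kernel that ships with 19.6
odacli describe-component   # compare installed versions against the 19.6 bundle
/u01/app/19.0.0.0/grid/bin/crsctl check cluster -all   # clusterware up on both nodes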
3.2 ODA patch (server, storage, grid, db)
[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-dcsadmin -v 19.6.0.0.0
[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.6.0.0.0
[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli create-prepatchreport -s -v 19.6.0.0.0

# the following was needed after a failed prepatch (the dcs-agent log mentions
# /opt/oracle/dcs/oracle.ahf/orachk/SERVER/22ed5af2-f182-4dcf-a798-1dacc3afc003/orachk_by3aaa_050220_034524/orachk_by3aaa_050220_034524.html,
# which shows the orachk part; most of it can be ignored as shown here:
# https://docs.oracle.com/en/engineered-systems/oracle-database-appliance/19.6/cmtrn/issues-with-oda-odacli.html#GUID-A2001E37-28E0-4406-893D-3BD402E0ECB5)
ls -l /u01/app/oraInventory/locks
rm -rf /u01/app/oraInventory/locks # (on both nodes if present on both)

[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-server -v 19.6.0.0.0
# creates the new grid home (/u01/app/19.0.0.0/grid); this went really smoothly, no issues.
[root@by3aaa ~]# odacli describe-job -i d196fb27-093f-4c68-b5e0-a3f26b04eac2

Job details
----------------------------------------------------------------
            ID:  d196fb27-093f-4c68-b5e0-a3f26b04eac2
   Description:  Server Patching
        Status:  Success
       Created:  May 2, 2020 4:07:28 AM CEST
       Message:

Task Name                             Start Time                    End Time                      Status
------------------------------------  ----------------------------  ----------------------------  -------
Patch location validation             May 2, 2020 4:07:46 AM CEST   May 2, 2020 4:07:46 AM CEST   Success
Patch location validation             May 2, 2020 4:07:46 AM CEST   May 2, 2020 4:07:46 AM CEST   Success
dcs-controller upgrade                May 2, 2020 4:07:47 AM CEST   May 2, 2020 4:07:47 AM CEST   Success
dcs-controller upgrade                May 2, 2020 4:07:47 AM CEST   May 2, 2020 4:07:47 AM CEST   Success
Creating repositories using yum       May 2, 2020 4:07:48 AM CEST   May 2, 2020 4:07:48 AM CEST   Success
Applying HMP Patches                  May 2, 2020 4:07:48 AM CEST   May 2, 2020 4:07:49 AM CEST   Success
Patch location validation             May 2, 2020 4:07:49 AM CEST   May 2, 2020 4:07:49 AM CEST   Success
Patch location validation             May 2, 2020 4:07:49 AM CEST   May 2, 2020 4:07:49 AM CEST   Success
oda-hw-mgmt upgrade                   May 2, 2020 4:07:49 AM CEST   May 2, 2020 4:07:49 AM CEST   Success
oda-hw-mgmt upgrade                   May 2, 2020 4:07:49 AM CEST   May 2, 2020 4:07:49 AM CEST   Success
OSS Patching                          May 2, 2020 4:07:49 AM CEST   May 2, 2020 4:07:50 AM CEST   Success
Applying Firmware Disk Patches        May 2, 2020 4:08:05 AM CEST   May 2, 2020 4:08:16 AM CEST   Success
Applying Firmware Expander Patches    May 2, 2020 4:08:24 AM CEST   May 2, 2020 4:08:33 AM CEST   Success
Applying Firmware Controller Patches  May 2, 2020 4:08:39 AM CEST   May 2, 2020 4:08:46 AM CEST   Success
Checking Ilom patch Version           May 2, 2020 4:08:47 AM CEST   May 2, 2020 4:08:49 AM CEST   Success
Checking Ilom patch Version           May 2, 2020 4:08:49 AM CEST   May 2, 2020 4:08:51 AM CEST   Success
Patch location validation             May 2, 2020 4:08:51 AM CEST   May 2, 2020 4:08:52 AM CEST   Success
Patch location validation             May 2, 2020 4:08:51 AM CEST   May 2, 2020 4:08:52 AM CEST   Success
Save password in Wallet               May 2, 2020 4:08:53 AM CEST   May 2, 2020 4:08:53 AM CEST   Success
Apply Ilom patch                      May 2, 2020 4:08:53 AM CEST   May 2, 2020 4:16:46 AM CEST   Success
Apply Ilom patch                      May 2, 2020 4:16:46 AM CEST   May 2, 2020 4:24:43 AM CEST   Success
Copying Flash Bios to Temp location   May 2, 2020 4:24:43 AM CEST   May 2, 2020 4:24:43 AM CEST   Success
Copying Flash Bios to Temp location   May 2, 2020 4:24:43 AM CEST   May 2, 2020 4:24:43 AM CEST   Success
Patch location validation             May 2, 2020 4:24:43 AM CEST   May 2, 2020 4:24:43 AM CEST   Success
ASR Manager RPM update                May 2, 2020 4:24:43 AM CEST   May 2, 2020 4:24:43 AM CEST   Success
Starting the clusterware              May 2, 2020 4:27:42 AM CEST   May 2, 2020 4:29:30 AM CEST   Success
Creating GI home directories          May 2, 2020 4:29:32 AM CEST   May 2, 2020 4:29:32 AM CEST   Success
Cloning Gi home                       May 2, 2020 4:29:32 AM CEST   May 2, 2020 4:32:00 AM CEST   Success
Cloning Gi home                       May 2, 2020 4:32:00 AM CEST   May 2, 2020 4:34:30 AM CEST   Success
Configuring GI                        May 2, 2020 4:34:30 AM CEST   May 2, 2020 4:38:20 AM CEST   Success
Running GI upgrade root scripts       May 2, 2020 4:54:11 AM CEST   May 2, 2020 5:04:10 AM CEST   Success
Resetting DG compatibility            May 2, 2020 5:04:40 AM CEST   May 2, 2020 5:05:10 AM CEST   Success
Running GI config assistants          May 2, 2020 5:05:10 AM CEST   May 2, 2020 5:07:02 AM CEST   Success
restart oakd                          May 2, 2020 5:07:13 AM CEST   May 2, 2020 5:07:24 AM CEST   Success
Updating GiHome version               May 2, 2020 5:07:24 AM CEST   May 2, 2020 5:07:31 AM CEST   Success
Updating GiHome version               May 2, 2020 5:07:24 AM CEST   May 2, 2020 5:07:33 AM CEST   Success
Update System version                 May 2, 2020 5:07:40 AM CEST   May 2, 2020 5:07:40 AM CEST   Success
Update System version                 May 2, 2020 5:07:40 AM CEST   May 2, 2020 5:07:40 AM CEST   Success
preRebootNode Actions                 May 2, 2020 5:07:40 AM CEST   May 2, 2020 5:08:22 AM CEST   Success
preRebootNode Actions                 May 2, 2020 5:08:22 AM CEST   May 2, 2020 5:09:07 AM CEST   Success
Reboot Ilom                           May 2, 2020 5:09:07 AM CEST   May 2, 2020 5:09:07 AM CEST   Success
Reboot Ilom                           May 2, 2020 5:09:07 AM CEST   May 2, 2020 5:09:07 AM CEST   Success

[root@by3aaa ~]# /opt/oracle/dcs/bin/odacli update-storage -v 19.6.0.0.0 --rolling
[root@by3aaa ~]# odacli describe-job -i 02f14d2d-e81a-4511-ba3f-ddc32618482a

Job details
----------------------------------------------------------------
            ID:  02f14d2d-e81a-4511-ba3f-ddc32618482a
   Description:  Storage Firmware Patching
        Status:  Success
       Created:  May 2, 2020 12:58:11 PM CEST
       Message:

Task Name                             Start Time                     End Time                       Status
------------------------------------  -----------------------------  -----------------------------  -------
Applying Firmware Disk Patches        May 2, 2020 12:58:11 PM CEST   May 2, 2020 12:58:33 PM CEST   Success
Applying Firmware Controller Patches  May 2, 2020 12:58:33 PM CEST   May 2, 2020 12:58:46 PM CEST   Success

# for each dbhomeid in `odacli list-dbhomes` that needs to be patched (see the sketch after this block):
/opt/oracle/dcs/bin/odacli create-prepatchreport --dbhome --dbhomeid 221f7223-69e7-485c-a4dc-09076365ab41 -v 19.6.0.0.0

# stop standby databases first; they end up causing trouble otherwise
srvctl stop database -d BIOIMP_DG

/opt/oracle/dcs/bin/odacli update-dbhome --dbhomeid 221f7223-69e7-485c-a4dc-09076365ab41 -v 19.6.0.0.0
odacli describe-job -i ef4358db-f6a6-4478-b7d2-39616303981e

Job details
----------------------------------------------------------------
            ID:  ef4358db-f6a6-4478-b7d2-39616303981e
   Description:  DB Home Patching: Home Id is 221f7223-69e7-485c-a4dc-09076365ab41
        Status:  Success
       Created:  May 2, 2020 2:52:56 PM CEST
       Message:

Task Name                          Start Time                    End Time                      Status
---------------------------------  ----------------------------  ----------------------------  -------
Validating dbHome available space  May 2, 2020 2:53:09 PM CEST   May 2, 2020 2:53:09 PM CEST   Success
Validating dbHome available space  May 2, 2020 2:53:09 PM CEST   May 2, 2020 2:53:09 PM CEST   Success
clusterware patch verification     May 2, 2020 2:53:52 PM CEST   May 2, 2020 2:53:58 PM CEST   Success
clusterware patch verification     May 2, 2020 2:53:52 PM CEST   May 2, 2020 2:54:13 PM CEST   Success
Patch location validation          May 2, 2020 2:54:13 PM CEST   May 2, 2020 2:54:19 PM CEST   Success
Patch location validation          May 2, 2020 2:54:13 PM CEST   May 2, 2020 2:54:13 PM CEST   Success
Opatch update                      May 2, 2020 2:54:51 PM CEST   May 2, 2020 2:54:54 PM CEST   Success
Opatch update                      May 2, 2020 2:54:51 PM CEST   May 2, 2020 2:54:51 PM CEST   Success
Patch conflict check               May 2, 2020 2:54:54 PM CEST   May 2, 2020 2:54:54 PM CEST   Success
Patch conflict check               May 2, 2020 2:54:54 PM CEST   May 2, 2020 2:57:22 PM CEST   Success
db upgrade                         May 2, 2020 2:57:22 PM CEST   May 2, 2020 2:57:22 PM CEST   Success
db upgrade                         May 2, 2020 2:57:22 PM CEST   May 2, 2020 3:30:02 PM CEST   Success
SqlPatch upgrade                   May 2, 2020 3:30:02 PM CEST   May 2, 2020 3:31:13 PM CEST   Success
SqlPatch upgrade                   May 2, 2020 3:31:13 PM CEST   May 2, 2020 3:32:12 PM CEST   Success
SqlPatch upgrade                   May 2, 2020 3:32:12 PM CEST   May 2, 2020 3:33:08 PM CEST   Success
SqlPatch upgrade                   May 2, 2020 3:33:08 PM CEST   May 2, 2020 3:33:11 PM CEST   Success
SqlPatch upgrade                   May 2, 2020 3:33:11 PM CEST   May 2, 2020 3:33:42 PM CEST   Success
Update System version              May 2, 2020 3:33:42 PM CEST   May 2, 2020 3:33:42 PM CEST   Success
Update System version              May 2, 2020 3:33:42 PM CEST   May 2, 2020 3:33:42 PM CEST   Success
updating the Database version      May 2, 2020 3:34:14 PM CEST   May 2, 2020 3:34:25 PM CEST   Success
updating the Database version      May 2, 2020 3:34:25 PM CEST   May 2, 2020 3:34:35 PM CEST   Success
updating the Database version      May 2, 2020 3:34:36 PM CEST   May 2, 2020 3:34:46 PM CEST   Success
updating the Database version      May 2, 2020 3:34:46 PM CEST   May 2, 2020 3:34:57 PM CEST   Success
updating the Database version      May 2, 2020 3:34:57 PM CEST   May 2, 2020 3:35:08 PM CEST   Success

# validate:
# cd /u01/app/oracle/product/18.0.0.0/dbhome_1/cfgtoollogs/
# cd /u01/app/oracle/cfgtoollogs/sqlpatch
# finish by deleting the software zipfiles and reinstalling the rpms you removed earlier.
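With several homes to patch, the per-home prepatch reports can be scripted. A hedged sketch, assuming the home ID is the first column of the odacli list-dbhomes text output (verify on your version before relying on the awk parsing):

# create a prepatch report for every 18c db home, then check each report job
for dbhomeid in $(odacli list-dbhomes | awk '/18\./ {print $1}'); do
  /opt/oracle/dcs/bin/odacli create-prepatchreport --dbhome --dbhomeid "$dbhomeid" -v 19.6.0.0.0
done
odacli list-jobs | tail -5   # pick up the report job IDs for odacli describe-prepatchreport -i <id>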
Run show parameter listener and show spparameter listener to validate everything, both after the grid patch and after the db patch (a quick sqlplus sketch follows below).

🐛 12-MAY-2020: ASR doesn’t start up automatically due to a wrong java home being configured. Details can be found at the bottom of this page.
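A minimal sketch of that check, run as the oracle user with the environment set for each instance (the heredoc form is just a convenience; any sqlplus session works):

sqlplus -s / as sysdba <<'EOF'
show parameter listener
show spparameter listener
exit
EOF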
3.3 Install 19.6 clone and upgrade database
💡 Tip: Run the Pre-Upgrade Information Tool (preupgrade.jar) before executing the upgrade-database command, as it will save you another failed run 😉. It can be run from the new 19c home, which ships with preupgrade.jar (/u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/preupgrade.jar), but the best approach is to download the latest version from How to Download and Run Oracle’s Database Pre-Upgrade Utility (Doc ID 884522.1).
A list of all the checks run by preupgrade.jar can be found in Database Preupgrade tool check list (Doc ID 2380601.1).
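A minimal sketch of running the tool against a database that is still on the 18c home, assuming the JDK bundled with the source home and terminal text output (the SID is a hypothetical example; Doc ID 884522.1 has the full syntax):

# environment points at the *source* (18c) home and the database to check
export ORACLE_SID=DEV1   # hypothetical SID, use your own
export ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT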
# wget the 19.6 clone, note 30403662
cd /data/dump
unzip p30403662_196000_Linux-x86-64.zip
/opt/oracle/dcs/bin/odacli update-repository -f /data/dump/odacli-dcs-19.6.0.0.0-200326-DB-19.6.0.0.zip
/opt/oracle/dcs/bin/odacli describe-job -i "faac096b-09cd-4297-b7f0-49d5dcb1b311"
/opt/oracle/dcs/bin/odacli create-dbhome -v 19.6.0.0.200114
/opt/oracle/dcs/bin/odacli list-databases
/opt/oracle/dcs/bin/odacli list-dbhomes
/opt/oracle/dcs/bin/odacli upgrade-database -i 3be694c3-4762-4234-83f2-97aaefc89901 -from 221f7223-69e7-485c-a4dc-09076365ab41 -to de620791-d236-43a3-92fe-473c254abd0c

[root@by3aaa ~]# odacli describe-job -i 54ed7632-5c48-4a52-9d3d-701df7eb8100

Job details
----------------------------------------------------------------
            ID:  54ed7632-5c48-4a52-9d3d-701df7eb8100
   Description:  Database service upgrade with db ids: [3be694c3-4762-4234-83f2-97aaefc89901]
        Status:  Success
       Created:  May 2, 2020 3:56:46 PM CEST
       Message:

Task Name                      Start Time                    End Time                      Status
-----------------------------  ----------------------------  ----------------------------  -------
Setting up ssh equivalance     May 2, 2020 3:56:47 PM CEST   May 2, 2020 3:56:47 PM CEST   Success
Database Upgrade               May 2, 2020 3:56:47 PM CEST   May 2, 2020 4:28:36 PM CEST   Success
Copy Pwfile to Shared Storage  May 2, 2020 4:29:41 PM CEST   May 2, 2020 4:29:46 PM CEST   Success
Database Upgrade Validation    May 2, 2020 4:29:47 PM CEST   May 2, 2020 4:29:47 PM CEST   Success

# look in dcs-agent.log for any issues; mine pointed to some preupgrade_fixups that couldn't be executed.
# /u01/app/oracle/cfgtoollogs/dbua/upgrade2020<>/<db>/preupgrade_fixups.sql
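If the agent log points at fixups it could not run, a hedged sketch of executing them by hand (the upgrade2020<>/<db> directory name is whatever your own dbua run produced, and the SID is a hypothetical example):

export ORACLE_SID=DEV1   # hypothetical SID
sqlplus / as sysdba @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020<>/<db>/preupgrade_fixups.sql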
3.4 Remove 18c grid home
# https://docs.oracle.com/en/database/oracle/oracle-database/18/cwwin/removing-oracle-clusterware-and-oracle-asm-software.html#GUID-2E0E28BB-A0C7-4743-9CC1-EA93BFE7B183
# both nodes
ps -ef | grep /u01/app/18.0.0.0/grid/
fuser /u01/app/18.0.0.0/grid/
lsof /u01/app/18.0.0.0/grid/
chmod -R 755 /u01/app/18.0.0.0/grid/
chown -R grid:oinstall /u01/app/18.0.0.0/grid/
ls -l /u01/app/oraInventory/locks

# for some reason both nodes weren't listed while doing deinstall -checkonly, so I had to do it on both nodes
# (normally you add -local for only the local node)
# node1
su - grid
cd /u01/app/18.0.0.0/grid/deinstall/
./deinstall -checkonly
# review output
./deinstall
Output:
[root@by3aaa ~]# chmod -R 755 /u01/app/18.0.0.0/grid/
[root@by3aaa ~]# chown -R grid:oinstall /u01/app/18.0.0.0/grid/
[root@by3aaa ~]# ls -l /u01/app/oraInventory/locks
total 0
-rw-rw---- 1 oracle oinstall 0 May  2 20:04 inventory.lock
[root@by3aaa ~]# rm -rf /u01/app/oraInventory/locks
[root@by3aaa ~]# su - grid
Last login: Sat May  2 20:49:29 CEST 2020 on pts/2
[grid@by3aaa ~]$ cd /u01/app/18.0.0.0/grid/deinstall/
[grid@by3aaa deinstall]$ ./deinstall -checkonly
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Remote nodes on which old homes will be deleted:
Checking for existence of the Oracle home location /u01/app/18.0.0.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/19.0.0.0/grid
The following nodes are part of this cluster: by3aaa,by3aab
Checking for sufficient temp space availability on node(s) : 'by3aaa'

## [END] Install check configuration ##

Traces log file: /u01/app/oraInventory/logs//crsdc_2020-05-02_08-51-22-PM.log

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2020-05-02_08-51-22PM.log
Network Configuration check config END

Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2020-05-02_08-51-22PM.log
ASM was not detected in the Oracle Home

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2020-05-02_08-51-22PM.log
Oracle Grid Management database was not found in this Grid Infrastructure home
Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/19.0.0.0/grid
The following nodes are part of this cluster: by3aaa,by3aab
The cluster node(s) on which the Oracle home deinstallation will be performed are:by3aaa
Oracle Home selected for deinstall is: /u01/app/18.0.0.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The home being deconfigured is NOT a configured Grid Infrastructure home (/u01/app/19.0.0.0/grid)
ASM was not detected in the Oracle Home
Oracle Grid Management database was not found in this Grid Infrastructure home
Location of response file generated: '/tmp/deinstall2020-05-02_08-51-01PM/response/deinstall_OraGrid18000.rsp'

A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-05-02_08-51-15-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-05-02_08-51-15-PM.err'

############# ORACLE DECONFIG TOOL END #############

[grid@by3aaa deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Remote nodes on which old homes will be deleted:
Checking for existence of the Oracle home location /u01/app/18.0.0.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/19.0.0.0/grid
The following nodes are part of this cluster: by3aaa,by3aab
Checking for sufficient temp space availability on node(s) : 'by3aaa'

## [END] Install check configuration ##

Traces log file: /u01/app/oraInventory/logs//crsdc_2020-05-02_08-53-34-PM.log

Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2020-05-02_08-53-34PM.log
Network Configuration check config END

Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2020-05-02_08-53-34PM.log
ASM was not detected in the Oracle Home

Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2020-05-02_08-53-34PM.log
Oracle Grid Management database was not found in this Grid Infrastructure home
Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/19.0.0.0/grid
The following nodes are part of this cluster: by3aaa,by3aab
The cluster node(s) on which the Oracle home deinstallation will be performed are:by3aaa
Oracle Home selected for deinstall is: /u01/app/18.0.0.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The home being deconfigured is NOT a configured Grid Infrastructure home (/u01/app/19.0.0.0/grid)
ASM was not detected in the Oracle Home
Oracle Grid Management database was not found in this Grid Infrastructure home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-05-02_08-53-27-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-05-02_08-53-27-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2020-05-02_08-53-34PM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2020-05-02_08-53-34PM.log
ASM Clean Configuration END

Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2020-05-02_08-53-34PM.log
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END

######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
There is no Oracle Grid Management database to de-configure in this Grid Infrastructure home
The home being deconfigured is NOT a configured Grid Infrastructure home (/u01/app/19.0.0.0/grid)
Oracle Clusterware was successfully unlocked on node "by3aaa".
#######################################################################

############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2020-05-02_08-53-22PM/response/deinstall_2020-05-02_08-53-27-PM.rsp
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL TOOL START ############

####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-05-02_08-53-27-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2020-05-02_08-53-27-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to by3aaa
Setting CLUSTER_NODES to by3aaa
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2020-05-02_08-53-22PM/oraInst.loc
Setting oracle.installer.local to false
## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/18.0.0.0/grid' from the central inventory on the local node : Done

Failed to delete the directory '/u01/app/18.0.0.0/grid'. Either user has no permission to delete or it is in use.
Delete directory '/u01/app/18.0.0.0/grid' on the local node : Failed <<<<

The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is in use by Oracle Home '/u01/app/19.0.0.0/grid'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END

## [START] Oracle install clean ##
## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/18.0.0.0/grid' from the central inventory on the local node.
Failed to delete directory '/u01/app/18.0.0.0/grid' on the local node due to error : Either user has no permission to delete or file is in use.
Review the permissions and manually delete '/u01/app/18.0.0.0/grid' on local node.
Oracle Universal Installer cleanup was successful.
Review the permissions and contents of '/u01/app/grid' on nodes(s) 'by3aaa'. If there are no Oracle home(s) associated with '/u01/app/grid', manually delete '/u01/app/grid' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############

[grid@by3aaa deinstall]$ ls -l /u01/app/18.0.0.0/grid
total 0
[grid@by3aaa deinstall]$ ls -l /u01/app/18.0.0.0/
total 4
drwxr-xr-x 2 grid oinstall 4096 May  2 20:55 grid
[grid@by3aaa deinstall]$
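As the summary says, the home was detached but the directory itself could not be deleted, so the leftover (now empty) tree has to go by hand. A small sketch, to be run only after confirming nothing references the old home anymore:

# on both nodes, as root; double-check the tree really is empty first
ls -lR /u01/app/18.0.0.0/
rm -rf /u01/app/18.0.0.0/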
3.5 Bugs and patches
Updated on 16-June-2020
- ODA TYPE CAN’T BE READ FROM SGA BECAUSE ERRORS INSIDE OAK CALLS
After upgrading my databases to 19.6, I saw a lot of trace files being generated with the above message (edit: fixed in ODA 19.7).
The full log is as follows:
Trace file /u01/app/oracle/diag/rdbms/DEV/DEV2/trace/DEV2_j002_20879.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0
Build label: RDBMS_19.3.0.0.0DBRU_LINUX.X64_190417
ORACLE_HOME: /u01/app/oracle/product/19.0.0.0/dbhome_1
System name: Linux
Node name: by3aab
Release: 4.14.35-1902.11.3.1.el7uek.x86_64
Version: #2 SMP Sat Mar 14 20:57:52 PDT 2020
Machine: x86_64
Instance name: DEV2
Redo thread mounted by this instance: 2
Oracle process number: 0
Unix process pid: 20879, image:

*** 2020-05-07T22:00:02.486098+02:00
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
kgfmGetOdaType: ODA type can't be read from SGA because errors inside OAK calls
SCHED 05-07 22:14:24.411 2 00 20879 J002 0(kkjspub):Slave was not marked as idle
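Until the 19.7 fix is in place, these traces pile up. A hedged housekeeping sketch using adrci (the diag home name below is an assumption; take yours from the show homes output):

# list the diag homes, then purge trace files older than one day (1440 minutes)
adrci exec="show homes"
adrci exec="set home diag/rdbms/dev/DEV2; purge -age 1440 -type trace"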
- Make sure the grid user can do passwordless (SSH) authentication between the nodes.
- ASR doesn’t start due to a wrong java home (cf. Auto Service Request (ASR) Manager Health Check (Doc ID 2205946.1)).
As a test you can execute systemctl status asra:
[root@by3aaa ~]# systemctl status asra
● asra.service - LSB: Auto Service Request Manager
   Loaded: loaded (/etc/rc.d/init.d/asra; bad; vendor preset: disabled)
   Active: active (exited) since Sat 2020-05-02 05:16:15 CEST; 1 weeks 3 days ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0

May 02 05:16:15 by3aaa systemd[1]: Starting LSB: Auto Service Request Manager...
May 02 05:16:15 by3aaa asra[5818]: ****************************************************************
May 02 05:16:15 by3aaa asra[5818]: JAVA is not found. Please set 'java.exec' property in file
May 02 05:16:15 by3aaa asra[5818]: /var/opt/asrmanager/configuration/asr.conf
May 02 05:16:15 by3aaa asra[5818]: to point to JAVA 1.8 or later and try again
May 02 05:16:15 by3aaa asra[5818]: ****************************************************************
May 02 05:16:15 by3aaa systemd[1]: Started LSB: Auto Service Request Manager.

[root@by3aaa ~]# vi /var/opt/asrmanager/configuration/asr.conf
[root@by3aaa ~]# cat /var/opt/asrmanager/configuration/asr.conf
java.exec=/usr/java/jdk1.8.0_231-amd64/jre/bin/java
#java.exec=/usr/java/jdk1.8.0_181-amd64/jre/bin/java

[root@by3aaa log]# systemctl status asra
● asra.service - LSB: Auto Service Request Manager
   Loaded: loaded (/etc/rc.d/init.d/asra; bad; vendor preset: disabled)
   Active: active (exited) since Sat 2020-05-02 05:16:15 CEST; 1 weeks 3 days ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 0

May 02 05:16:15 by3aaa systemd[1]: Starting LSB: Auto Service Request Manager...
May 02 05:16:15 by3aaa asra[5818]: ****************************************************************
May 02 05:16:15 by3aaa asra[5818]: JAVA is not found. Please set 'java.exec' property in file
May 02 05:16:15 by3aaa asra[5818]: /var/opt/asrmanager/configuration/asr.conf
May 02 05:16:15 by3aaa asra[5818]: to point to JAVA 1.8 or later and try again
May 02 05:16:15 by3aaa asra[5818]: ****************************************************************
May 02 05:16:15 by3aaa systemd[1]: Started LSB: Auto Service Request Manager.

[root@by3aaa ~]# systemctl stop asra
[root@by3aaa ~]# systemctl start asra
[root@by3aaa ~]# systemctl status asra
● asra.service - LSB: Auto Service Request Manager
   Loaded: loaded (/etc/rc.d/init.d/asra; bad; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-12 13:42:13 CEST; 1s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 34502 ExecStop=/etc/rc.d/init.d/asra stop (code=exited, status=0/SUCCESS)
  Process: 34664 ExecStart=/etc/rc.d/init.d/asra start (code=exited, status=0/SUCCESS)
    Tasks: 2
   CGroup: /system.slice/asra.service
           ├─34716 /bin/sh /opt/asrmanager/bin/asrautoupdate ASRAuto
           └─34981 sleep 10

May 12 13:42:13 by3aaa systemd[1]: Starting LSB: Auto Service Request Manager...
May 12 13:42:13 by3aaa systemd[1]: Started LSB: Auto Service Request Manager.

And systemctl status asrm:
[root@by3aaa ~]# systemctl status asrm
● asrm.service - LSB: Auto Service Request Manager
   Loaded: loaded (/etc/rc.d/init.d/asrm; bad; vendor preset: disabled)
   Active: inactive (dead) since Tue 2020-05-12 13:24:11 CEST; 1h 15min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 82583 ExecStop=/etc/rc.d/init.d/asrm stop (code=exited, status=0/SUCCESS)

May 02 05:16:15 by3aaa su[5874]: (to root) root on none
May 02 05:16:15 by3aaa asrm[5819]: ****************************************************************
May 02 05:16:15 by3aaa asrm[5819]: JAVA is not found. Please set 'java.exec' property in file
May 02 05:16:15 by3aaa asrm[5819]: /var/opt/asrmanager/configuration/asr.conf
May 02 05:16:15 by3aaa asrm[5819]: to point to JAVA 1.8 or later and try again
May 02 05:16:15 by3aaa asrm[5819]: ****************************************************************
May 02 05:16:15 by3aaa systemd[1]: Started LSB: Auto Service Request Manager.
May 12 13:24:11 by3aaa systemd[1]: Stopping LSB: Auto Service Request Manager...
May 12 13:24:11 by3aaa asrm[82583]: ASR Manager is stopped.
May 12 13:24:11 by3aaa systemd[1]: Stopped LSB: Auto Service Request Manager.

[root@by3aaa ~]# systemctl stop asrm
[root@by3aaa ~]# systemctl start asrm
[root@by3aaa ~]# systemctl status asrm
● asrm.service - LSB: Auto Service Request Manager
   Loaded: loaded (/etc/rc.d/init.d/asrm; bad; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-12 14:39:28 CEST; 4s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 82583 ExecStop=/etc/rc.d/init.d/asrm stop (code=exited, status=0/SUCCESS)
  Process: 39601 ExecStart=/etc/rc.d/init.d/asrm start (code=exited, status=0/SUCCESS)

May 12 14:39:27 by3aaa systemd[1]: Starting LSB: Auto Service Request Manager...
May 12 14:39:27 by3aaa su[39602]: (to root) root on none
May 12 14:39:27 by3aaa asrm[39601]: ASR Manager (pid 82968) is already RUNNING.
May 12 14:39:28 by3aaa systemd[1]: Started LSB: Auto Service Request Manager.
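The same fix in scripted form, a sketch that assumes the new JDK lives under /usr/java as it did on my ODA:

# point java.exec at the JDK that actually exists after the patch, then bounce both services
NEWJAVA=$(ls -d /usr/java/jdk1.8.0_*-amd64/jre/bin/java | tail -1)
sed -i "s|^java.exec=.*|java.exec=${NEWJAVA}|" /var/opt/asrmanager/configuration/asr.conf
systemctl restart asra asrm
systemctl status asra asrm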
Interested in hearing more about the Oracle Database Appliance?
Contact us here or download the whitepaper below to get more info.
Download the ODA whitepaper