
Oracle 19c Grid Infrastructure Upgrade

This note describes the process used to upgrade a two-node Oracle 12c Release 2 Grid Infrastructure environment to Oracle 19c on Oracle Linux 7 (OEL7).

The upgrade from 12c to 19c is performed in a rolling fashion using batches, which limits application downtime: one node is upgraded while the other continues to serve the cluster.

Create the directory structure for Oracle 19c Grid Infrastructure and unzip the software
 

[grid@rac01 bin]$ cd /u02/app
[grid@rac01 app]$ mkdir 19.3.0
[grid@rac01 app]$ cd 19.3.0/
[grid@rac01 19.3.0]$ mkdir grid
[grid@rac01 19.3.0]$ cd grid
[grid@rac01 grid]$ unzip -q /media/sf_software/LINUX.X64_193000_grid_home.zip
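As an optional integrity check (not part of the original procedure), the downloaded zip can be verified against the checksum published on the Oracle download page before it is extracted:

[grid@rac01 grid]$ sha256sum /media/sf_software/LINUX.X64_193000_grid_home.zip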

 
Install the packages kmod and kmod-libs
 

[root@rac01 etc]# yum install kmod
[root@rac01 etc]# yum install kmod-libs
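To confirm that both packages are actually installed before continuing, rpm can be queried directly (a quick sanity check added here for illustration; the reported versions will depend on the system):

[root@rac01 etc]# rpm -q kmod kmod-libs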

 
Check the readiness of the current Oracle Clusterware installation for upgrade using the Cluster Verification Utility (CVU)
 
From the 19c Grid Infrastructure home execute:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.2.0/grid -dest_crshome /u02/app/19.3.0/grid -dest_version 19.0.0.0.0 -fixup -verbose
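If CVU reports fixable failures, the -fixup flag generates a fixup script that must be run as root on the affected nodes. The exact location is printed in the CVU output; the path below is only illustrative of its typical form:

[root@rac01 ~]# /tmp/CVU_19.0.0.0.0_grid/runfixup.sh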

 
Update the OPatch version and apply patches 28553832 and 27006180
 

[root@rac01 grid]# /u01/app/12.2.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.
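The session above only shows the version check. Updating OPatch itself means replacing the OPatch directory in the 12.2 grid home with the contents of the latest OPatch zip (patch 6880880); the zip file name below is an assumption based on the software location used earlier:

[grid@rac01 ~]$ cd /u01/app/12.2.0/grid
[grid@rac01 grid]$ mv OPatch OPatch.bak
[grid@rac01 grid]$ unzip -q /media/sf_software/p6880880_122010_Linux-x86-64.zip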


[root@rac01 grid]# cd /media/sf_software/p27006180_122010_Linux-x86-64
[root@rac01 p27006180_122010_Linux-x86-64]# cd 27006180/
[root@rac01 27006180]# /u02/app/12.2.0/grid/OPatch/opatchauto apply 

OPatchauto session is initiated at Sun May 26 22:05:38 2019

System initialization log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-26_10-05-42PM.log.

Session log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-26_10-06-00PM.log
The id for this session is GLUN

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.2.0/grid
Patch applicability verified successfully on home /u02/app/12.2.0/grid


Bringing down CRS service on home /u02/app/12.2.0/grid
Prepatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_10-06-40PM.log
CRS service brought down successfully on home /u02/app/12.2.0/grid


Start applying binary patch on home /u02/app/12.2.0/grid
Binary patch applied successfully on home /u02/app/12.2.0/grid


Starting CRS service on home /u02/app/12.2.0/grid
Postpatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_10-11-48PM.log
CRS service started successfully on home /u02/app/12.2.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac01
CRS Home:/u02/app/12.2.0/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /media/sf_software/p27006180_122010_Linux-x86-64/27006180/27006180
Log: /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-26_22-09-14PM_1.log



OPatchauto session completed at Sun May 26 22:17:35 2019
Time taken to complete the session 11 minutes, 58 seconds
[root@rac01 27006180]# 



[root@rac01 28553832]# cd 28553832/
[root@rac01 28553832]# /u02/app/12.2.0/grid/OPatch/opatchauto apply

OPatchauto session is initiated at Sun May 26 23:11:04 2019

System initialization log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-26_11-11-07PM.log.

Session log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-26_11-11-24PM.log
The id for this session is QQTG

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.2.0/grid
Patch applicability verified successfully on home /u02/app/12.2.0/grid


Bringing down CRS service on home /u02/app/12.2.0/grid
Prepatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_11-11-46PM.log
CRS service brought down successfully on home /u02/app/12.2.0/grid


Start applying binary patch on home /u02/app/12.2.0/grid
Binary patch applied successfully on home /u02/app/12.2.0/grid


Starting CRS service on home /u02/app/12.2.0/grid
Postpatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_11-13-24PM.log
CRS service started successfully on home /u02/app/12.2.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac01
CRS Home:/u02/app/12.2.0/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/28553832/28553832
Log: /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-26_23-12-42PM_1.log

OPatchauto session completed at Sun May 26 23:18:51 2019
Time taken to complete the session 7 minutes, 48 seconds
[root@rac01 28553832]# 
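Before starting the upgrade, it is worth confirming that both patches are now registered in the 12.2 home (an extra verification step, not shown in the original session):

[root@rac01 ~]# /u02/app/12.2.0/grid/OPatch/opatch lspatches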


Start the 19c Grid Infrastructure rolling upgrade
 
[grid@rac01 grid]$ cd /u02/app/19.3.0/grid

[grid@rac01 grid]$ ./gridSetup.sh
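This launches the graphical installer, which walks through the upgrade screens. For reference, the same upgrade can also be driven non-interactively with a response file; the sketch below assumes a response file prepared from the gridsetup.rsp template shipped under install/response in the new home:

[grid@rac01 grid]$ ./gridSetup.sh -silent -responseFile /u02/app/19.3.0/grid/install/response/gridsetup.rsp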
We place the nodes in different batches here because, between Batch 1 and Batch 2, we can relocate services from the node that is still running the previous release to the node that has already been upgraded, so that services are not affected by the upgrade process.
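For example, a service can be relocated from the instance on rac01 to the instance on rac02 before rac01 goes down in Batch 1 (the database, service, and instance names below are hypothetical):

[oracle@rac02 ~]$ srvctl relocate service -db orcl -service app_svc -oldinst orcl1 -newinst orcl2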
We can see that while the rootupgrade.sh script is running on node rac01, cluster services are still up and running on node rac02.

The upgrade of cluster services on rac02 will be performed as part of Batch 2.
 

[root@rac02 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@rac01 ~]# cd /u01/app/12.2.0/grid/bin
[root@rac01 bin]# ./crsctl check crs 
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[root@rac01 bin]# 


We can see that the cluster is now in ROLLING UPGRADE mode.
 

[root@rac02 bin]# ./crsctl query crs softwareversion -all
Oracle Clusterware version on node [rac01] is [19.0.0.0.0]
Oracle Clusterware version on node [rac02] is [12.2.0.1.0]

[root@rac02 bin]# ./crsctl query crs activeversion -f 
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [ROLLING UPGRADE]. The cluster active patch level is [695302969].

The upgrade is now complete!
 

[root@rac02 bin]# ./crsctl query crs softwareversion -all
Oracle Clusterware version on node [rac01] is [19.0.0.0.0]
Oracle Clusterware version on node [rac02] is [19.0.0.0.0]

[root@rac02 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
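As a final sanity check (not shown in the original session), the state of the cluster stack on all nodes can be verified with:

[root@rac02 bin]# ./crsctl check cluster -all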
