Friday, April 29, 2016

ORA-01565: error in identifying file '+DATA/cdbrac/spfilecdbrac.ora'

[oracle@rac1 ~]$ srvctl start database -d cdbrac
PRCR-1079 : Failed to start resource ora.cdbrac.db
CRS-5017: The resource action "ora.cdbrac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/cdbrac/spfilecdbrac.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/cdbrac/spfilecdbrac.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA/cdbrac/spfilecdbrac.ora
ORA-15173: entry 'spfilecdbrac.ora' does not exist in directory 'cdbrac'
ORA-06512: at line 4
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/rac1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.cdbrac.db' on 'rac1' failed
CRS-5017: The resource action "ora.cdbrac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/cdbrac/spfilecdbrac.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/cdbrac/spfilecdbrac.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA/cdbrac/spfilecdbrac.ora
ORA-15173: entry 'spfilecdbrac.ora' does not exist in directory 'cdbrac'
ORA-06512: at line 4
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/rac2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.cdbrac.db' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.cdbrac.db' on that would satisfy its placement policy
[oracle@rac1 ~]$


Do not worry; log on to ASM and locate where the spfile actually is:

ASMCMD> cd +DATA/CDBRAC/PARAMETERFILE/
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
PARAMETERFILE  UNPROT  COARSE   APR 29 13:00:00  Y    spfile.271.909835045
ASMCMD>

Now we want this file under +DATA/cdbrac, as per the error, so create an alias in ASM:

ASMCMD> pwd
+DATA/CDBRAC
ASMCMD>
ASMCMD> mkalias +DATA/cdbrac/PARAMETERFILE/spfile.271.909835045 spfilecdbrac.ora
ASMCMD> ls -l
Type           Redund  Striped  Time             Sys  Name
                                                 Y    310A14D691145E69E0536538A8C0D511/
                                                 Y    310A3223A63F6D46E0536538A8C02A10/
                                                 Y    CONTROLFILE/
                                                 Y    DATAFILE/
                                                 Y    FD9AC20F64D244D7E043B6A9E80A2F2F/
                                                 Y    ONLINELOG/
                                                 Y    PARAMETERFILE/
                                                 Y    PASSWORD/
                                                 Y    TEMPFILE/
PARAMETERFILE  UNPROT  COARSE   APR 29 13:00:00  N    spfilecdbrac.ora => +DATA/CDBRAC/PARAMETERFILE/spfile.271.909835045
ASMCMD>
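If you ever need to script this fix, the alias target can be pulled straight from the asmcmd ls -l output. A minimal sketch, with the sample output line above hard-coded (the parsing assumption, that the file name is the last field, is mine):

```shell
#!/bin/sh
# Hard-coded sample line from 'asmcmd ls -l'; in real use this would come
# from: asmcmd ls -l +DATA/CDBRAC/PARAMETERFILE/
LINE="PARAMETERFILE  UNPROT  COARSE   APR 29 13:00:00  Y    spfile.271.909835045"
# The system-generated file name is the last whitespace-separated field.
FILE=$(echo "$LINE" | awk '{print $NF}')
# Echo the mkalias command so it can be reviewed before feeding it to asmcmd.
echo "mkalias +DATA/cdbrac/PARAMETERFILE/$FILE spfilecdbrac.ora"
```

Run the echoed command inside asmcmd once you have confirmed it points at the right file.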

[oracle@rac1 ~]$ srvctl start database -d cdbrac
[oracle@rac1 ~]$

Worked like a charm !!!

Thursday, April 28, 2016

PRVF-0002 : could not retrieve local node name

[oracle@racdr1 grid]$ ./runcluvfy.sh  stage -pre crsinst -n racdr1 -verbose

WARNING:
Could not access or create trace file path "/tmp/bootstrap/cv/log". Trace information could not be collected

PRVF-0002 : could not retrieve local node name
[oracle@racdr1 grid]$ hostname
racdr1.localdomain
[oracle@racdr1 grid]$

I was getting the above error while running cluster verification for Oracle 12c Grid Infrastructure.

To diagnose it, I ran:

[oracle@racdr1 grid]$ hostname --long
hostname: Unknown host
[oracle@racdr1 grid]$
[oracle@racdr1 grid]$
[oracle@racdr1 grid]$ hostname --long --verbose
gethostname()=`racdr1.localdomain'
Resolving `racdr1.localdomain' ...
hostname: Unknown host
[oracle@racdr1 grid]$

It should not say "Unknown host".

Let's check the /etc/hosts file:

[root@racdr1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@racdr1 ~]#
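Note that the file has no entry at all for racdr1. A tiny sanity check for this can be scripted; the sketch below runs against a temp copy mirroring the file above (the temp path and demo contents are stand-ins; point HOSTS at /etc/hosts for real use):

```shell
#!/bin/sh
# Check that a hosts file contains an entry for a given name.
HOSTS=/tmp/hosts.demo          # stand-in; use /etc/hosts for real
NAME=racdr1.localdomain
# Demo copy mirroring the minimal default file shown above:
printf '127.0.0.1 localhost\n::1 localhost\n' > "$HOSTS"
if grep -qw "$NAME" "$HOSTS"; then
  echo "$NAME found in $HOSTS"
else
  echo "$NAME MISSING from $HOSTS"
fi
```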


It turned out that my DNS settings were wrong. Once you have corrected /etc/resolv.conf, make the file immutable so that the change does not get overwritten after a reboot:

[root@racdr1 ~]# chattr -i /etc/resolv.conf
update the DNS details, and then
[root@racdr1 ~]# chattr +i /etc/resolv.conf


[oracle@racdr1 grid]$ hostname --long
racdr1.localdomain
[oracle@racdr1 grid]$
[oracle@racdr1 grid]$ hostname --long --verbose
gethostname()=`racdr1.localdomain'
Resolving `racdr1.localdomain' ...
Result: h_name=`racdr1.localdomain'
Result: h_addr_list=`192.168.56.111'
racdr1.localdomain
[oracle@racdr1 grid]$

Hurray !!!

GUI not working on Red Hat Linux 6.7 and 6.8 after yum update on a VirtualBox machine

I recently ran yum update on virtual machines running Red Hat Linux 6.7 and 6.8 under VirtualBox.

The outcome was as below: the GUI was blank.


It looks like an issue in X. The solution is simple: move the xorg.conf file aside as below and restart the virtual machine.

[root@racdr1 ~]# ls -lah /etc/X11/xorg.conf
-rw-r--r--. 1 root root 534 Apr 22 16:07 /etc/X11/xorg.conf
[root@racdr1 ~]#
[root@racdr1 ~]#
[root@racdr1 ~]#
[root@racdr1 ~]# cat /etc/X11/xorg.conf
# VirtualBox generated configuration file
# based on /etc/X11/xorg.conf.

Section "Monitor"
  Identifier   "Monitor[0]"
  ModelName    "VirtualBox Virtual Output"
  VendorName   "Oracle Corporation"
EndSection

Section "Device"
  BoardName    "VirtualBox Graphics"
  Driver       "vboxvideo"
  Identifier   "Device[0]"
  VendorName   "Oracle Corporation"
EndSection

Section "Screen"
  SubSection "Display"
    Depth      24
  EndSubSection
  Device       "Device[0]"
  Identifier   "Screen[0]"
  Monitor      "Monitor[0]"
EndSection
[root@racdr1 ~]#
[root@racdr1 ~]#



[root@racdr1 ~]# mv /etc/X11/xorg.conf /etc/X11/xorg.conf.old
[root@racdr1 ~]# shutdown -r now



Friday, April 22, 2016

Oracle remap_tablespace

There are times in a DBA's life when we have to perform an export and import of some data. During the import we may want to move the data to a new tablespace; this can be done with the REMAP_TABLESPACE parameter of impdp.

Example:

Suppose there is a database DB1 which has a table ITEM owned by USER1, but the tablespace for this table is MYUSERS. That means the location where this table is physically stored is the MYUSERS tablespace.

Database: DB1
TableName: ITEM
Owner of table: USER1
Tablespace of table: MYUSERS

Now you want to perform the export of that table and import into another database NEWDB under MYNEWUSERS tablespace.

This can be done as below:

  • Perform the export as usual.
  • While doing the import, include the parameter REMAP_TABLESPACE=MYUSERS:MYNEWUSERS


The outcome will be:
Table name: ITEM
Owner: USER1
Tablespace under NEWDB: MYNEWUSERS
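A sketch of the two Data Pump commands for this example. The directory object, dump file name and connect strings are hypothetical placeholders, and the script only echoes the commands so they can be reviewed before running:

```shell
#!/bin/sh
# Build the expdp/impdp commands for the ITEM example above.
# DATA_PUMP_DIR and item.dmp are assumed names -- adjust to your environment.
SRC_TS=MYUSERS
DST_TS=MYNEWUSERS
echo "expdp USER1@DB1 tables=USER1.ITEM directory=DATA_PUMP_DIR dumpfile=item.dmp"
echo "impdp USER1@NEWDB tables=USER1.ITEM directory=DATA_PUMP_DIR dumpfile=item.dmp remap_tablespace=${SRC_TS}:${DST_TS}"
```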





Thursday, April 21, 2016

Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes

While installing Oracle 12c Grid Infrastructure on Red Hat I was getting a warning during the installation.

Please note that I am using Oracle VirtualBox.

The details are:
VirtualBox: 5.0.18r106667
Oracle: 12.1.0.2
Linux: Red Hat 6.3 x86_64

Please note that the installer was not showing the candidate disks properly; I had to add the discovery path manually.





Device Checks for ASM - This is a prerequisite check to verify that the specified devices meet the requirements for ASM.
  Check Failed on Nodes: [rac2,  rac1]
Verification result of failed node: rac2
Expected Value
 : cvuqdisk-1.0.9-1
Actual Value
 : cvuqdisk-1.0.9-1
 Details:
 -
Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]  - Cause: Cause Of Problem Not Available  - Action: User Action Not Available
 -
Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]  - Cause: Cause Of Problem Not Available  - Action: User Action Not Available
 -
Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]  - Cause: Cause Of Problem Not Available  - Action: User Action Not Available
 -
PRVF-9802 : Attempt to get 'udev' information from node "rac2" failed No UDEV rule found for device(s) specified  - Cause:  An attempt to read the ''udev'' permission or rule file failed, or the permission or rule file did not contain any rules for the specified device or devices.  - Action:  Make sure that the ''udev'' permission or rule directory is created, the ''udev'' permission or rule file is available and accessible by the user running the check and that the permission or rule file contains the correct ''udev'' rule for the device or devices being checked.
Verification result of failed node: rac1
Expected Value
 : cvuqdisk-1.0.9-1
Actual Value
 : cvuqdisk-1.0.9-1
 Details:
 -
Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG3 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]  - Cause: Cause Of Problem Not Available  - Action: User Action Not Available
 -
Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG2 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]  - Cause: Cause Of Problem Not Available  - Action: User Action Not Available
 -
Cannot verify the shared state for device /dev/oracleasm/disks/SYSCFG1 due to Universally Unique Identifiers (UUIDs) not being found, or different values being found, for this device across nodes: [rac1, rac2]  - Cause: Cause Of Problem Not Available  - Action: User Action Not Available
 -
PRVF-9802 : Attempt to get 'udev' information from node "rac1" failed No UDEV rule found for device(s) specified  - Cause:  An attempt to read the ''udev'' permission or rule file failed, or the permission or rule file did not contain any rules for the specified device or devices.  - Action:  Make sure that the ''udev'' permission or rule directory is created, the ''udev'' permission or rule file is available and accessible by the user running the check and that the permission or rule file contains the correct ''udev'' rule for the device or devices being checked.


Why was it occurring?
For some reason oracleasm was not seeing the disks consistently across the nodes.

Fix: to resolve it I did the following:

yum update -y

and the Linux version is now "Red Hat Enterprise Linux Server release 6.7 (Santiago)".

I made sure that the packages below were installed across all nodes for oracleasm:

kmod-oracleasm-2.0.8-6.el6_7.x86_64
oracleasmlib-2.0.4-1.el6.x86_64.rpm (downloaded from Oracle's website)
oracleasm-support-2.1.8-1.el6.x86_64.rpm (downloaded from Oracle's website)
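To confirm all three components quickly on each node, something like the sketch below can help. The hard-coded INSTALLED list stands in for real `rpm -qa` output:

```shell
#!/bin/sh
# Check a package list for the three ASMLib components named above.
# INSTALLED is a hard-coded sample; in real use feed it from: rpm -qa
INSTALLED="kmod-oracleasm-2.0.8-6.el6_7.x86_64
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64"
for pkg in kmod-oracleasm oracleasmlib oracleasm-support; do
  # Report each component as present or missing.
  echo "$INSTALLED" | grep -q "^$pkg" && echo "OK: $pkg" || echo "MISSING: $pkg"
done
```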

In VirtualBox, all the ASM disks are SATA and marked shareable.

After these steps the GI installer was able to see the candidate disks.


Below is how the VM is configured in VirtualBox:





Wednesday, April 20, 2016

Simple steps to re-use a disk for ASM using oracleasm

Log in as root and run:

dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100

oracleasm deletedisk ASM1

oracleasm createdisk ASM1 /dev/sdb1

oracleasm scandisks

oracleasm listdisks
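The same steps as a dry-run script. DISK and DEV are examples, and each command is only echoed; drop the leading echo (and run as root) to execute for real:

```shell
#!/bin/sh
# Dry run of the disk re-use steps above. Destructive commands (dd wipes
# the partition header) are echoed rather than executed.
DISK=ASM1            # example ASM disk label
DEV=/dev/sdb1        # example device -- double-check before running for real
echo dd if=/dev/zero of=$DEV bs=1024 count=100
echo oracleasm deletedisk $DISK
echo oracleasm createdisk $DISK $DEV
echo oracleasm scandisks
echo oracleasm listdisks
```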



PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "XXXXXX" does not match with cluster nodes

Checking ASMLib configuration.
  Node Name                             Status                
  ------------------------------------  ------------------------
  rac1                                  (failed) ASMLib configuration is incorrect.
  rac2                                  (failed) ASMLib configuration is incorrect.

ERROR:
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "rac1" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "rac1" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "rac1" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_UID" on the node "rac2" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_GID" on the node "rac2" does not match with cluster nodes
PRVG-10122 : ASMLib configuration value set to configuration parameter "ORACLEASM_ENABLED" on the node "rac2" does not match with cluster nodes
Result: Check for ASMLib configuration failed.


There are a few things we can do. First, run oracleasm configure on both nodes and compare what you get.

In my case I was getting the same output on both nodes, so not much help!

[root@rac1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
[root@rac1 ~]#

[root@rac2 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
[root@rac2 ~]#


Next, cat the actual configuration file, /etc/sysconfig/oracleasm:

[root@rac1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=false

# ORACLEASM_UID: Default UID owning the /dev/oracleasm mount point.
ORACLEASM_UID=

# ORACLEASM_GID: Default GID owning the /dev/oracleasm mount point.
ORACLEASM_GID=

# ORACLEASM_SCANBOOT: 'true' means fix disk perms on boot
ORACLEASM_SCANBOOT=true

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block
# size reported by the underlying disk instead of the physical. The
# default is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false
[root@rac1 ~]#

[root@rac2 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@rac2 ~]#



It looks like node rac1 has a problem, as its file is missing the values.
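A quick way to spot this kind of drift is to diff the two nodes' files directly. A sketch, with inline stand-ins for the two files (in real use, copy /etc/sysconfig/oracleasm from each node, e.g. with scp, and diff those copies):

```shell
#!/bin/sh
# Compare two nodes' oracleasm configuration files.
# The files below are inline stand-ins mimicking the rac1/rac2 mismatch above.
printf 'ORACLEASM_ENABLED=false\nORACLEASM_UID=\n'      > /tmp/asm_rac1.cfg
printf 'ORACLEASM_ENABLED=true\nORACLEASM_UID=oracle\n' > /tmp/asm_rac2.cfg
if diff /tmp/asm_rac1.cfg /tmp/asm_rac2.cfg > /dev/null; then
  echo "configs match"
else
  echo "configs DIFFER -- expect PRVG-10122 from cluvfy"
fi
```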

List the oracleasm files under /etc/sysconfig:

[root@rac1 ~]# cd /etc/sysconfig/
[root@rac1 sysconfig]#
[root@rac1 sysconfig]# ls -lah oracle*
-rw-r--r--. 1 root root 772 Feb  9  2013 oracleasm
-rw-r--r--. 1 root root 973 Apr 20 10:36 oracleasm-_dev_oracleasm
lrwxrwxrwx. 1 root root  24 Apr 20 09:29 oracleasm.rpmsave -> oracleasm-_dev_oracleasm
[root@rac1 sysconfig]#



[root@rac2 ~]#  cd /etc/sysconfig/
[root@rac2 sysconfig]#
[root@rac2 sysconfig]# ls -lah oracle*
lrwxrwxrwx. 1 root root  24 Apr 20 10:40 oracleasm -> oracleasm-_dev_oracleasm
-rw-r--r--. 1 root root 973 Apr 20 10:41 oracleasm-_dev_oracleasm
[root@rac2 sysconfig]#


Perform the following on rac1 to point oracleasm at the populated configuration file:

[root@rac1 sysconfig]# mv oracleasm oracleasm.old
[root@rac1 sysconfig]# ln -s oracleasm-_dev_oracleasm oracleasm

[root@rac1 sysconfig]# ls -lah oracle*
lrwxrwxrwx. 1 root root  24 Apr 20 12:59 oracleasm -> oracleasm-_dev_oracleasm
-rw-r--r--. 1 root root 973 Apr 20 10:36 oracleasm-_dev_oracleasm
-rw-r--r--. 1 root root 772 Feb  9  2013 oracleasm.old
lrwxrwxrwx. 1 root root  24 Apr 20 09:29 oracleasm.rpmsave -> oracleasm-_dev_oracleasm
[root@rac1 sysconfig]#


[root@rac1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@rac1 ~]#


We can also remove oracleasm.old and oracleasm.rpmsave.

The issue is fixed now.

Tuesday, April 19, 2016

PRCD-1084 : Failed to start service tstdb_ha

I received this error while starting a database service:
[oracle@host01 (tstdbp1) ~]$ srvctl start service -d tstdbp -s tstdb_ha
PRCD-1084 : Failed to start service tstdb_ha
PRCR-1079 : Failed to start resource ora.tstdbp.tstdb_ha.svc
CRS-2800: Cannot start resource 'ora.tstdbp.db' as it is already in the INTERMEDIATE state on server 'host02'
CRS-2632: There are no more servers to try to place resource 'ora.tstdbp.tstdb_ha.svc' on that would satisfy its placement policy
CRS-2800: Cannot start resource 'ora.tstdbp.db' as it is already in the INTERMEDIATE state on server 'host01'
[oracle@host01 (tstdbp1) ~]$
When I had a look at the clusterware resources I could see what was happening:
[grid@host01 ~]$ crsctl stat res -t
ora.tstdbp.db
      1        ONLINE  INTERMEDIATE host01              Stuck Archiver,STABLE
      2        ONLINE  INTERMEDIATE host02              Stuck Archiver,STABLE
ora.tstdbp.tstdb_ha.svc
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  OFFLINE                               STABLE
It looks like the archiver is stuck, and the alert log explains why:

ORA-19815: WARNING: db_recovery_file_dest_size of 21474836480 bytes is 100.00% used, and has 0 remaining bytes available.

I added a bit of space to the recovery area and then ran a full backup to clear the archive logs.
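For context, the 21474836480 bytes in the message is a 20 GB recovery area. The percent-used figure can be sanity-checked with plain shell arithmetic; the byte values here come from the ORA-19815 message (in real use you would read them from v$recovery_file_dest):

```shell
#!/bin/sh
# LIMIT is db_recovery_file_dest_size; USED is the space consumed.
LIMIT=21474836480     # 20 GB, from the message above
USED=21474836480      # 100.00% used, per ORA-19815
echo "FRA used: $(( USED * 100 / LIMIT ))%"
```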

You can run SHOW ALL in RMAN to see which retention and deletion settings will work for you.


Thursday, April 14, 2016

MRP0: Background Media Recovery terminated with error 742

After starting the DR database using srvctl start database, the alert log showed that background media recovery terminated with error 742.

What I have found is that this is more of a warning than an actual error.

This will happen if you do the following when recovering a standby database.

With the command "ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;" we are telling Oracle to cancel standby recovery. So when you later start the DR database using srvctl, you will get that warning, which means real-time apply is not happening.


SQL> startup nomount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.
Total System Global Area 5344731136 bytes
Fixed Size                  2262656 bytes
Variable Size            4412410240 bytes
Database Buffers          922746880 bytes
Redo Buffers                7311360 bytes
SQL> alter database mount standby database;
Database altered.
SQL> alter database open read only;
Database altered.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
[oracle@testdb01 (db1tesd) bin]$
[oracle@testdb01 (db1tesd) bin]$
[oracle@testdb01 (db1tesd) bin]$ srvctl start database -d db1tesd
[oracle@testdb01 (db1tesd) bin]$ 


To fix this and start real-time apply, issue the following command, then test with srvctl stop and start on the DR.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
SQL> 

ORA-10458: standby database requires recovery

For some reason, when I was trying to start my DR instance, I was getting an error saying the database requires recovery. Something like this:

oracle@testdb01 (db1tesd) bin]$ srvctl start database -d db1tesd
PRCR-1079 : Failed to start resource ora.db1tesd.db
CRS-5017: The resource action "ora.db1tesd.db start" encountered the following error:
ORA-10458: standby database requires recovery
ORA-01196: file 1 is inconsistent due to a failed media recovery session
ORA-01110: data file 1: '+DATA/db1tesd/datafile/system.284.881230789'
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/testdb01/agent/ohasd/oraagent_grid//oraagent_grid.log".
CRS-2674: Start of 'ora.db1tesd.db' on 'testdb01' failed
[oracle@testdb01 (db1tesd) bin]$ 

To fix the issue, what I did was:

  • start the standby in NOMOUNT
  • mount as standby
  • open the database read only
  • recover managed standby database using current logfile


Actual steps:
SQL> startup nomount
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
ORACLE instance started.
Total System Global Area 5344731136 bytes
Fixed Size                  2262656 bytes
Variable Size            4412410240 bytes
Database Buffers          922746880 bytes
Redo Buffers                7311360 bytes
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> alter database mount standby database;
Database altered.
SQL> alter database open read only;
Database altered.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
Database altered.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
[oracle@testdb01 (db1tesd) bin]$
[oracle@testdb01 (db1tesd) bin]$
[oracle@testdb01 (db1tesd) bin]$ srvctl start database -d db1tesd
[oracle@testdb01 (db1tesd) bin]$ 
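The interactive steps above can also be driven as one script. A sketch: it pipes the SQL to cat so it is safe to run and review anywhere, and replacing cat with sqlplus / as sysdba on the standby host would actually execute it:

```shell
#!/bin/sh
# The standby recovery steps from above as a single SQL*Plus script.
# 'cat' is a placeholder; use 'sqlplus / as sysdba' to run for real.
cat <<'SQL'
startup nomount
alter database mount standby database;
alter database open read only;
alter database recover managed standby database using current logfile disconnect;
SQL
```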

Also, make sure that log shipping is enabled on the primary by issuing the following there:


alter system set log_archive_dest_state_2=enable scope=both sid='*';

and the following on the DR:


ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;