Monday, June 22, 2020

Restore Single Table from MySQL Dump


Summary

There are times when you need to restore a single table, but all you have is a MySQL dump of the entire database.  We want to restore that one table without having to restore the entire database.  Here we will cover a way to do that.

Process

1.       Get your full MySQL dump into a location you can work with, one with enough free space to uncompress and manipulate the file without filling up the filesystem.
2.       Make sure the MySQL dump is not zipped/compressed; if it is, uncompress it first, as we need it in plain-text format.  For example:
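
gunzip MySQLDB_06-03-2020.sql.gz
** The .gz filename here is an assumption based on the dump file used in the later examples; adjust for your compression format.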

3.       Determine the table(s) you want to restore that exist in the MySQL dump backup you have.  Keep in mind that tables can have foreign key dependencies, and when restoring tables to a prior point in time you may also need to restore the tables they depend on.  A quick way to list the tables present in the dump is shown below.
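
To see which tables the dump contains (dump filename carried over from the examples in the next step):
grep -n '^CREATE TABLE' MySQLDB_06-03-2020.sql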

4.       First, let's extract the table(s) we want to restore from the full dump into files containing just those tables.  On a Linux/UNIX system we can use sed to accomplish this.

Example:
sed -n -e '/CREATE TABLE.*`{table_name}`/,/CREATE TABLE/p' {dumpfile}

For example, we want the cr_pads and projects tables from the MySQL dump:
sed -n -e '/CREATE TABLE.*`cr_pads`/,/CREATE TABLE/p' MySQLDB_06-03-2020.sql > cr_pads_06_03_2020.dump

sed -n -e '/CREATE TABLE.*`projects`/,/CREATE TABLE/p' MySQLDB_06-03-2020.sql > projects_06_03_2020.dump


5.       Now that we have the tables we want to restore in separate dump files, we need to check the end of each file for the DROP/CREATE statements of the next table in the dump, since our sed command extracted everything from the CREATE TABLE of our table up to and including the next CREATE TABLE in the file.

tail -100 {dumpfile}

** Find out how many lines at the end of the file we do not need; the DROP TABLE and CREATE TABLE for the next table definitely need to be removed.  For example:
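tail -100 projects_06_03_2020.dump | grep -n -E 'DROP TABLE|CREATE TABLE'
** The line numbers grep reports are relative to the 100-line tail, which makes it easy to count how many trailing lines to drop.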


6.       If the dump file contains the DROP TABLE and CREATE TABLE of the next table, we need to remove those lines from the dump file.  We can do this with vi, or use the head command to remove the last N lines.

Example:
head --lines=-{number of lines to remove} {dumpfile} > new_dumpfile

Real Example:
head --lines=-5 projects_06_03_2020.dump > projects_06_03_2020.sql
head --lines=-5 cr_pads_06_03_2020.dump > cr_pads_06_03_2020.sql
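
As a sanity check, the end of each trimmed file should no longer show the DROP/CREATE statements for the next table:
tail -5 projects_06_03_2020.sql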


7.       Now that we have the table(s) extracted, we are ready to replace our table, or to load the table into a temporary database to work with.

Example loading into a new temporary database:
mysql -u root -p
create database mytemp;
use mytemp
source projects_06_03_2020.sql
source cr_pads_06_03_2020.sql
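
Once the copies are loaded into the temporary database, you can pull just the rows you need back into the live table.  A sketch (the column layout and the WHERE clause here are assumptions; adjust to your schema):
insert into mydatabase.projects select * from mytemp.projects where id = 42;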


To replace the table in an existing database instead, keep in mind the existing state of the table will be destroyed:
mysql -u root -p
use mydatabase
source projects_06_03_2020.sql
source cr_pads_06_03_2020.sql
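
Equivalently, the load can be run non-interactively from the shell:
mysql -u root -p mydatabase < projects_06_03_2020.sql
mysql -u root -p mydatabase < cr_pads_06_03_2020.sql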


8.       You now have the copy of the table(s) back in the database, current as of the last MySQL dump.

Tuesday, May 5, 2020

Change Oracle User/Group ID (UID/GID) for ORACLE_HOME

** NOTE: This procedure has been tested on an Oracle Linux server and has been shown to allow changing the UID/GID for the HOME.
It is meant to help you correct a UID/GID mismatch without having to reinstall.  Keep in mind that while it has been
tested, used, and has worked, it is not assured to work for you; it is shown here in case it can help.

1. Copy the permission.pl script found in Appendix A of this document into root's home directory on each database node in the cluster.

scp permission.pl root@nodename:~


2. Log into each of the database nodes as root

ssh root@nodename


3. On each database node, disable CRS and shut down all database and Grid Infrastructure components, as well as any processes running as oracle.

. oraenv
+ASM1
crsctl disable crs
crsctl stop crs
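
To confirm nothing is still running as oracle before proceeding (the process-name patterns here are illustrative):
ps -ef | grep -E 'ora_|asm_' | grep -v grep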


4. On each node in the cluster, capture the /u01 permissions and ownership prior to the UID/GID change by running the permission.pl script as root against the base location of the Oracle installations; in our example we will use /u01.
** Note: Your environment may not use /u01 and may have installs in multiple places; in those cases run it for each location.

perl permission.pl /u01
** See script in Appendix A of this document

The script will generate a restore script whose name ends in .cmd
example: restore-perm-Fri-Mar-13-20-27-33-2020.cmd


5. Set execute permission on this file

Example: chmod 770 restore-perm-Fri-Mar-13-20-27-33-2020.cmd


6. Change the oracle owner UID and/or group GID in /etc/passwd and/or /etc/group


7. Check that the change has taken effect; for example:
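
id oracle
uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba)
** The UID/GID values shown are hypothetical; verify they match the new values you set.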


8. Run the script created by the permission.pl execution in the prior step; this will restore the permissions and ownership of the Oracle binaries.

Example: ./restore-perm-Fri-Mar-13-20-27-33-2020.cmd


9. After the /u01 permissions are restored, deal with ownership and permissions outside /u01.
** This example uses 18.8.0.0 binaries and binary locations; you will need to adjust for your database and grid home locations.

cd /home
chown -R oracle:oinstall oracle
cd /u01/app/oracle/Oeda/Software/Patches
chown -R oracle:oinstall *
cd /u01/app/oracle/product/18.0.0.0/dbhome_1/.patch_storage
chown -R oracle:oinstall *
cd /u01/app/18.0.0.0/grid/.patch_storage
chown -R oracle:oinstall *
cd /var/tmp
chown -R oracle:oinstall GridSetupActions*
chown -R oracle:oinstall InstallActions*
chown -R oracle:oinstall hsperfdata_oracle
chown -R oracle:oinstall CVU_18.0.0.0.0_resource
chown -R oracle:oinstall CVU_18.0.0.0.0_oracle
cd /etc
chown oracle:oinstall oratab
chgrp oinstall oracle
cd /etc/oracle/scls_scr/agoprdd4dbadm*/oracle/ # Adjust for the node name of each node or wild card for Rack Group
chown oracle cssfatal
cd /etc/oracle/scls_scr/agoprdd4dbadm*/ # Adjust for the node name of each node or wild card for Rack Group
chown oracle oracle
cd /etc/oracle/scls_scr/agoprdd4dbadm0*/root # Adjust for the node name of each node or wild card for Rack Group
chgrp oinstall ohasdrun
chgrp oinstall ohasdstr
cd /etc/oracle/scls_scr/
chgrp oinstall agoprdd4dbadm* # Adjust for the node name of each node or wild card for Rack Group
cd /etc/oracle/
chgrp oinstall scls_scr
chgrp oinstall oprocd
chgrp oinstall olr.loc
chgrp oinstall ocr.loc
chgrp oinstall lastgasp
chgrp oinstall maps
chgrp oinstall setasmgid
cd /etc/oracle/oprocd
chgrp oinstall *
cd /etc/oracle/maps
chown oracle:oinstall *
cd /var/tmp
chgrp oinstall .oracle
cd /var/tmp/.oracle
rm -f *


10. Now that permissions are reset, we can re-enable CRS (it was disabled in step 3), restart the cluster, and check

. oraenv
+ASM1
crsctl enable crs
crsctl start crs
ps -ef | grep grid
ps -ef | grep smon
crsctl check crs


Appendix A: permission.pl
** Script was obtained from Oracle.

#!/usr/bin/perl -w
#
# Captures file permissions and the owner of the files
# usage : perm1.pl <directory>
#
# MODIFIED
#     uaswatha   12th March 2018  address filename with spaces (request from customer)

use strict;
use warnings;
use File::Find;
use POSIX();

my (@dir) = @ARGV;
my $linecount=0 ;

#print @ARGV, $#ARGV;

if ($#ARGV < 0) {
   print "\n\nOpps....Invalid Syntax !!!!\n" ;
   print "Usage   : ./perm1.pl \n\n" ;
   print "Example : ./perm1.pl /home/oralce\n\n" ;
   exit ;
}
my $logdir=$dir[0] ;
#my ($sec, $min, $hr, $day, $mon, $year) = localtime;
##my ($dow,$mon,$date,$hr,$min,$sec,$year) = POSIX::strftime( '%a %b %d %H %M %S %Y', localtime);
my $date = POSIX::strftime( '%a-%b-%d-%H-%M-%S-%Y', localtime);
my $logfile="permission-".$date;
my $cmdfile="restore-perm-".$date.".cmd" ;

open LOGFILE, "> $logfile" or die $! ;
open CMDFILE, "> $cmdfile" or die $! ;
find(\&process_file,@dir);

print "Following log files are generated\n" ;
print "logfile      : ".$logfile. "\n" ;
print "Command file : ".$cmdfile. "\n" ;
print "Linecount : ".$linecount."\n" ;
close (LOGFILE) ;
close (CMDFILE) ;


sub process_file {
    my ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size,$atime,$mtime,$ctime,$blksize,$blocks,$username,$user,$pass,$comment,$home,$shell,$group);
    my %uiduname = () ;
    my %gidgname = () ;
    my $filename = $File::Find::name;


#### Building uid, username hash

    open (PASSWDFILE, '/etc/passwd') ;

    while (<PASSWDFILE>) {
       ($user,$pass,$uid,$gid,$comment,$home,$shell)=split (/:/) ;
       $uiduname{$uid}=$user ;
    }
    close (PASSWDFILE) ;


#### Building gid, groupname hash

    open (GRPFILE, '/etc/group') ;

    while (<GRPFILE>) {
       ($group,$pass,$gid)=split (/:/) ;
       $gidgname{$gid}=$group ;
    }
    close (GRPFILE) ;

    ($dev,$ino,$mode,$nlink,$uid,$gid,$rdev,$size, $atime,$mtime,$ctime,$blksize,$blocks) = stat("$filename");
#    printf  "%o %s %s %s\n", $mode & 07777, $uiduname{$uid}, $gidgname{$gid}, $filename ;
     printf LOGFILE "%o %s %s %s\n", $mode & 07777, $uiduname{$uid}, $gidgname{$gid}, "\"$filename\"" ;
     printf CMDFILE "%s %s%s%s %s\n", "chown ",$uiduname{$uid}, ":", $gidgname{$gid}, "\"$filename\"" ;
     printf CMDFILE "%s %o %s\n", "chmod ",$mode & 07777, "\"$filename\"" ;
        #    printf  "%o %s %s %s\n", $mode & 07777, $uiduname{$uid}, $gidgname{$gid}, \",$filename,\" ;
    $linecount++ ;
}

Update to New 18c Patch Level using New ORACLE_HOME

** Procedure assumes a new ORACLE_HOME is already installed at the updated patch level.

1. Shut down any Golden Gate processes when Golden Gate is in use; for example:
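
A stop sequence mirroring the restart block in step 10 (the /ggs location is carried over from that step):
cd /ggs
./ggsci
stop *
stop jagent
stop mgr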


2. For each database that is to go to the new 18c patch level and new ORACLE_HOME:
. oraenv

srvctl stop database -d <dbname>


3. Modify /etc/oratab so the database instances point to the new ORACLE_HOME.
Example oratab entry:
<dbname>:/u01/app/oracle/product/18.0.0.0/dbhome_1:N


4. Where glogin.sql is utilized, move it out of the way for the changes.
mv $ORACLE_HOME/sqlplus/admin/glogin.sql $ORACLE_HOME/sqlplus/admin/glogin.sql.save


5. For each database, run datapatch from one node; this will set the database to the new patch level in the new ORACLE_HOME.
   Execute this on the same node where datapatch was executed for the original update.
. oraenv

srvctl stop database -d <dbname>
cd $ORACLE_HOME/OPatch
sqlplus / as sysdba
startup nomount
alter system set cluster_database=false scope=spfile;
shutdown immediate
startup upgrade
exit
./datapatch -verbose
sqlplus / as sysdba
alter system set cluster_database=true scope=spfile;
shutdown immediate
exit


6. Change the ORACLE_HOME in the cluster config for each database:
. oraenv

srvctl modify database -d <dbname> -oraclehome /u01/app/oracle/product/18.0.0.0/dbhome_1


7. Restart the database with the new home via the cluster config:
srvctl start database -d <dbname>
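
To verify the new patch level was applied, a generic check of the SQL patch registry (not specific to this procedure):
sqlplus / as sysdba
select patch_id, action, status from dba_registry_sqlpatch order by action_time;
exit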


8. Where glogin.sql is used, put it back in place.
mv $ORACLE_HOME/sqlplus/admin/glogin.sql.save $ORACLE_HOME/sqlplus/admin/glogin.sql


9. Update the Golden Gate prm files for the new ORACLE_HOME; see the sketch below.
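
A sketch of the parameter-file change, assuming your extract/replicat .prm files set the home via SETENV (adjust the path to your environment):
SETENV (ORACLE_HOME = "/u01/app/oracle/product/18.0.0.0/dbhome_1")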


10. Restart Golden Gate Processes
cd /ggs
./ggsci
start mgr
start jagent
start *

Rollback 18c Patch to Prior Patch Level with Change of ORACLE_HOME

** Procedure assumes the ORACLE_HOME you are rolling back to is already in place at the patch level you want to roll back to.

1. Stop any Golden Gate processes when Golden Gate is in use for the affected database(s).


2. For each database going back to the prior 18c patch level with a different ORACLE_HOME:
. oraenv

srvctl stop database -d <dbname>


3. Modify /etc/oratab so the database instances point to the prior ORACLE_HOME.
Example oratab entry:
<dbname>:/u01/app/oracle/product/18.0.0.0/dbhome_2:N


4. Where glogin.sql is used, move it out of the way for the changes; in this case this is for the 18c home.
mv $ORACLE_HOME/sqlplus/admin/glogin.sql $ORACLE_HOME/sqlplus/admin/glogin.sql.save



5. For each database, run datapatch from one node; this will set the database back to the prior 18c patch level.
   Execute this on the same node where datapatch was executed for the original update.
. oraenv

srvctl stop database -d <dbname>
cd $ORACLE_HOME/OPatch
sqlplus / as sysdba
startup nomount
alter system set cluster_database=false scope=spfile;
shutdown immediate
startup upgrade
exit
./datapatch -verbose
sqlplus / as sysdba
alter system set cluster_database=true scope=spfile;
shutdown immediate
exit


6. Change the ORACLE_HOME in the cluster config for each database:
. oraenv

srvctl modify database -d <dbname> -oraclehome /u01/app/oracle/product/18.0.0.0/dbhome_2


7. Restart the database with the prior home via the cluster config:
srvctl start database -d <dbname>


8. Where glogin.sql is used, put it back in place.
mv $ORACLE_HOME/sqlplus/admin/glogin.sql.save $ORACLE_HOME/sqlplus/admin/glogin.sql


9. Update the Golden Gate prm files for the prior ORACLE_HOME (see the SETENV sketch in the previous procedure).


10. Restart Golden Gate Processes
cd /ggs
./ggsci
start mgr
start jagent
start *

Fix BigFile Temporary Tablespace Tempfile Location for Physical Standby

Summary
A physical standby had been created, and the DBA who built it missed remapping the
temporary tablespaces; the standby uses different disk groups than the source database
from which it was created.

The challenge is that the temporary tablespaces are BIGFILE temporary tablespaces,
so they needed to be created in the proper location, because the disk group in the
location currently recorded for them does not exist on the standby.


1. Get the standby database into a mount state and ensure managed recovery is stopped.

** If the database is open read only this will require a restart; if not, you just need to stop managed recovery.

Example:
shutdown immediate ;
startup mount ;
alter database recover managed standby database cancel ;


2. Identify the temp files you need to rename/relocate.

select file#, name from v$tempfile ;

     FILE# NAME
---------- ----------------------------------------------------------
         1 +DATA_HC/MYDBPRD/TEMPFILE/temp_user.1059.1038405639
         4 +RECOHC2
         5 +RECOHC2
         6 +RECOHC2


3. Using asmcmd, ensure the new location for the temp files exists.

** In this example we want to put the tempfiles here: +RECO_HC/MYDBPRD/TEMPFILE
** Note that the disk group name listed for the temp files in the prior step does not match the disk groups that exist on the standby

Example:
. oraenv
+ASM3
asmcmd
ASMCMD> ls -l
State    Type    Rebal  Name
MOUNTED  HIGH    N      DATA_HC/
MOUNTED  NORMAL  N      DBFS_DG/
MOUNTED  HIGH    N      RECO_HC/
ASMCMD> cd RECO_HC
ASMCMD> ls -l
Type  Redund  Striped  Time  Sys  Name
                             Y    MYDBPRD/
                             Y    DBM01/
ASMCMD> cd MYDBPRD
ASMCMD> ls -l
Type             Redund  Striped  Time             Sys  Name
                                                   Y    ARCHIVELOG/
                                                   Y    CONTROLFILE/
                                                   Y    DATAGUARDCONFIG/
                                                   Y    ONLINELOG/
CONTROLFILE      HIGH    FINE     APR 27 19:00:00  N    control02.ctl => +RECO_HC/MYDBPRD/CONTROLFILE/current.10755.1037853075
DATAGUARDCONFIG  HIGH    COARSE   APR 29 12:00:00  N    dr2mydbprd.dat => +RECO_HC/MYDBPRD/DATAGUARDCONFIG/mydbprd.9816.1037952543
ASMCMD> mkdir TEMPFILE
ASMCMD> ls -l
Type             Redund  Striped  Time             Sys  Name
                                                   Y    ARCHIVELOG/
                                                   Y    CONTROLFILE/
                                                   Y    DATAGUARDCONFIG/
                                                   Y    ONLINELOG/
                                                   N    TEMPFILE/
CONTROLFILE      HIGH    FINE     APR 27 19:00:00  N    control02.ctl => +RECO_HC/MYDBPRD/CONTROLFILE/current.10755.1037853075
DATAGUARDCONFIG  HIGH    COARSE   APR 29 12:00:00  N    dr2mydbprd.dat => +RECO_HC/MYDBPRD/DATAGUARDCONFIG/mydbprd.9816.1037952543


4. Using RMAN, rename the tempfiles to their new locations.

Example:
rman target=/
run {
set newname for tempfile 4 to '+RECO_HC/MYDBPRD/TEMPFILE/TEMP_USER_01.dbf';
set newname for tempfile 5 to '+RECO_HC/MYDBPRD/TEMPFILE/TEMP_REPORT1_01.dbf';
set newname for tempfile 6 to '+RECO_HC/MYDBPRD/TEMPFILE/TEMPMTS1_01.dbf';
switch tempfile all;
}

Example Output:
RMAN> run {
set newname for tempfile 4 to '+RECO_HC/MYDBPRD/TEMPFILE/TEMP_USER_01.dbf';
set newname for tempfile 5 to '+RECO_HC/MYDBPRD/TEMPFILE/TEMP_REPORT1_01.dbf';
set newname for tempfile 6 to '+RECO_HC/MYDBPRD/TEMPFILE/TEMPMTS1_01.dbf';
switch tempfile all;
}

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

using target database control file instead of recovery catalog
renamed tempfile 4 to +RECO_HC/MYDBPRD/TEMPFILE/TEMP_USER_01.dbf in control file
renamed tempfile 5 to +RECO_HC/MYDBPRD/TEMPFILE/TEMP_REPORT1_01.dbf in control file
renamed tempfile 6 to +RECO_HC/MYDBPRD/TEMPFILE/TEMPMTS1_01.dbf in control file


5. Open the physical standby database read only and restart managed recovery.

sqlplus / as sysdba
alter database open read only ;
alter database recover managed standby database disconnect ;


6. Check the alert log of the standby database to ensure the temp files were created properly.

cd /u01/app/oracle/diag/rdbms/mydbprd/mydbprd3/trace
vi alert_mydbprd3.log
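
Rather than paging through the whole log in vi, a quick filter for the tempfile messages (the grep pattern is an assumption; adjust as needed):
grep -i tempfile alert_mydbprd3.log | tail -20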


7. Check the tempfiles from the ASM instance (+ASM3 in this example)

. oraenv
+ASM3
asmcmd
ASMCMD> cd RECO_HC/MYDBPRD/MYDBPRD3/TEMPFILE
ASMCMD> ls -l
Type      Redund  Striped  Time             Sys  Name
TEMPFILE  HIGH    COARSE   MAY 05 09:00:00  Y    TEMPMTS1_RECO.9679.1039599091
TEMPFILE  HIGH    COARSE   MAY 05 09:00:00  Y    TEMP_REPORT1_RECO.7509.1039599087
TEMPFILE  HIGH    COARSE   MAY 05 09:00:00  Y    TEMP_USER_RECO.10759.1039599073
TEMPFILE  HIGH    COARSE   MAY 05 09:00:00  N    temp_report1_01.dbf => +RECO_HC/MYDBPRD/TEMPFILE/TEMP_REPORT1_RECO.7509.1039599087
TEMPFILE  HIGH    COARSE   MAY 05 09:00:00  N    temp_user_01.dbf => +RECO_HC/MYDBPRD/TEMPFILE/TEMP_USER_RECO.10759.1039599073
TEMPFILE  HIGH    COARSE   MAY 05 09:00:00  N    tempmts1_01.dbf => +RECO_HC/MYDBPRD/TEMPFILE/TEMPMTS1_RECO.9679.1039599091