
REDUCING DOWNTIME ON UPGRADES OR MIGRATIONS TO ORACLE DATABASE SERVER

April C Sims, Southern Utah University

INTRODUCTION

This session/paper outlines some of the lesser-known options for upgrading Oracle Server or migrating an existing database to new hardware. Most of the features covered in this session are specific to a particular version or edition of Oracle, but they don't necessarily require an additional license. Oracle's specific recommendations on reducing downtime for each type of migration path will be covered. The paper also introduces an alternate method, the Step-Ordered Approach, which takes the standard method and splits it into smaller, staggered steps for migrating the separate components of Oracle Database. The binary upgrades (several different components) and the database upgrade are different events, which most often should be executed at different times. While most organizations implement safeguards for unplanned downtime, it is actually the planned downtime that uses most of our resources. Reducing both planned and unplanned downtime will be emphasized in this paper/session.

Maximum Availability Architecture (MAA)

MAA is Oracle's set of recommendations, found on the Oracle Technology Network website:
http://www.oracle.com/technetwork/database/features/availability/maa-090890.html
There you will find specifics and case studies for enterprise-wide implementations of Oracle software, and it is one of the best places to start when beginning a disaster recovery plan for reducing downtime. In particular, pay special attention to the Data Guard Best Practices series of white papers. They contain tuning tips suitable for any customer/situation, not just Data Guard and not just Very Large Databases (VLDB).

Step-Ordered Approach

Most often the project, whether it is an upgrade or a migration, consists of several smaller steps that can be implemented in a more step-ordered fashion. Because certain Oracle components are backward compatible, a larger project can be divided into smaller incremental steps, reducing downtime, both planned and unplanned. The recommended order below is different than what you will find in the following document from My Oracle Support (MOS): Complete Checklist for Manual Upgrades to 11gR2 [ID 837570.1]. Oracle's document assumes that all of the steps happen within the same outage period. The step-ordered method also allows several fall-back positions for the larger overall project. This method can be standard operating procedure from now on, as it applies to any upgrade or migration project. As a DBA, it is important to realize that the binary upgrade and the database upgrade are two different events, most often executed at different times. A binary upgrade is the ORACLE_HOME software that is installed, upgraded, and maintained using Oracle-provided tools. A database upgrade is basically updating the data dictionary from one version to another. The following method is different than Oracle's recommendation of accomplishing all of the steps during a single outage window. Breaking up a large task into smaller chunks gives you multiple safe fall-back positions for each shorter outage window. If something in one of the smaller steps doesn't work, back it out, reengineer, and redeploy. In a general sense, Oracle is backward compatible for making the transition from an earlier version to a later one. The following components can be upgraded to 11g Release 2 while still being compatible with earlier versions of Oracle Database:

- Oracle Net Services: LISTENER.ORA, SQLNET.ORA
- Clients (SQL*Net, JDBC, ODBC)
- RMAN Binary, Catalog, and Database
- Grid Control Repository Database
- Grid Control Management Agents
- ASM (Automatic Storage Management) and CRS (Clusterware)
- PL/SQL Toolkit
- Transportable Tablespaces (TTS)

Personal Recommended Order of Migration (change as needed for your situation):

- Install/patch the Oracle software binaries in a new $ORACLE_HOME – this is Oracle's definition of an Out-of-Place Upgrade. See the following document: Important Changes to Oracle Database Patch Sets Starting With 11.2.0.2 [ID 1189783.1]

- Pre-spin the listener in the upper-level-versioned Oracle Home. Recommendation: don't use the automatic registration listener port 1521; see How to Create Multiple Oracle Listeners and Multiple Listener Addresses [ID 232010.1]

- Install/upgrade ODBC, JDBC, SQL*Plus or similar clients. Best information for any issues: Client / Server / Interoperability Support Between Different Oracle Versions [ID 207303.1], http://www.oracle.com/technology/tech/java/sqlj_jdbc/htdocs/jdbc_faq.htm#02_02, and JDBC, JDK and Oracle Database Certification [Note 401934.1]

- Install/upgrade/migrate Cobol, C and precompilers – consider a move to the highly recommended separately installed client (which can be on a different node) instead of using the database $ORACLE_HOME. This creates an environment that can be delegated to other personnel, as well as allowing upgrades or patch installations that aren't as disruptive.

- GC and RMAN repository (not the RMAN binaries – those need to match the database version). The RMAN UPGRADE CATALOG command allows all down-level databases to use an RMAN catalog. There is a bug in the UPGRADE CATALOG command in the 11.2.0.2 RDBMS, bug# 10157249:

RMAN> CONNECT CATALOG *
connected to recovery catalog database
PL/SQL package RMAN.DBMS_RCVCAT version 11.02.00.01 in RCVCAT database is not current
PL/SQL package RMAN.DBMS_RCVMAN version 11.02.00.01 in RCVCAT database is not current
RMAN>
RMAN> **end-of-file**
RMAN> upgrade catalog;
recovery catalog owner is RMAN
enter UPGRADE CATALOG command again to confirm catalog upgrade
RMAN> upgrade catalog;
error creating set_site_key_for_single_site_dbs
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-06004: ORACLE error from recovery catalog database: ORA-00001: unique constraint (RMAN.SITE_TFATT_P) violated

To fix this issue please do the following:
1. Make a backup copy of the $ORACLE_HOME/rdbms/admin/recover.bsq file (the $ORACLE_HOME of the rman executable being used).
2. Edit $ORACLE_HOME/rdbms/admin/recover.bsq and change:

update site_tfatt set site_key = onesite_row.site_key where
tf_key in (select tf_key from df, dbinc
where dbinc.dbinc_key = df.dbinc_key
and dbinc.db_key = onesite_row.db_key);

To read:

update site_tfatt set site_key = onesite_row.site_key where
tf_key in (select tf_key from tf, dbinc
where dbinc.dbinc_key = tf.dbinc_key
and dbinc.db_key = onesite_row.db_key);

The sub-query should look for tf_key in the TF table (instead of the DF table).
3. Connect with rman to the catalog database and run the upgrade again.

- Upgrade the GC repository database and the RMAN catalog database to the new version; that is one of the best ways to gain practice in using the upper-level version of the RDBMS.

- ASM – Oracle has further divided this software into its own OS owner and job definitions. There is the possibility of pre-spinning the listener in this ASM home as well.

- Upgrade/migrate the database.
- Upgrading the Optimizer – mostly specific to 11g (both SQL Plan Management and Optimizer Features), accomplished by changing database initialization parameters.

The utilities that have specific compatibility issues between Oracle versions include both export/import and Data Pump:

- Export the data using the $ORACLE_HOME/bin/exp of the lowest database version involved.
- Import the data with the $ORACLE_HOME/bin/imp of the target database.

EXP/IMP files cannot be interchanged with DATA PUMP files.

See the following support documents for the latest up-to-date information:
Compatibility Matrix for Export And Import Between Different Oracle Versions [Doc ID: 132904.1]
Export/import data pump parameter VERSION – Compatibility of Data Pump Between Different Oracle Versions [Doc ID: 553337.1]

Client compatibility (SQL*Net, JDBC, ODBC)

In a general sense, client compatibility is supported on a minimum release (usually what is known as the terminal or last release for older products). In other words, a higher-level client can work with a lower-level database. The clients in this list that have an asterisk (*) will have few issues when used in this mixed environment.

- ODBC *
- SQL*Plus, Instant Client, SQL Developer *
- JDBC, JDK – application specific
- Precompilers – application specific
- Export/import or Data Pump – MOS article, very strict guidelines
- Database links *
- 32-bit to 64-bit ** – SQL*Plus, C, Cobol, database link
- PL/SQL features compatibility – new release features will be associated with the lowest version client
- Features availability – new release features will be associated with the lowest version client
- BEQUEATH connections are not supported between different releases – a Unix-specific Oracle protocol that connects without a listener.

Check out my blog post for a specific example using this method: Migrating to 11gR2 – A Case Study on the Step-Ordered Approach, http://aprilcsims.wordpress.com/migrating-to-11gr2/

Other Lesser-Known Methods to Reduce Planned Downtime for Oracle Upgrades

- When running catalog.sql, catproc.sql, or catpatch.sql
- When using the TTS method of upgrading/migrating

Take all tablespaces other than SYSTEM, SYSAUX, UNDO and ROLLBACK SEGMENTS offline – either OFFLINE NORMAL or even READ ONLY – as the data in application-specific tablespaces isn't changed during an Oracle upgrade. Any ordinary READ ONLY tablespace will need to be made READ WRITE temporarily after the upgrade so the data file headers can be updated, and then restored to READ ONLY status. See the MOS document Increasing Migration Performance and Recovery time using offline Tablespaces [ID 780318.1] for more details.
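A minimal SQL*Plus sketch (an illustration, not from the MOS note) that generates the READ ONLY statements for the application tablespaces; swap READ ONLY for OFFLINE NORMAL if preferred, and review the spooled file before running it:

set pagesize 0 feedback off
spool make_read_only.sql
-- every permanent tablespace except the ones the upgrade itself needs
select 'alter tablespace '||tablespace_name||' read only;'
  from dba_tablespaces
 where contents = 'PERMANENT'
   and tablespace_name not in ('SYSTEM','SYSAUX');
spool off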


It would be necessary to check that there are no SYS-owned objects in any of the off-lined tablespaces. These same steps would also save time if you need to downgrade or restore the database from a failed upgrade attempt.

Consolidated Reference List Of Notes For Migration / Upgrade Service Requests [ID 762540.1]
762540.1 Note References for Migration/Upgrade Requests
455744.1 Best Practices to Minimize Downtime During Upgrade

Export/Import and Data Pump

Most of the information provided is devoted to the newer Data Pump utility, but there may be specific conditions where the traditional EXP/IMP utilities are actually faster. See MOS Note 286496.1 for how to trace long-running Data Pump jobs. There is usually some sort of technical limitation that requires an upgrade using the exp/imp utilities. This example covers one of the most common: converting to a new characterset that is not a superset of the existing set. Most often this is to convert data to Unicode – see MOS Note 306411.1. Three possible methods for converting charactersets:

- Using Oracle's csalter utility – small amounts of data to be converted, no need to create a new database, requires extensive testing and DBA resources, faster for databases starting on US7ASCII
- Export/Import – suitable for small databases and/or small amounts of convertible data
- Data Pump – for larger databases and/or larger amounts of data that needs to be converted

Characterset Conversion Project Overview – EXPORT/IMPORT or DATA PUMP

- A new target database precreated with the new characterset.
- Make sure the source database schema-level statistics are gathered and saved/exported with DBMS_STATS.EXPORT_SCHEMA_STATS (see the sketch after this list). You will be creating a new database, so you don't want data dictionary, fixed-object (DBA views) or system statistics from the source database.
- A full database export on the source database excluding indexes, statistics, and constraints.
- Import into the target database with the new characterset, but exclude the indexes, statistics, and constraints.
- Export indexes and constraints (DDL commands) to a file.
- Run the index and constraints script using parallel execution. Google "paresh Parallel Execution Script", http://orajourn.blogspot.com/2008/01/datapump-index-work-around.html, and see MOS Using Parallel Execution [ID 203238.1].
- Extract SYS grants from the source database and run them in the target database.
- Import the saved statistics into the target database.
- Compare schemas on both databases to check for any missing objects – 3rd-party products such as TOAD.
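As a hedged illustration of the statistics step above, assuming a schema named SCOTT and a statistics staging table named MY_STATS (both names are placeholders):

-- on the source database: create a staging table and export the schema statistics into it
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
exec DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');
-- move the SCOTT.MY_STATS table to the target (exp/imp or Data Pump), then on the target:
exec DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');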

Note on Characterset Selection:


- US7ASCII: better to migrate to WE8MSWIN1252 or WE8ISO8859P15, etc.
- WE8ISO8859P1: WE8MSWIN1252 is a superset
- UTF8: better to migrate to AL32UTF8
- ZHS16CGB231280: ZHS16GBK is a superset
- ZHS32GB18030: better to migrate to AL32UTF8
- KO16KSC5601: KO16MSWIN949 is a superset
- ZHT16BIG5: ZHT16MSWIN950 solves various problems of ZHT16BIG5

See my blog for a specific case study with a small database: http://aprilcsims.wordpress.com/migrating-to-alu32utf8/

Improving Performance for EXPORT/IMPORT Utilities

Quote from the documentation: "Data Pump Export and Import are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import. Sequential media, such as tapes and pipes, are not supported."

Data Pump can use four different mechanisms. There are data-related structure differences (character type, table type, column type, encryption, VPD, constraints) that may force Data Pump to use a particular mechanism:

- Data file copying (Transportable Tablespaces) – fastest method because the data is not converted, only the metadata.
- Direct path – second fastest, the default method used by the Data Pump utility.
- External table – used with parallel SQL; an external table is used for mapping and the SQL engine moves the data. This is also the mechanism used when NETWORK_LINK is specified, during exports only.
- Network link import – slowest; uses an INSERT ... SELECT statement over a database link.

Use the following MOS article for more information on tuning the two most commonly used methods, direct path vs. external table: Export/Import DataPump Parameter ACCESS_METHOD - How to Enforce a Method of Loading and Unloading Data? [ID 552424.1]

Data Pump Parameters Worth Mentioning

- BUFFER – be sure to increase this from the default size.
- PARALLEL – won't really help with small jobs or large amounts of metadata. Set parallelism to 2x the number of CPUs and adjust if needed. For expdp, PARALLEL should be less than or equal to the number of dump files. For impdp, PARALLEL should not be much larger than the number of files in the dump file set. PARALLEL > 1 is only available in Enterprise Edition.
- ESTIMATE (BLOCKS or STATISTICS) vs. ESTIMATE_ONLY (Y or N) – ESTIMATE applies whether you export or only estimate (see the example after this list).
- METRICS=Y – undocumented parameter that adds more information related to the number of objects and the time it took to process them. Output goes into the job logfile.
- Bugs – several version-specific bugs are out there; please research/test before using in a production environment. See the MOS document for a list of key issues: Checklist for Slow Performance of Export Data Pump (expdp) and Import DataPump (impdp) [ID 453895.1]
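A hedged one-liner showing ESTIMATE_ONLY together with METRICS (no dump file is written; the credentials and directory are placeholders):

expdp system/password directory=DATA_PUMP_DIR full=y estimate=blocks estimate_only=y metrics=y logfile=estimate_full.log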


Initialization Parameters on the Target Database – How to Drastically Improve Import Performance

Consider turning off non-essential initialization parameters and other performance-hogging features, and increasing certain parameters, just before starting Data Pump on the target database (a short sketch follows the list below). Don't forget to adjust the parameters back and turn autoextend off when finished.

- Turn off archivelog mode
- Turn off flashback
- Disable the recycle bin
- Turn off auditing
- Turn db_block_checking off – checking on the SYSTEM tablespace is always on; block checking may add from 1% to 10% overhead. Set DB_BLOCK_CHECKING to LOW or OFF in versions 10.2 and 11g, or FALSE in 10.1 and 9i.
- _disable_logging = TRUE
- pga_aggregate_target
- sort_area_size
- shared_pool_size
- sga_max_size
- parallel_max_servers
- CLUSTER_DATABASE=0
- PARALLEL_SERVER=0
- job_queue_processes=0
- aq_tm_processes=0
- grant exempt access policy to "userdoingexportimport"; -- removes any VPD issues
- ALTER SYSTEM SET max_dump_file_size = unlimited SCOPE = both;
- CURSOR_SHARING
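A minimal sketch, as an assumption of how a few of the items above might be toggled around the import (11g syntax; verify each change in a test environment first):

-- before the import
shutdown immediate
startup mount
alter database noarchivelog;
alter database flashback off;
alter database open;
alter system set recyclebin = off deferred;              -- affects new sessions
alter system set db_block_checking = off scope = both;
-- after the import, reverse the changes
shutdown immediate
startup mount
alter database archivelog;
alter database flashback on;
alter database open;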

The larger the database, the better the performance that can be realized during the import process by splitting the export of indexes, constraints, and referential constraints (metadata) into a separate file from the data on the source database. There is a package that will help speed up building partitioned indexes, DBMS_PCLXUTIL.BUILD_PART_INDEX; research it on MOS for more information.

Overview of the Migration Steps:

1. Precreate the target database and decide whether to keep the current data file locations. With the standard IMP utility, change tablespace/datafile locations by precreating the tablespaces that you are importing; make sure these aren't overwritten with the import command (imp destroy=n). Data Pump (impdp) has the REMAP_DATAFILE parameter; use a parameter file because of the syntax:

REMAP_DATAFILE=double-quotes | single-quote | datafile name in the source format | single-quote | : | single-quote | datafile name in target format | single-quote | double-quotes | comma | double-quotes | single-quote...


2. Open the source database in restricted mode if at all possible; at the least, do a consistent export with a privileged account that is not SYS: "SYSDBA is used internally in the Oracle database and has specialized functions. Its behavior is not the same as for generalized users. For example, the SYS user cannot do a transaction level consistent read (read-only transaction). Queries by SYS will return changes made during the transaction even if SYS has set the transaction to be READ ONLY. Therefore export parameters like CONSISTENT, OBJECT_CONSISTENT, FLASHBACK_SCN, and FLASHBACK_TIME cannot be used." How to Connect AS SYSDBA when Using Export or Import [ID 277237.1]. Interesting research on Data Pump export flashback_time and flashback_scn at http://yong321.freeshell.org/oranotes/DataPump.txt

3. Be sure that the default temp tablespace and default tablespace for the user performing the import can autoextend. Add an additional undo tablespace or increase the value of UNDO_RETENTION. The first run of the export includes all of the data but excludes items that will be imported or created at different times than the import:

expdp parfile=expdp_export_full.parfile > expdp_export_full.out 2>&1 &

expdp_export_full.parfile contains:

directory=DATA_PUMP_DIR
dumpfile=${ORACLE_SID}_full_%U.dmp
logfile=${ORACLE_SID}_export_full.log
full=y
parallel=12   # see notes above
metrics=y     # undocumented parameter
userid="system/password"
EXCLUDE=TABLESPACE,STATISTICS,INDEX,CONSTRAINT,REF_CONSTRAINT

4. Export indexes, constraints, and referential constraints into a file

expdp parfile=expdp_export_indexes.parfile > expdp_export_indexes.out 2>&1 &

expdp_export_indexes.parfile contents:

directory=DATA_PUMP_DIR
dumpfile=${ORACLE_SID}_indexes_%U.dmp
logfile=${ORACLE_SID}_export_indexes.log
full=y
metrics=y     # undocumented parameter
userid="system/password"
include=INDEX,CONSTRAINT,REF_CONSTRAINT

5. To format the DDL correctly for the target database version/characterset/NLS settings, run the import command on the target database. This only creates the resulting SQL file; no actual import is done at this step.

impdp parfile=impdp_indexes.parfile > impdp_indexes.out 2>&1 &

impdp_indexes.parfile contains:

directory=DATA_PUMP_DIR
dumpfile=${ORACLE_SID}_indexes_%U.dmp
logfile=${ORACLE_SID}_index_sqlfile.log
full=y
metrics=y
userid="system/password"
sqlfile="index_sqlfile.sql"

6. Import the data into the target database excluding indexes, constraints, referential constraints and statistics. Data Pump serializes the creation of the excluded items, which can be sped up using other methods. Consider using the REMAP_TABLESPACE parameter to move imported items into a previously created tablespace.

impdp parfile=impdp_alldata.parfile > impdp_alldata.out 2>&1 &

impdp_alldata.parfile contains:

directory=DATA_PUMP_DIR
dumpfile=${SOURCE_ORACLE_SID}_full_%U.dmp
logfile=${TARGET_ORACLE_SID}_import_full.log
full=y
metrics=y
parallel=12
userid="system/password"

7. Run the DDL sqlfile created earlier to recreate indexes and rebuild partitioned indexes; investigate examples of parallelizing this part (a sketch using DBMS_PCLXUTIL follows).
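A hedged sketch of the DBMS_PCLXUTIL call mentioned earlier, rebuilding one partitioned index in parallel; the table name, index name, and job counts are placeholders, and the index must already exist (for example, created UNUSABLE by the DDL sqlfile):

begin
  DBMS_PCLXUTIL.BUILD_PART_INDEX(
    jobs_per_batch => 4,      -- concurrent partitions built at a time
    procs_per_job  => 2,      -- parallel server processes per partition
    tab_name       => 'SALES',
    idx_name       => 'SALES_IDX',
    force_opt      => TRUE);  -- rebuild even partitions already marked usable
end;
/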

8. Spool any direct grants which won't be migrated to the TARGET database during the IMPDP process. Review the script; it may be advisable to add a list of grantees to be excluded, such as the default roles (DBA, EXP_FULL_DATABASE, etc).

set head off
set pagesize 0
set feedback off
spool grants_from_sys.sql
select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||' '||decode(grantable,'YES','WITH GRANT OPTION')||';'
from dba_tab_privs
where owner = 'SYS'
/
select 'grant '||privilege||' ('||column_name||') on '||owner||'.'||table_name||' to '||grantee||' '||decode(grantable,'YES','WITH GRANT OPTION')||';'
from dba_col_privs
where owner = 'SYS';
spool off

An older method of parallelizing uses the standard EXPORT/IMPORT utilities with a UNIX pipe process running in parallel: Oracle Export/Import Example Using a UNIX pipe process running in parallel, http://dbarajabaskar.blogspot.com/2010/08/oracle-exportimport-using-unix-pipe.html

DBUA vs. Manual Upgrade


There is no significant difference in performance between DBUA and the manual method of upgrading, because the largest block of time is spent updating the data dictionary to the new version. The manual method has more flexibility for migration projects:

1. Transportable Tablespaces
2. Different servers
3. Physical or logical standbys

Oracle specifically mentions that the manual method of upgrading on different servers requires the same OS version and database version. I recently did a migration project moving a database to a new server with a single-level higher version of the operating system (Linux RH 4 to 5) by doing a switchover to a physical standby; the database was the same version. Both servers were taken down, and the new one was renamed with the same IP/hostname. This prevented any DNS caching issues as well as an untested configuration of a hardware load balancer. See the section later on adjusting SQL*Net for a change in hostname and/or IP address with an installed Oracle Server.

Automatic Temp File Recreation on Startup

Since 10g, if any of the temp files are missing they are automatically recreated on startup. They are easy to recreate by cycling the database, especially if you can't find the original creation information:

1. If temp files are accidentally deleted, renamed, become corrupted or otherwise not available.

2. There is no longer a need to document the tempfile information; this is stored in the controlfile.
3. When cloning or copying a database to another server, tempfiles are not needed. This procedure is different than using the Recovery Manager (RMAN) duplicate/backup/recover commands. RMAN doesn't back up temp files. Recovery Manager and Tempfiles [ID 305993.1] – this doc mentions that RMAN backs up tempfiles, but that is incorrect; since later versions of Oracle, temporary tablespaces are locally managed tempfiles, not datafiles. When taking a backup of a database, only the datafiles are included.
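A quick sanity check after cycling a cloned or restored database, confirming the tempfiles were recreated where expected:

SQL> select file_name, bytes/1024/1024 as mb, autoextensible from dba_temp_files;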

Recovery Manager (RMAN)

Upgrading/Downgrading a Database During an RMAN Restore/Recovery Session

RMAN can directly downgrade or upgrade a database. This is particularly useful during migration projects when you need to create a clone/copy of a database that is a different Oracle version. Reasons to use this method:

- Migrating between one-off operating system levels.
- Changing database word sizes (32-bit to 64-bit and vice versa).
- No need to install multiple ORACLE_HOMEs of the different versions, just the one you are migrating to. This assumes you are working with a different server than the original.


- Can be used for one-off patches, patchsets or single version differences. For example: 10.2.x to 11.2.x, 10.2.0.1 to 10.2.0.4, 11.2.0.1.0 to 11.2.0.4.4. Just be aware of any post-patch steps that might have to be executed against the database; you will find these in the readmes for each version involved. These post-upgrade tasks typically include catalog.sql, catproc.sql, catpatch.sql and utlrp.sql. Follow the steps used for a manual upgrade/downgrade – search in MOS for the documents that start with the keywords Complete Checklist for Manual Upgrades.

- Great for a trial restore of a critical database – test your RMAN restores and recoverability.

- Useful for a situation where you need to downgrade to a version different than the original version you upgraded from.

- Use it for cloning a database using a user-managed backup (commonly called cold or hot backups) – in this case you would need to use the RMAN catalog command for the datafile copies.

Transportable Tablespaces (TTS) is a different method than what is outlined in this section. TTS would be more appropriate for cross-platform migrations and fast database upgrades on existing hardware (with no change in the original datafile location). This allows you to skip the step where you would first need to clone (as with the RMAN duplicate command) to the same binary version as the original and then finish the database upgrade/downgrade.

See the Oracle Database Backup and Recovery User's Guide 11g (Chapter 19, Performing RMAN Recovery: Advanced Scenarios). Look for the section labeled Restoring a Database on a New Host for more details that aren't included in the following steps. This type of restore/recovery-in-order-to-upgrade/downgrade scenario cannot be used in conjunction with the RMAN duplicate database command. Also be careful to use the NOCATALOG mode of RMAN recovery when you are attempting this on the same host as the original database; see MOS Note 245262.1.

Steps:

1. Install higher-versioned Oracle software. Create oratab entry – ORACLE_SID same as original. Use NID to change later if desired. Create necessary directories, can change datafile locations using set newname as part of the RMAN command.

2. Run the Pre-Upgrade Tool.
3. Make the backups available on the server you are restoring to.
4. Set environment variables, run oraenv, and start RMAN.

RMAN> CONNECT TARGET /
RMAN> SET DBID XXXXXXXXXX;
RMAN> STARTUP NOMOUNT

5. Recover spfile or create a new pfile. If upgrading from 10.x the spfile won’t be included if the controlfile is configured for autobackup.


6. Restore the controlfile, then mount the database.

run {
SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backuplocation/%F';
restore controlfile from autobackup;
alter database mount;
}

7. Restore the datafiles and recover the database. The following example restores to a certain point in time.

RUN {
set until time = "to_date('02/05/11:16:00:00','MM/DD/YY:HH24:MI:SS')";
# restore the database and switch the datafile names; this example will restore the
# datafiles to their original location.
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}

8. Open the database using the special command. This step can also be accomplished at the RMAN command line.

SYS@ORCL> (or RMAN>) alter database open resetlogs upgrade;

9. Finish the upgrade by following the standard manual upgrade method; there may be more post-upgrade steps than what is listed. The shortened example outlined here just highlights the differences between using RMAN and a traditional manual upgrade.

SYS@ORCL> @$ORACLE_HOME/rdbms/admin/catupgrd.sql
-- or, for a downgrade --
SYS@ORCL> @$ORACLE_HOME/rdbms/admin/catdwgrd.sql

Answers To FAQ For Restoring Or Duplicating Between Different Versions And Platforms [ID 369644.1]
How to Recover a Drop Tablespace with RMAN [ID 455865.1]
Steps To Recover A Dropped Tablespace Using TSPITR [ID 1277795.1]

Datawarehouse (Read-Only Tablespaces) RMAN Backup over Several Days Method

The following example runs the backup command for 8 hours; at the next execution RMAN takes up where it left off.

RMAN> BACKUP DATABASE NOT BACKED UP SINCE TIME 'SYSDATE-3' DURATION 08:00 PARTIAL MINIMIZE TIME;

- Can save backup time by selective tablespace backups
- Back up index tablespaces less frequently than data tablespaces
- Back up scarcely used tablespaces less frequently
- Reduce restore time for the most critical tablespaces by grouping them together in separate backups


TTS – Same OS or Cross-Platform

Master Note for Transportable Tablespaces (TTS) -- Common Questions and Issues [ID 1166564.1]
How to Share Tablespace Between Different Databases on Same Machine [ID 90926.1]
How To Recreate a database using TTS (TransportableTableSpace) [ID 733824.1]

One of the quickest solutions might be the use of TTS (a minimal command sketch follows the list below). The difference in the time needed is due to the fact that TTS:

- Exports only the metadata of the objects, not the physical data (rows)
- Indexes don't have to be recreated
- Does require advance work to identify/isolate/move both transportable and non-transportable objects – an excellent DBA training project!
- 10g+ across OS platform versions
- Standard Edition can only import TTS
- Both must be the same characterset
- OK to change word size
- Data Pump COMPATIBLE parameter
- NLS conversions
- Block size (older than 10g requires the same block size)
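A minimal TTS sketch for a single self-contained tablespace; the tablespace name, file path, and credentials are placeholders, and the master note referenced above covers the full procedure:

SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
SQL> select * from transport_set_violations;   -- should return no rows
SQL> alter tablespace users read only;
$ expdp system/password directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_tablespaces=USERS
-- copy users01.dbf to the target server, then plug it in:
$ impdp system/password directory=DATA_PUMP_DIR dumpfile=tts_users.dmp transport_datafiles='/u02/oradata/TARGET/users01.dbf'
SQL> alter tablespace users read write;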

Compatibility and New Features when Transporting Tablespaces with Export and Import [ID 291024.1]

It is always possible to transport a tablespace from a database running an older release of Oracle (starting with Oracle8i) to a database running a newer release of Oracle (for example, Oracle9i or Oracle10g).

Clones, DBIDs, and Incarnations

You can clone a database with the older method of user-managed backups, but it is easier to do this with RMAN. That is because RMAN changes the DBID and ORACLE_SID (using the utility NID) as part of the DUPLICATE procedure. This is another step you would have to do manually if you used the user-managed type of cloning a database. The following RMAN query will show the current DBID as well as any incarnations:

RMAN> list incarnation;

The following query will also find the current DBID:

SYS@ORCL> select to_char(dbid) from v$database;

The DBID is also part of the controlfile autobackup filename, with the format %F = c-<dbid>-<yyyymmdd>-<sequence number>, like the following file:

c-3416182518-20100115-01

It is also found in the alert_$ORACLE_SID.log:

LNS: Standby redo logfile selected for thread 1 sequence 13442 for destination LOG_ARCHIVE_DEST_7
Fri Feb 25 00:17:49 2011
Archived Log entry 63757 added for thread 1 sequence 13441 ID 0x94a1a09 dest 1:
Archived Log entry 63758 added for thread 1 sequence 13441 ID 0x94a1a09 dest 5:
Fri Feb 25 00:28:00 2011
DBID=112298272

A script is provided as the code section for this paper that gives you a quick way to duplicate a database to another server using RMAN. This script results in exactly the same ORACLE_SID and DBID – called rman_diff_server.sql. If you want a different ORACLE_SID and DBID, then investigate the use of the NID utility. Also, be aware that if you back up both databases with the same DBID using an RMAN catalog repository, then the information in the repository is going to get replaced each time the catalog does a resync command. A way around these types of issues is to give each database a different tag as part of the backup and recovery commands, making each backup identifiable.

Using database parameters to change the location of the database files on a different server is easier to maintain over time. Be careful cloning multiple databases on a single server – you could accidentally overwrite database files.

db_create_file_dest – creates everything in one location
db_file_name_convert – converts the location of datafiles and tempfiles to another
log_file_name_convert – converts the location of online redo log files to another

Post-cloning tasks

Here is a list of suggested housekeeping tasks for a newly cloned database. This is to clean up after a previous copy of the database has been removed. These steps are intended for a copy of production for testing use, not for a standby:

- Remove old trace, dump, audit files, alert logs
- Obsolete and expire old backups, exports, data pump files
- Remap database directories
- Adjust database links
- Revisit auditing, programmer access
- Register the database with the RMAN catalog
- Rerun RMAN configure commands
- Turn off archiving if not needed

PatchSet Installation with a Cloned $ORACLE_HOME

Cloning is the easiest method for creating a copy of an ORACLE_HOME in order to apply further single patches. Please note that 11.2 patch sets 11.2.0.2 and higher are supplied as full releases; see Note 1189783.1 for details. This seems to be forcing the issue that all upgrades are out-of-place upgrades, requiring a new $ORACLE_HOME. This procedure is suitable for all Oracle software, not just Database. It is best to use the Database Installation Guide instructions for cloning an Oracle Database Oracle Home. It is recommended to have at least two $ORACLE_HOMEs at all times – one for production and another for testing patches. The database can only be opened and used in a single ORACLE_HOME. The other homes not currently being used are upgraded and configured in advance of any database changes required in a new release; applying the database changes is the last step in migrating to a new release. Often I will have at least three ORACLE_HOMEs on a server at any one time – current production, a patchset of production, and the new major or maintenance release home.
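A hedged sketch of registering a copied home with the OUI clone script described in the FAQ note referenced below; all paths and the home name are assumptions for illustration:

# copy the existing home while no installs or patches are in progress
cp -rp /u01/app/oracle/product/11.2.0/dbhome_1 /u01/app/oracle/product/11.2.0/dbhome_2
cd /u01/app/oracle/product/11.2.0/dbhome_2/clone/bin
perl clone.pl ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2 \
     ORACLE_HOME_NAME=OraDb11g_home2 ORACLE_BASE=/u01/app/oracle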


Cloning or creating another new ORACLE_HOME for patching is called an out-of-place patchset apply in the Oracle documentation. See the following document for how to use OUI-based command-line utilities to clone existing ORACLE_HOMEs. The clone is an exact copy of the current $ORACLE_HOME including any one-off patches. You can also clone this $ORACLE_HOME to other hosts. This process keeps the OraInventory correctly configured for all $ORACLE_HOMEs vs. using the UNIX tar command to make another copy. FAQs on RDBMS Oracle Home Cloning Using OUI [ID 565009.1]; see also Oracle® Universal Installer and OPatch User's Guide 11g Release 2 (11.2) for Windows and UNIX – 6 Cloning Oracle Software.

Standbys – Data Guard + Flashback

This section is about the Oracle RDBMS Enterprise Edition functionality that allows you to fully utilize your Data Guard standbys for testing purposes as part of a migration/upgrade project. The use of Data Guard to switch over to a new primary has certain technical limitations for upgrades (except the Transient Logical Standby procedure). Data Guard will be one of the fastest ways to migrate to new hardware, a change in word size, or a change in storage, as well as protection in case of a major outage on the primary.

Flashback and guaranteed restore points

Flashback technology allows you to roll back or undo queries, changed data in tables, dropped tables, or even the entire database. Flashback Database can be used to revert logical corruption, a patch, or a hot fix, but it rolls back all transactions, which can be disruptive in a production instance, depending on when and how the original transactions were created. The key to using Flashback is sizing the Flash Recovery Area large enough (similar to the archived redo log generation rate) but not too large, as space pressure drives the automatic cleanup mechanisms. This reversion of all the transactions is the same behavior as when you perform a complete restore of the entire database from a backup. It is easier and less disruptive to use Flashback on a physical standby, rolling it back to a time before the issue occurred. Use SQL*Plus, export, or Data Pump to move the missing or changed data back into production. The production instance is still up and running during all of this time, with minimal disruption to the few affected users. A restore point is a point in time that allows you to roll back to a clearly marked point (SCN).

Start the process by canceling Redo Apply on the physical standby and taking a guaranteed restore point:

STANDBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
STANDBY> CREATE RESTORE POINT HOTFIX1 GUARANTEE FLASHBACK DATABASE;

The above command sends all current data from the production instance and then temporarily stops the Redo Apply process on the physical standby where testing will occur. All redo shipping to other archive destinations from the primary in the same configuration is not affected by this interruption. Oracle Support recommends turning off the Data Guard broker when using SQL commands to make changes to the configuration; otherwise it will enable the archive destination automatically.

PRIMARY> ALTER SYSTEM ARCHIVE LOG CURRENT;
PRIMARY> ALTER SYSTEM SET DG_BROKER_START=FALSE;


PRIMARY> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

There is an option of changing the database state instead of deferring the archive destination and turning off the broker, but this can only be done with dgmgrl. See later in this section for the list of database states available for the different types of standbys.

oracle@primaryservername:/u01/app/oracle[PRMY]
DGMGRL> connect sys/password;
DGMGRL> EDIT DATABASE 'PRMY' SET STATE='TRANSPORT-OFF';
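Once testing on the standby is finished, a hedged sketch of returning it to the guaranteed restore point and resuming the configuration; this assumes the standby was not activated read-write (an activated standby needs the additional convert-to-physical-standby steps from the MAA papers), and the restore point name matches the one created above:

STANDBY> STARTUP MOUNT FORCE;
STANDBY> FLASHBACK DATABASE TO RESTORE POINT HOTFIX1;
STANDBY> DROP RESTORE POINT HOTFIX1;
STANDBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
PRIMARY> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;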

Create a Guaranteed Restore Point WITHOUT Enabling Flashback

SQL> CREATE RESTORE POINT <rpname> GUARANTEE FLASHBACK DATABASE;

- Still creates flashback logs, so the other initialization parameters related to the FRA must be configured.
- Saves flashback log space for workloads where the same blocks are repeatedly updated, such as nightly batch loads.
- This process generates both UNDO and REDO, resulting in more area used.
- Dropping the guaranteed restore point immediately reclaims all space, vs. more steps to disable Flashback Database.

Tuning Flashback – Recommendations from Oracle:

"Flashback retention should be set >= 60 minutes (also for Data Guard Fast-Start Failover environment). This is because Oracle writes a metadata marker (used for FB DB operation) into FB logs every 30 mins and so, setting retention under 60 mins where there is space pressure could delete a needed marker and thus render FB DB unusable for some portion of time. Setting FB retention >= 60 mins guarantees that we will have at least 2 markers always available in FB logs.

Maintaining flashback logs imposes comparatively limited overhead on an Oracle database instance. Changed blocks are written from memory to the flashback logs at relatively infrequent, regular intervals, to limit processing and I/O overhead. To achieve good performance for large production databases with Flashback Database enabled, Oracle recommends the following:

- Use a fast file system for your Fast Recovery Area, preferably without operating system file caching. Files the database creates in the Fast Recovery Area, including flashback logs, are typically large. Operating system file caching is typically not effective for these files, and may actually add CPU overhead for reading from and writing to these files. Thus, it is recommended to use a file system that avoids operating system file caching, such as ASM.

- Configure enough disk spindles for the file system that will hold the Fast Recovery Area. For large production databases, multiple disk spindles may be needed to support the required disk throughput for the database to write the flashback logs effectively.

- If the storage system used to hold the Fast Recovery Area does not have non-volatile RAM, try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance.

- For large, production databases, set the init.ora parameter LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.

The overhead of turning on logging for Flashback Database depends on the mixture of reads and writes in the database workload. The more write-intensive the workload, the higher the overhead caused by turning on logging for Flashback Database. (Queries do not change data and thus do not contribute to logging activity for Flashback Database.)"

Metalink Note 565535.1 Flashback Database Best Practices & Performance
Flashback-related FRA sizing: http://download-west.oracle.com/docs/cd/B19306_01/backup.102/b14192/rpfbdb003.htm#BABJJCHF

Possible testing/recovery scenarios for Flashback and Data Guard

The following is a list of different reasons to use a physical standby, other than just to fail over when the primary database is not available:

- Preventing or fixing physical corruption
- Fixing logical corruption
- Reversing an application vendor upgrade
- Batch job reversal
- Untested hot fix
- Untested Oracle patch
- Stress testing
- Testing Oracle upgrades
- Testing ASM, OMF, SAME, or OFA changes
- Testing hardware updates or changes
- Testing OS upgrades, patches, or changes
- Testing Network or SQL*Net parameter changes
- Real Application Testing ** additional license required
- SQL performance analyzing

Physical corruption on a primary database can't be transmitted to the standby if the data files exist on a separate file system and the members in a configuration don't participate in hardware-level mirroring. With db_block_checking and db_block_checksum enabled on the primary and db_block_checksum on the physical standby, the standby can detect any physical corruption before applying redo. There is always a warning when enabling db_block_checking and/or db_block_checksum, as it may overload an already CPU-intensive environment; be careful to monitor before putting these settings into a production environment. If the physical corruption is extensive enough to prevent the primary database from being open, then failing over to the physical standby would be the best option.

Lost-write detection using a physical standby database

Lost-write database corruption happens when the I/O subsystem has acknowledged the completion of a block write but in actuality the write did not make it to disk. This type of corruption is detected by Data Guard comparing the SCNs of blocks in the redo stream on the primary to the SCNs of blocks on the physical standby. If the block SCN on the primary is lower than on the standby (ORA-752), the lost-write happened on the primary. If the SCN on the primary is higher than on the standby (ORA-600 [3020]), then it is a lost-write on the standby; in that case the standby is unusable and will have to be removed and recreated. Detection of a lost-write on the primary halts the managed recovery process on the standby and recovers to the consistent SCN. At that point it is recommended to fail over, because the physical standby is currently the most consistent as compared to the primary database. Any further transactions that happened on the primary after that SCN are assumed to be lost, or in other words unrecoverable. Refer to the documentation for Steps to Failover to a Physical Standby After Lost-Writes Are Detected on the Primary. This capability is controlled by the database parameter DB_LOST_WRITE_PROTECT, which has the settings FULL, NONE, or TYPICAL. The default is NONE.

Database states

Oracle Database 10g+ gives us different Data Guard states that are tied to a database's role in a Data Guard configuration and that control the log transport services. This is basically a switch that governs whether data is being transferred from one database to another and/or being applied, depending on the database role of primary, physical, or logical. The states of the log services are as follows:

- Primary: TRANSPORT-ON, TRANSPORT-OFF (a primary with TRANSPORT-ON plus a physical standby applying redo while open read-only = Active Data Guard)
- Physical standby (REDO APPLY): APPLY-ON, APPLY-OFF
- Snapshot standby (REDO APPLY): APPLY-OFF
- Logical standby (SQL APPLY): APPLY-ON, APPLY-OFF

There is no APPLY-ON available for a snapshot standby because it would no longer be a snapshot of the data at a point in time. This ability to turn off the transport and/or the application of redo gives you the flexibility to use the standbys for multiple tasks temporarily and then change the state back.

Can transportable tablespaces be created from a read-only standby database? [ID 403991.1] Basically NO.

Rolling Upgrade with a Transient Logical Standby – purported to be the smallest amount of downtime for any Oracle upgrade. Takes extensive resources to set up and test. Oracle11g Data Guard: Database Rolling Upgrade Shell Script [ID 949322.1]

Standbys in a Heterogeneous Environment Using Commodity Hardware

Our organization utilizes inexpensive commodity hardware, where the trade-off for less durability is compensated by running more standbys. This reduces our costs overall while ensuring a more robust testing and disaster recovery environment. Certain Data Guard configurations can also run in a mixed Oracle binary environment (64-bit and 32-bit) while part of the same operating system family (Linux, Solaris, AIX, among others), making physical standbys adaptable to more environments. You can mix hardware from different manufacturers; the number of CPUs, RAM, storage differences, processors, operating system versions, and distributions will provide even more flexibility in designing the architecture (see MOS Note 413484.1 for exact details). While this sounds good, it has tradeoffs, which may include reduced performance due to the differences in capacity as well as increased complexity that may interfere with a smooth transition from the primary to the standby site.

There are some major issues with working in a mixed environment: lack of good documentation, reverting to older technology for backups and restores (not RMAN), and the possibility of more errors or problems during switchovers and/or failovers. The Data Guard broker cannot be used, only the SQL*Plus command line, for mixed environments in 10gR2; this limitation is removed as of 11g. Also, note that in a mixture of 32- and 64-bit environments an extra step has to be done before switching over (see MOS Note 62290.1, Changing between 32-bit and 64-bit Word Sizes).

An example of an issue found in a heterogeneous environment with a mixture of 32-bit Linux and Windows 11.1.0.7 physical standbys is as follows: ORA-16191: Primary log shipping client not logged on standby. The password file was interpreted with a different case than what it was created with. This was fixed by turning off the case-sensitivity option by changing the spfile parameter SEC_CASE_SENSITIVE_LOGON=FALSE, creating the password files on both servers using the same password, and passing ignorecase=Y to the orapwd utility.

32-bit to 64-bit Migrating/Upgrading

Word sizes – what utilities can be used besides SQL*Plus to convert between 32-bit and 64-bit word sizes as well as 32-bit and 64-bit operating systems?

- EXPORT/IMPORT – How to Use Export and Import when Transferring Data Across Platforms or Across 32-bit and 64-bit Servers [ID 277650.1]

- Transportable Tablespaces can convert as well; beginning with 10g this can always be done with the same or a higher compatibility setting.

- RMAN – same OS platform. Restoring A 32 bit Database to 64 bit – An Example [ID 467676.1]

- Oracle upgrades – catpatch.sql, catalog.sql, catproc.sql. If you are changing word size during a migration, upgrade, or downgrade operation, running the appropriate script changes the word size.


- SQL*Plus – for changing word size in between releases. Changing between 32-bit and 64-bit Word Sizes [ID 62290.1]

Moving to NEW Hardware Scenario

You need to migrate to new hardware and also accomplish other changes such as OFA/ASM, an upgrade of the OS, or an Oracle upgrade/patchset. Options/methods:

- See the earlier section, Upgrading/Downgrading a Database During an RMAN Restore/Recovery Session – using an RMAN backup to move the database.

- Moving ASM with RMAN – a little-known method using the AUXILIARY DESTINATION parameter, specifying the ASM disk group name to move datafile locations:
RMAN> recover tablespace users until logseq XX auxiliary destination '+DATA';
http://blogs.oracle.com/AlejandroVargas/2007/04/rman_transportable_tablespace.html

- Small database, larger outage window available, perhaps Standard Edition (Data Guard not available). Shut down the source database on the source server and copy over the datafiles, spfile/pfiles, controlfiles, *.ora files, homegrown scripts, etc. (don't copy any temp files, as they are automatically recreated on startup). Go ahead and fix any of the changes mentioned later in this section for SQL*Net services.

- Transportable Tablespaces would be one of the fastest methods and could be coupled with an upgrade in the shortest amount of time, especially if the datafiles are on shared storage between the servers.

- A Data Guard switchover to a physical standby, or the Rolling Upgrade with a Transient Logical Standby method. The transient scenario promises the smallest amount of downtime but takes quite a few resources to set up and test fully.

What if you need to change the hostname/IP address of the Oracle Database Server?

- LISTENER_NETWORKS allows you to resolve/change the listener name alias for other listeners through a local tnsnames.ora. This doesn't work with Transparent Application Failover (TAF).

- Oracle Internet Directory, LDAP, Oracle Connection Manager
- Do a server rename/IP address change on the new server. Oracle RDBMS survives a hostname/IP address change, but you do have to account for any SQL*Net-related services that might be affected, as in the following list.

LISTENER.ORA

NAMING RESOLUTION
- LOCAL (server) TNSNAMES.ORA
- ORACLE NAMES (9.2 and below)
- LDAP.ORA (OID or Active Directory)
- HOSTNAME
- HOSTS FILE
- SQLNET.ORA

DATABASE PARAMETERS
- LOCAL_LISTENER – can be adjusted by adding/editing the local tnsnames.ora on the server (see the sketch after this list)


- REMOTE_LISTENER
- DISPATCHERS
- SERVICE_NAMES
- DB_DOMAIN
- FAL_CLIENT and FAL_SERVER
- LOG_ARCHIVE_DEST_N
- LISTENER_NETWORKS (11.2+) – all listeners within the same network_name will cross-register.
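A small sketch of the LOCAL_LISTENER adjustment mentioned in the list above after a hostname change; the alias, host, and port are placeholders:

# tnsnames.ora on the database server
LISTENER_NEWHOST =
  (ADDRESS = (PROTOCOL = TCP)(HOST = newhostname)(PORT = 1523))

SQL> alter system set local_listener = 'LISTENER_NEWHOST' scope = both;
SQL> alter system register;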

Gotchas

The Grid Control Agent doesn't survive a HOSTNAME change but can survive an IP address change with a corrected HOSTS file on the server. Grid Agent Configuration: Steps to Re-configure a Grid Control Agent if the Hostname Changes [ID 756870.1]. If you only change the IP address, then you may experience DNS caching, where the new IP address/hostname combination has not been propagated through the network; local DNS caching is based on the TTL (time-to-live). Preparing For Changing the IP Address, Hostname or DOMAIN of Oracle Database Servers [ID 363609.1]

ONGOING RESEARCH, FEEDBACK AND ADDITIONAL INFORMATION

Oracle High Availability Blog http://aprilcsims.wordpress.com/

# Scenario to restore a database to a different server but in exactly the same location;
# the database was shut down when doing the backup.
# Warning! If you do this on a production server it will overwrite files!
# scp or sftp the RMAN-created backups to the same location as the original database on the new server.
# Create any directories needed for the Diagnostic Destination.
# Create directories for the datafile locations.
# Add a line similar to the following to /var/opt/oracle/oratab or /etc/oratab depending on Unix flavor.
# Adjust as needed for your location of the Oracle binary installation:
# YOURSID:/u01/app/oracle/product/11.2.0:N
# The following sets your Oracle Home location
export ORACLE_SID=YOURSID
. /usr/local/bin/oraenv
# This scenario below will restore the spfile and controlfiles.
# The following line will connect to the local controlfile.
rman target / nocatalog
# You will need the DBID info - check the partial filename of the autobackup of the controlfile.


set DBID YOURDBIDNUMBER
startup nomount
restore spfile from '/backuplocationyoucopiedthespfileto';
# Copy the controlfile backups to the proper location
host "cp /backuplocation/backupcontrolfilename /oradatalocation";
shutdown
exit
# Connect again to restore the datafiles, which are located in the same area you backed them up to on the other server.
rman target / nocatalog
startup mount
restore database;
recover database;
# The following is required because there won't be any online redo logs available; that is normal when doing a full recovery.
alter database open resetlogs;
shutdown immediate;
exit

sqlplus "/as sysdba"
startup
