
ERROR 1033 & Archival Error RECEIVED LOGGING ON TO THE STANDBY


ERROR 1033 RECEIVED LOGGING ON TO THE STANDBY

I got an email from our OEM such as "PROD EM Event: Critical: XXXDG – The database status is UNKNOWN." This standby DB belongs to our 11.2.0.4 RAC database, which is running on AIX 7.1.

When I checked the alert logs of node 1 and node 2, there were messages such as:

ARC0: Archive log rejected (thread 1 sequence 5253) at host ‘(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myhost)(PORT=1527))(CONNECT_DATA=(SERVICE_NAME=xxxDG)(SERVER=DEDICATED)))’
FAL[server, ARC0]: FAL archive failed, see trace file.
ARCH: FAL archive failed. Archiver continuing
ORACLE Instance XXX – Archival Error. Archiver continuing.
Error 1033 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby ‘(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=myhost)(PORT=1527))(CONNECT_DATA=(SERVICE_NAME=xxxDG)(SERVER=DEDICATED)))’. Error is 1033.

On Standby alert log:
Media Recovery Waiting for thread 1 sequence 5253
Fri Jan 08 09:19:32 2016

So, if you get a similar message to the one I faced, you can follow the steps below:

1. On both the primary and standby side, check your archivelog destination to be sure you have free space for archives.
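If the archive destination is the fast recovery area, a quick check from SQL*Plus (a minimal sketch) is:

SQL> SELECT NAME, SPACE_LIMIT/1024/1024 AS LIMIT_MB, SPACE_USED/1024/1024 AS USED_MB FROM V$RECOVERY_FILE_DEST;

Otherwise, check free space on the archive destination filesystem with an OS command such as df.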

2. On the primary, be sure the log_archive_dest_state_X parameter has been set to enable:

alter system set log_archive_dest_state_X=enable scope=both sid='*';

Where X is the number of the destination used for shipping redo to the standby site.
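You can verify the current setting before and after the change, for example:

SQL> show parameter log_archive_dest_state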

3. If all is okay, check the password file; if possible, recreate the password file on the primary and copy it to the standby.
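A rough sketch of recreating the password file on the primary with orapwd (the file name follows the usual $ORACLE_HOME/dbs/orapw<SID> convention; the SID and entries value below are only placeholders):

$ orapwd file=$ORACLE_HOME/dbs/orapwPROD1 password=<sys_password> entries=10

Then copy the file to the standby's $ORACLE_HOME/dbs directory and rename it to match the standby instance name.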

4. After all those steps, monitor the redo apply from the primary to the standby.
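For example, on the standby you can confirm that new sequences are being applied again with a query like:

SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES' GROUP BY THREAD#;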



Oracle Database Block corruption


Oracle Database Block corruptions

There are many possible causes of a block corruption including:

– Bad IO hardware / firmware
– OS problems
– Oracle problems
– Recovering through "UNRECOVERABLE" or "NOLOGGING" database actions (in which case ORA-01578 & ORA-01110 is expected behavior)

Before starting the post, let me remind you how many kinds of corruption there are… There are two types: data corruption can manifest itself as either a physical or a logical corruption.

So let us define both of those:

Physical corruption of a block manifests as an invalid checksum or header, or when the block contains all zeroes. When that happens, the database will not recognize the block as a valid Oracle block, regardless of its content. Physical corruptions (media corrupt blocks) are blocks that have sustained obvious physical damage. When Oracle detects an inconsistency between the SCN in the block header and the SCN in the block footer, or when the expected header and footer structures are not present or are mangled, the Oracle session raises an exception upon reading the block (ORA-01578: ORACLE data block corrupted…). A physical corruption is also called a media corruption.

Logical Corruption happens when a data block has a valid checksum, etc., but the block
contents are logically inconsistent. Logical block corruption can also occur when the structure below the beginning of the block (below the block header) is corrupt. In this case, the block checksum is correct but the block structures may be corrupt. Logical corruption can also result from a lost write.

For more information, see My Oracle Support Note 840978.1.

So… when can you get these error messages? You may not get an error message for corrupted blocks until the related blocks are accessed, for example by:

– Analyze table .. Validate structure
– Dbverify
– CTAS(Create table as Select)
– Export
– During RMAN process

All those database utilities populate V$DATABASE_BLOCK_CORRUPTION on detecting corruption:

In 9i and 10g, the view V$DATABASE_BLOCK_CORRUPTION used to get populated only when the RMAN BACKUP VALIDATE / BACKUP VALIDATE CHECK LOGICAL command was run.

The populated information used to get refreshed only once the corruption was repaired (media recovery / object dropped) and the RMAN BACKUP VALIDATE / CHECK LOGICAL command was re-run on the database or the affected datafile.

With 11g this behavior has changed. When any database utility or process encounters an intrablock corruption, it automatically records it in V$DATABASE_BLOCK_CORRUPTION.

The repair removes metadata about corrupt blocks from the view.
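To see what is currently recorded there, you can simply query the view, for example:

SQL> SELECT FILE#, BLOCK#, BLOCKS, CORRUPTION_TYPE FROM V$DATABASE_BLOCK_CORRUPTION;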

You can identify the objects containing a corrupt block using a query like this

SELECT DISTINCT owner, segment_name
FROM v$database_block_corruption dbc
JOIN dba_extents e
  ON dbc.file# = e.file_id
 AND dbc.block# BETWEEN e.block_id AND e.block_id + e.blocks - 1
ORDER BY 1, 2;

Repair techniques include:

– block media recovery,
– restoring datafiles,
– recovering by means of incremental backups, and
– block newing.

Do not forget: block media recovery can repair physical corruptions, but not logical corruptions.

Checking for Block Corruption with the VALIDATE Command
Syntax for the RMAN VALIDATE command:

For Database :
RMAN > Validate database;

For Datafile :
RMAN > Validate datafile <file no>,<file no> ;

For Data block :
RMAN > Validate datafile <file no> block <Block no> ;

Archivelog restores for Block Media Recovery (BMR) can be run in parallel on multiple channels, but datafile/backupset scans and the recovery session must all run in the same server session.

To allow selection of which backup will be used to restore the desired blocks, the BLOCKRECOVER command supports options used in the RESTORE command:

FROM BACKUPSET–> restore blocks from backupsets only
FROM DATAFILECOPY–> restore blocks from datafile copies only
FROM TAG–>restore blocks from tagged backup
RESTORE UNTIL TIME|SCN|LOGSEQ

So, after validating our DB, how can we recover the related corruptions? Here are some examples:

Recovery using Explicit File/Block:

$ rman target / log=rman1.log

RMAN> blockrecover datafile 12 block 4207;

Recovery using Corruption list :

$ rman target / log=rman1.log

RMAN> blockrecover corruption list;
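Recovery restricted to a specific backup (the tag name here is only an illustration):

$ rman target / log=rman1.log

RMAN> blockrecover datafile 12 block 4207 from tag 'weekly_full';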

The key approach to detecting and preventing corrupted data is to follow these MAA best practices:

• Use Oracle Data Guard
• Set the Oracle Database block corruption detection parameters
• Implement a backup and recovery strategy with Recovery Manager (RMAN)

There are many documents available on My Oracle Support (Metalink) which cover the concepts in depth with corruption examples. So, I strongly suggest reviewing the docs below when you hit similar errors on your system:

Handling Oracle Block Corruptions in Oracle7/8/8i/9i/10g/11g [ID 28814.1]
Master Note for Handling Oracle Database Corruption Issues [ID 1088018.1]
Data Recovery Advisor – Corruption Reference Guide [ID 1317849.1]
RMAN : Block-Level Media Recovery – Concept & Example [ID 144911.1]
OERR: ORA-1578 “ORACLE data block corrupted (file # %s, block # %s)” Master Note [ID 1578.1]
HOW TO TROUBLESHOOT AND RESOLVE an ORA-1110 [ID 434013.1]
11g New Feature V$Database_block_corruption Enhancements and Rman Validate Command [ID 471716.1]
http://www.oracle.com/technetwork/database/availability/maa-datacorruption-bestpractices-396464.pdf

Oracle Update Statement – to understand what is happening internally


Oracle Update Statement – to understand what is happening internally

We assume that we update or select data from one table in an Oracle database. We just type some SQL and see data on our screen. But what is happening internally?

Here is the answer:

SQL>select * from emp;
SQL>update emp set salary=30000 where empid=10;
SQL>commit;

So let us see what is happening internally. Here are the steps:

1. Once we issue the SQL*Plus connection as above, the client process (user) contacts the SQL*Net listener.

2. The SQL*Net listener confirms that the DB is open for business and creates a server process.

3. Server process allocates PGA.

4. ‘Connected’ Message returned to user.

5. User run:
SQL>select * from emp;

6. Server process checks the SGA to see if data is already in buffer cache.

7. If not, then the data is retrieved from disk and copied into the SGA (DB cache).

8. Data is returned to user via PGA & server process.

9. Now another statement is:
SQL>Update emp set salary=30000 where empid=10;

10. Server process (Via PGA) checks SGA to see if data is already there in buffer cache.

11. In our situation chances are the data is still in the SGA (DB Cache).

12. Data is updated in the DB cache and marked as a 'dirty buffer'.

13. The employee update is placed into the redo buffer and the undo segments.

14. A 'row updated' message is returned to the user.

15. Now the next steps is :
SQL>commit;

16. Newest SCN obtained from control file.

17. Data in DB cache is marked as ‘Updated and ready for saving’.

18. The commit record is placed into the redo buffer.

19. LGWR writes the redo buffer contents to the redo log files and removes them from the redo buffer.

20. Control file is updated with new SCN.

21. Commit complete message return to user.

22. The emp table is updated in the datafile and the datafile header is updated with the latest SCN.

23. SQL>exit;

24. Unsaved changes are rolled back.

25. Server process deallocates PGA.

26. Server process terminates.

27. After some period of time, the redo logs are archived by the ARCH process.


ORA-15032 & ORA-15177 while removing files in ASM


ORA-15032: not all alterations performed, ORA-15177: cannot operate on system aliases (DBD ERROR: OCIStmtExecute)

If you want to remove a folder in ASM using the asmcmd utility, you may hit this error message.

Here are the steps:

1. Let us see an overview of the current diskgroups and their sizes:

[grid@myserver]</home/grid> asmcmd -p

ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576  11534336  11009776                0        11009776              0             N  ORADATA/
MOUNTED  EXTERN  N         512   4096  1048576    921600    921532                0          921532              0             N  ORAFRA/
MOUNTED  EXTERN  N         512   4096  1048576     40960     40700                0           40700              0             N  ORAREDO/

2. Let us see what we have under diskgroup before dropping:
ASMCMD [+] > cd ORADATA

ASMCMD [+ORADATA] > ls -l
Type Redund Striped Time Sys Name
Y ASM/
Y MYDBSID/
ASMCMD [+ORADATA] > rm MYDBSID/
ORA-15032: not all alterations performed
ORA-15177: cannot operate on system aliases (DBD ERROR: OCIStmtExecute)

3. Here is the solution: simply use the force option.

ASMCMD [+ORADATA] > rm -f MYDBSID


Welcome to Oracle 12c – goodbye Oracle 11g


I remember like it was yesterday when I heard that the Oracle 11gR1 version had been released.

For many years we worked with Oracle 11gR1 and then Oracle 11gR2… And finally, since 2013 we have had a new family member… Oracle 12c.

Before starting, let us brainstorm a little and recall some old information again:

I just did some googling and collected some information together.

The first release of Oracle 11g (Oracle 11g Release 1) was released for Linux on 9 August 2007. The Windows port was released on 23 October. Unix versions (Solaris, AIX and HP-UX) were released on 12 November.

The common theme for this release is “Growing the Grid” (focus on Fusion Middleware, RAC and ASM).

Oracle 11gR2 (for Linux 32-bit and 64-bit) was released on 1 September 2009. Oracle 11gR2 is the second and terminal release of the Oracle 11g database. The common theme for this release is "Consolidate. Compress. Control.".

Oracle 12c version 12.1.0.1 was released on 1 July 2013. The "c" stands for "cloud" to indicate that 12c is "cloud enabled". It features a new multi-tenant option that will help companies to consolidate databases into private or public clouds.

There are many posts and docs available on the internet; for those, a Google search can be your best friend. Today or tomorrow you will need to upgrade, install, or migrate to this new Oracle Database version.

So, here is the life cycle of the versions:

[Image: Oracle support summary timeline]

And here is the upgrade diagram:

[Image: Oracle database upgrade matrix]

So, what does Oracle 12c come with? As you can imagine, it has lots of features, mainly:

* Container databases (CDB) with embedded Pluggable Databases (PDB)
* Automatic Data Optimization (ADO) with heat maps to automate ILM
* In-Database archiving and Temporal Validity
* Unified auditing
* Database privilege analysis to see who uses what privileges
* Data redaction
* Adaptive Query Optimization
* Database Migration Assistant for Unicode (DMU) replaces “csscan” and “csalter”
* Row Limiting Queries (see the small example after this list)
* Increased Size Limit for VARCHAR2, NVARCHAR2, and RAW Data Types
* Online move of data files and partitions
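As a small illustration of the Row Limiting Queries item above (the table is just an example), 12c lets you write:

SQL> SELECT empid, salary FROM emp ORDER BY salary DESC FETCH FIRST 5 ROWS ONLY;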

Of course, there are many more Oracle features available in this version. The Oracle docs describe all those new features under the topics below:

– “Advanced Index Compression”
– “Approximate Count Distinct”
– “Attribute Clustering”
– “Automatic Big Table Caching”
– “FDA Support for CDBs”
– “Full Database Caching”
– “In-Memory Aggregation”
– “In-Memory Column Store”
– “JSON Support”
– “New FIPS 140 Parameter for Encryption”
– “PDB CONTAINERS Clause”
– “PDB File Placement in OMF”
– “PDB Logging Clause”
– “PDB Metadata Clone”
– “PDB Remote Clone”
– “PDB Snapshot Cloning Additional Platform Support”
– “PDB STANDBYS Clause”
– “PDB State Management Across CDB Restart”
– “PDB Subset Cloning”
– “Rapid Home Provisioning”
– “Zone Maps”

 

For all those features, you can check the docs below:
https://docs.oracle.com/database/121/NEWFT/chapter12102.htm
http://www.orafaq.com/wiki/Oracle

 


How to Perform a Switchover on an Oracle Standby Database


In this post, I want to share the switchover process on a Data Guard setup.

The DB version is 11.2, a 2-node RAC, and the standby DB is standalone.

The operating system is AIX 7.1.

Data Guard Physical Standby Switchover Best Practices using SQL*Plus

1)Verify Managed Recovery is running on the standby

The following query at the standby verifies that managed recovery is running:
SQL> SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE ‘MRP%’;

If managed standby recovery is not running or not started with real-time apply, restart managed recovery with real-time apply enabled:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

2)Verify there are no large Gaps
Identify the current sequence number for each thread on the primary database
SQL> SELECT THREAD#, SEQUENCE# FROM V$THREAD;

Verify the target physical standby database has applied up to, but not including, the logs from the primary query.
On the standby, the following query result should be within 1 or 2 of the primary query result.

SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE APPLIED = ‘YES’
AND RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION WHERE STATUS = ‘CURRENT’)
GROUP BY THREAD#;

3)Verify that the primary database can be switched to the standby role
Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary database:
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
—————–
TO STANDBY

4) If the primary is a RAC, then shut down all secondary primary instances.
Only one instance is enough; shut down the others.
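For example, with srvctl (the database and instance names below are only placeholders):

$ srvctl stop instance -d PROD -i PROD2 -o immediate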

5)Switchover the primary to a standby database

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY WITH SESSION SHUTDOWN;

6)Verify that the standby database can be switched to the primary role

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the standby database:

SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
—————–
TO PRIMARY

A value of TO PRIMARY or SESSIONS ACTIVE indicates that the standby database is ready to be switched to the primary role.
If neither of these values is returned, verify that redo apply is active and that redo transport is configured and working properly.
Continue to query this column until the value returned is either TO PRIMARY or SESSIONS ACTIVE.

7)Switchover the standby database to a primary

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;

8)Open the new primary database
SQL> ALTER DATABASE OPEN;

9)Restart the new standby
If the new standby database (former primary database) was not shut down since switching it to standby, bring it to the mount state and start managed recovery.
This can be done in parallel to the new primary open.

SQL> SHUTDOWN ABORT;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;


Physical Standby Switchover_status Showing Not Allowed


Physical Standby Switchover_status Showing Not Allowed

We tried to run a switchover test on our RAC DB (a 2-node setup) which has a physical standby. Do not forget: before starting your switchover test, you need to shut down one of the members of the RAC database.

After closing one of the instances, we checked the query results below:

Step 1)Verify Managed Recovery is running on the standby

The following query at the standby verifies that managed recovery is running:
SQL> SELECT PROCESS FROM V$MANAGED_STANDBY WHERE PROCESS LIKE ‘MRP%’;

If managed standby recovery is not running or not started with real-time apply, restart managed recovery with real-time apply enabled:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Step 2)Verify there are no large Gaps

Identify the current sequence number for each thread on the primary database

SQL> SELECT THREAD#, SEQUENCE# FROM V$THREAD;

Verify the target physical standby database has applied up to, but not including, the logs from the primary query.
On the standby, the following query result should be within 1 or 2 of the primary query result.

SQL> SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE APPLIED = ‘YES’
AND RESETLOGS_CHANGE# = (SELECT RESETLOGS_CHANGE#
FROM V$DATABASE_INCARNATION WHERE STATUS = ‘CURRENT’)
GROUP BY THREAD#;

Step 3)Verify that the primary database can be switched to the standby role

Query the SWITCHOVER_STATUS column of the V$DATABASE view on the primary database:
SQL> SELECT SWITCHOVER_STATUS FROM V$DATABASE;

SWITCHOVER_STATUS
—————–
TO STANDBY

In that step, the query returned "NOT ALLOWED"… So we started to investigate why SWITCHOVER_STATUS shows NOT ALLOWED.

The Oracle documentation explains that the SWITCHOVER_STATUS column of V$DATABASE can have the following values:

NOT ALLOWED – Either this is a standby database and the primary database has not been switched first, or this is a primary database and there are no standby databases.

SESSIONS ACTIVE – Indicates that there are active SQL sessions attached to the primary or standby database that need to be disconnected before the switchover operation is permitted.

SWITCHOVER PENDING – This is a standby database and the primary database switchover request has been received but not processed.

SWITCHOVER LATENT – The switchover was in pending mode, but did not complete and went back to the primary database.

TO PRIMARY – This is a standby database, with no active sessions, that is allowed to switch over to a primary database.

TO STANDBY – This is a primary database, with no active sessions, that is allowed to switch over to a standby database.

RECOVERY NEEDED – This is a standby database that has not received the switchover request.

We checked the synchronization status between the primary and the physical standby. There was no gap and no issue with the sync; the physical standby had applied the most recently generated archived redo log sequence. But we still kept getting the same result from the query… V$DATABASE.SWITCHOVER_STATUS shows "not allowed".
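As a quick sanity check of the synchronization, you can also query V$DATAGUARD_STATS on the standby, for example:

SQL> SELECT NAME, VALUE FROM V$DATAGUARD_STATS WHERE NAME IN ('transport lag', 'apply lag');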

While searching on My Oracle Support (Metalink) we found this note: Physical Standby Switchover_status Showing Not Allowed (Doc ID 1392763.1).
From the note:
It is expected to see this status in a physical standby. When we are certain that the primary and the target standby are in sync, we can then proceed with the switchover exercise regardless of the "not allowed" status in the physical standby.

A switchover always originates from the primary database. On the switchover SQL statement "alter database commit to switchover to physical standby with session shutdown", the primary generates a special marker called EOR (end-of-redo) that is placed in the header of the online redo log sequence. This online redo log sequence is then archived locally and sent to all standby databases.

Only upon receiving and applying EOR (end-of-redo), v$database.switchover_status will change from “not allowed” to “to primary” or “sessions active”.

So this is an expected value for the V$DATABASE.SWITCHOVER_STATUS column. We continued with the switchover test and it went smoothly.

You can find the How to Perform a Switchover on an Oracle Standby Database steps here.

 


Oracle RAC log files locations


If you are using a RAC system (it doesn't matter how many nodes you have), you need to know where the log files are located.

In this post, I share those log file locations.

Here we go:

The Cluster Ready Services Daemon (crsd) Log Files

Log files for the CRSD process (crsd) can be found in the following directories:

                 CRS home/log/hostname/crsd

Oracle Cluster Registry (OCR) Log Files

The Oracle Cluster Registry (OCR) records log information in the following location:

                CRS Home/log/hostname/client

Cluster Synchronization Services (CSS) Log Files

You can find CSS information that the OCSSD generates in log files in the following locations:

                CRS Home/log/hostname/cssd

Event Manager (EVM) Log Files

Event Manager (EVM) information generated by evmd is recorded in log files in the following locations:

                CRS Home/log/hostname/evmd

RACG Log Files

The Oracle RAC high availability trace files are located in the following two locations:

CRS home/log/hostname/racg

$ORACLE_HOME/log/hostname/racg

Core files are in the sub-directories of the log directories. Each RACG executable has a sub-directory assigned exclusively for that executable. The name of the RACG executable sub-directory is the same as the name of the executable.

You can follow the table below, which defines the locations of the log files:

Oracle Clusterware log files

Cluster Ready Services Daemon (crsd) Log Files:
$CRS_HOME/log/hostname/crsd

Cluster Synchronization Services (CSS):
$CRS_HOME/log/hostname/cssd

Event Manager (EVM) information generated by evmd:
$CRS_HOME/log/hostname/evmd

Oracle RAC RACG:
$CRS_HOME/log/hostname/racg
$ORACLE_HOME/log/hostname/racg

Oracle RAC 11g Release 2 log files

Clusterware alert log:
$GRID_HOME/log/<host>/alert<host>.log

Disk Monitor daemon:
$GRID_HOME/log/<host>/diskmon

OCRDUMP, OCRCHECK, OCRCONFIG, CRSCTL:
$GRID_HOME/log/<host>/client

Cluster Time Synchronization Service:
$GRID_HOME/log/<host>/ctssd

Grid Interprocess Communication daemon:
$GRID_HOME/log/<host>/gipcd

Oracle High Availability Services daemon:
$GRID_HOME/log/<host>/ohasd

Cluster Ready Services daemon:
$GRID_HOME/log/<host>/crsd

Grid Plug and Play daemon:
$GRID_HOME/log/<host>/gpnpd

Multicast Domain Name Service daemon:
$GRID_HOME/log/<host>/mdnsd

Event Manager daemon:
$GRID_HOME/log/<host>/evmd

RAC RACG (only used if pre-11.1 database is installed):
$GRID_HOME/log/<host>/racg

Cluster Synchronization Service daemon:
$GRID_HOME/log/<host>/cssd

Server Manager:
$GRID_HOME/log/<host>/srvm

HA Service Daemon Agent:
$GRID_HOME/log/<host>/agent/ohasd/oraagent_oracle11

HA Service Daemon CSS Agent:
$GRID_HOME/log/<host>/agent/ohasd/oracssdagent_root

HA Service Daemon ocssd Monitor Agent:
$GRID_HOME/log/<host>/agent/ohasd/oracssdmonitor_root

HA Service Daemon Oracle Root Agent:
$GRID_HOME/log/<host>/agent/ohasd/orarootagent_root

CRS Daemon Oracle Agent:
$GRID_HOME/log/<host>/agent/crsd/oraagent_oracle11

CRS Daemon Oracle Root Agent:
$GRID_HOME/log/<host>/agent/crsd/orarootagent_root

Grid Naming Service daemon:
$GRID_HOME/log/<host>/gnsd
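As a simple usage example, you can follow the Clusterware alert log on a node like this (substitute your own host name for <host>):

$ tail -f $GRID_HOME/log/<host>/alert<host>.log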



DataPump Import (IMPDP) Fails With Errors ORA-39083 & ORA-01861


I faced this error message while doing an import process on an 11g database.

My source DB version is 10g (10.2.0.4.3) on AIX 6.1, and my target DB version is 11g RAC (11.2.0.4.6) on AIX 7.1.

The errors:
ORA-39083: Object type PASSWORD_HISTORY failed to create with error:
ORA-01861: literal does not match format string

From the related MOS document:

The problem is that DBMS_PSWMG_IMPORT.IMPORT_HISTORY expects the password history dates to be in 'yyyy/mm/dd' format.

The issue is discussed in related bugs
Bug 13039027 – IPB01D : IMPDP ORA-39083: OBJECT TYPE PASSWORD_HISTORY FAILED TO CREATE WI
Bug 14521182 – DATAPUMP IMPORT PASSWORD_HISTORY ORA-39083,ORA-01847,ORA-01861
which are both closed with status ‘Not a Bug’.

For your information, for security reasons I changed my schema name to SCOOT in the logs below.

Complete logs here:
Master table “SYS”.”SYS_IMPORT_FULL_04″ successfully loaded/unloaded
Starting “SYS”.”SYS_IMPORT_FULL_04″: “/******** AS SYSDBA” directory=EXPDP_DIR dumpfile=SCOOT%U.dmp PARALLEL=8 logfile=impdp_SCOOT.log cluster=N DATA_OPTIONS=DISABLE_APPEND_HINT
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:”SCOOT” already exists
ORA-31684: Object type USER:”SCOOTUSR1″ already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PASSWORD_HISTORY
ORA-39083: Object type PASSWORD_HISTORY failed to create with error:
ORA-01861: literal does not match format string
Failing sql is:
DECLARE SUBTYPE HIST_RECORD IS SYS.DBMS_PSWMG_IMPORT.ARRAYOFHISTORYRECORDS; HIST_REC HIST_RECORD; i number := 0; BEGIN i := i+1; HIST_REC(i).USERNAME := ‘SCOOT’; HIST_REC(i).PASSWORD := ‘F99CA5C8EC2F8836′; HIST_REC(i).PASSWD_DATE := ’16-03-2015 15:03:34’; i := i+1; HIST_REC(i).USERNAME := ‘SCOOT’; HIST_REC(i).PASSWORD := ’98F7AE6C84E7A2E8′; HIST_REC(i).PASSWD_DATE :=
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

So here is the solution:

1. In the 10g source environment, first set the NLS_DATE_FORMAT environment variable:

$ export NLS_DATE_FORMAT=YYYY/MM/DD HH24:MI:SS

PS: If you are using AIX, you may hit this error message:
ksh: HH24:MI:SS: is not an identifier

If you hit this message, you can use the syntax below:

$ export NLS_DATE_FORMAT=”YYYY/MM/DD HH24:MI:SS”

verify:
$ echo $NLS_DATE_FORMAT
YYYY/MM/DD HH24:MI:SS

2. Re-run the Datapump export.

3. Perform the Datapump import into the 11g database using the new dump file.
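A minimal sketch of steps 2 and 3 (the directory, dump file, and schema names below are only placeholders taken from the log above):

$ export NLS_DATE_FORMAT="YYYY/MM/DD HH24:MI:SS"
$ expdp \"/ as sysdba\" schemas=SCOOT directory=EXPDP_DIR dumpfile=SCOOT%U.dmp parallel=8 logfile=expdp_SCOOT.log
$ impdp \"/ as sysdba\" directory=EXPDP_DIR dumpfile=SCOOT%U.dmp parallel=8 logfile=impdp_SCOOT.log cluster=N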

Reference:
ORA-39083 and ORA-06502 during IMPDP for PASSWORD_HISTORY (Doc ID 1553898.1)
DataPump Import (IMPDP) Fails With Errors ORA-39083 ORA-6550 PLS-00103 On Object Type PASSWORD_HISTORY [ID 1053162.1]


Data Pump Import Fails With ORA-38500: Unsupported operation: Oracle XML DB not present


I faced this error message during an export/import process in a scheduled crontab job.

The source and target DB versions are both 11.2.0.4.6, and the OS is AIX 7.1.

After investigating the issue, we noticed that you can face this error for two reasons.

1. The XML feature is not installed. Because we are attempting to invoke XML/XDB features during the import, the error is raised.

You need to query the view DBA_REGISTRY to check if the XDB and XML components are installed and valid on both the source and target databases:

select comp_id,comp_name,version,status from dba_registry;

If you need to install XML feature you can use this note:
Master Note for Oracle XML Database (XDB) Install / Deinstall [ID 1292089.1]
Data Pump Import Fails With ORA-942 ORA-06512 And ORA-38500 [ID 1350414.1]

2. The involved table is a standard table and does not use XML DB. So why is the XML DB error reported?

The issue is generated when tables have differences. The error happens because of metadata differences between the exported table in the dump file and the existing table in the target DB.

In general, whenever there is a difference between metadata of a table from a dumpfile and a pre-existing table in a target DB, the IMPDP code tries to find the difference between the metadata of the table to find the correct metadata. To find the metadata difference, we use metadata code which internally uses the XDB feature to differentiate metadata XML.

If you repeat the export and import process, the Data Pump import will succeed.

For more details you can refer this note:
Ora-38500 Raised By IMPDP For Table That Does Not Use XML DB (Doc ID 1424643.1)

