ORA-39126: Data Pump Worker Fatal Error - Fix Export/Import
ORA-39126: Worker Unexpected Fatal Error in KUPW$WORKER
Error Overview
Error Text: ORA-39126: Worker unexpected fatal error in KUPW$WORKER.MAIN [<context>]
ORA-39126 occurs when a Data Pump worker process crashes unexpectedly during an export (expdp) or import (impdp) operation. Worker processes execute the internal PL/SQL package KUPW$WORKER, which drives object-level processing in Data Pump. When a worker dies with an unhandled exception, Oracle raises ORA-39126 along with contextual information in brackets that identifies what the worker was doing when it failed.
The error terminates the worker but does not necessarily abort the entire Data Pump job — other workers may continue. However, objects being processed by the failed worker will be skipped or partially processed. ORA-39126 always appears alongside other errors that identify the root cause; the bracketed context (e.g., [PARSE], [FETCH], [WRITE]) points to the phase that failed.
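The bracketed context and the companion errors can be pulled straight from the log file with standard text tools. A minimal sketch, assuming GNU grep/sed; the sample log lines below are hypothetical stand-ins for a real export log:

```shell
# Hypothetical log lines standing in for a real Data Pump log file:
cat > /tmp/export_sample.log <<'EOF'
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.FETCH_METADATA_OBJECTS [TABLE:"HR"."EMP"]
ORA-31642: the following SQL statement fails:
ORA-06512: at "SYS.DBMS_METADATA", line 1245
EOF

# Show every ORA- line; the errors that follow ORA-39126 name the root cause:
grep -n "ORA-" /tmp/export_sample.log

# Extract just the bracketed context from the ORA-39126 line:
grep 'ORA-39126' /tmp/export_sample.log | sed -n 's/.*\[\(.*\)\].*/\1/p'
```

Run the same two commands against your actual LOGFILE to see the failing phase and object at a glance.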
Common Causes
1. Corrupt or Inconsistent Database Object
- A table, index, or other object has internal corruption that prevents Data Pump from reading it
- An object’s metadata (stored in the data dictionary) is inconsistent with its physical structure
- A partially created or dropped object was left in an inconsistent state
2. Insufficient PGA Memory
- The worker process ran out of PGA memory while processing a large or complex object
- PGA_AGGREGATE_LIMIT hit during a complex LOB or XMLType export
- SORT or HASH operations during an index-organized table export consumed all available PGA
3. Oracle Bug or Version-Specific Issue
- A known Oracle bug in the specific release affects a particular object type
- Combination of object features (compression + LOB + partitioning) triggers a worker crash
- The crash stops after applying a database patch, confirming a version-specific bug
4. Privileges or Object Access Issues
- Export user lacks privileges to read a dependent object referenced in the DDL
- A view definition references an object the exporting user cannot access
- Database link referenced by an object is invalid and causes the worker to fail
5. Invalid or Inaccessible Objects
- An object in the export scope is INVALID and its DDL cannot be generated
- A package or trigger references a table that no longer exists
- Object statistics or metadata read during the export are corrupted
Diagnostic Queries
Find ORA-39126 Details in the Data Pump Log and Trace Files
-- Find recent Data Pump jobs and their status:
SELECT owner_name,
       job_name,
       operation,
       job_mode,
       state,
       degree,
       attached_sessions
FROM   dba_datapump_jobs
ORDER  BY owner_name, job_name;
-- Data Pump writes job messages to the log file named by the LOGFILE
-- parameter (there is no DBA_DATAPUMP_LOG dictionary view). Review it
-- on the server, e.g.:
--   grep "ORA-" /path/to/export.log

Identify the Problem Object From the Trace File
-- Find the Data Pump trace file directory:
SELECT value FROM v$parameter WHERE name = 'diagnostic_dest';

-- Trace files are in: <diagnostic_dest>/diag/rdbms/<db_name>/<instance>/trace/
-- Look for files named dp*.trc or kupw*.trc created around the failure time
-- Check for invalid objects in the export schema:
SELECT owner,
       object_name,
       object_type,
       status,
       last_ddl_time
FROM   dba_objects
WHERE  owner = UPPER('&schema_name')
AND    status = 'INVALID'
ORDER  BY object_type, object_name;
-- Check for objects with accessibility issues:
SELECT owner,
       object_name,
       object_type,
       status
FROM   dba_objects
WHERE  owner = UPPER('&schema_name')
AND    object_type IN ('TABLE', 'INDEX', 'VIEW', 'PACKAGE', 'PROCEDURE', 'FUNCTION')
AND    status = 'INVALID'
ORDER  BY object_type;

Check for Object-Level Corruption
-- List segments in the export schema (orphaned or unexpected segments can
-- indicate a partially dropped object):
SELECT owner, segment_name, segment_type, partition_name
FROM   dba_segments
WHERE  owner = UPPER('&schema_name')
ORDER  BY segment_type, segment_name;

-- Validate the structure of a suspect table (reads table and index blocks):
ANALYZE TABLE &schema_name..&table_name VALIDATE STRUCTURE CASCADE;

-- Review block corruption already recorded by RMAN or DBVERIFY:
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;
-- Check data dictionary consistency for the schema:
SELECT table_name,
       num_rows,
       last_analyzed,
       status
FROM   dba_tables
WHERE  owner = UPPER('&schema_name')
AND    status != 'VALID'
ORDER  BY table_name;
-- Identify LOB segments and their sizes:
SELECT l.owner,
       l.table_name,
       l.column_name,
       l.segment_name,
       s.bytes / 1024 / 1024 AS lob_size_mb
FROM   dba_lobs l
JOIN   dba_segments s
       ON l.segment_name = s.segment_name
      AND l.owner = s.owner
WHERE  l.owner = UPPER('&schema_name')
ORDER  BY s.bytes DESC;

Check PGA Usage During Data Pump
-- Current PGA usage and limits:
SELECT name,
       value / 1024 / 1024 AS value_mb
FROM   v$pgastat
WHERE  name IN ('total PGA inuse',
                'total PGA allocated',
                'maximum PGA allocated',
                'PGA memory freed back to OS')
ORDER  BY name;
-- PGA parameters:
SELECT name, value
FROM   v$parameter
WHERE  name IN ('pga_aggregate_target', 'pga_aggregate_limit', 'workarea_size_policy')
ORDER  BY name;
-- Sessions consuming large PGA (during an active Data Pump job):
SELECT s.sid,
       s.serial#,
       s.username,
       s.program,
       p.pga_used_mem / 1024 / 1024 AS pga_used_mb,
       p.pga_alloc_mem / 1024 / 1024 AS pga_alloc_mb
FROM   v$session s
JOIN   v$process p ON s.paddr = p.addr
WHERE  s.username IS NOT NULL
ORDER  BY p.pga_alloc_mem DESC
FETCH FIRST 20 ROWS ONLY;

Step-by-Step Resolution
1. Locate and Read the Worker Trace File
# Find trace files generated around the time of the failure:
ls -lt $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/*.trc | head -20
# Search for ORA-39126 context in trace files:
grep -l "ORA-39126\|KUPW" $ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID/trace/*.trc | head -5
# View the relevant trace file:
tail -200 /path/to/relevant_trace.trc | grep -A 20 "ORA-39126\|fatal"

2. Identify and Skip the Problem Object
If the trace or log identifies the failing object, exclude it from the export:
# Run expdp excluding the problematic object:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=export_%U.dmp \
  LOGFILE=export.log \
  SCHEMAS=SCHEMA_NAME \
  EXCLUDE=TABLE:"IN ('PROBLEM_TABLE1', 'PROBLEM_TABLE2')"
# Or exclude specific object types:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=export_%U.dmp \
  SCHEMAS=SCHEMA_NAME \
  EXCLUDE=STATISTICS

3. Increase PGA Memory
-- Temporarily increase the instance-wide PGA limits (Data Pump workers are
-- server processes, so instance-level settings are what matter):
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE=MEMORY;
ALTER SYSTEM SET pga_aggregate_limit = 8G SCOPE=MEMORY;
-- Or increase for the specific session (if connecting interactively):
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;  -- 100 MB

Then retry the Data Pump job:
expdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=export.dmp SCHEMAS=SCHEMA_NAME

4. Fix Invalid Objects Before Export
-- Recompile all invalid objects in the schema:
BEGIN
  DBMS_UTILITY.COMPILE_SCHEMA(
    schema      => 'SCHEMA_NAME',
    compile_all => FALSE
  );
END;
/
-- Or use utlrp.sql for all schemas:
@?/rdbms/admin/utlrp.sql
-- Verify no invalid objects remain:
SELECT object_name, object_type, status
FROM   dba_objects
WHERE  owner = 'SCHEMA_NAME'
AND    status = 'INVALID';

5. Export Only Specific Object Types (Divide and Conquer)
# Export metadata only first to identify what fails:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=metadata_only.dmp \
  SCHEMAS=SCHEMA_NAME \
  CONTENT=METADATA_ONLY
# Then export data only:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=data_only.dmp \
  SCHEMAS=SCHEMA_NAME \
  CONTENT=DATA_ONLY
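To automate the table-by-table narrowing that follows, a small script can generate one single-table expdp command per table. A hedged sketch: the schema name, credentials, and table list are placeholders, and in practice the list would come from a dba_tables query:

```shell
# Hypothetical schema and table list -- substitute your own
# (e.g. generated from dba_tables for the export schema).
SCHEMA=SCHEMA_NAME
TABLES="EMPLOYEES DEPARTMENTS ORDERS"

# Emit one expdp command per table; run them one at a time and note
# which table's log shows ORA-39126.
for T in $TABLES; do
  printf 'expdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=%s.dmp LOGFILE=%s.log TABLES=%s.%s\n' \
    "$T" "$T" "$SCHEMA" "$T"
done > /tmp/run_per_table.sh

cat /tmp/run_per_table.sh
```

Generating the commands first (rather than running expdp inside the loop) lets you review them and keeps one log file per table, so the failing object is unambiguous.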
# Narrow down: export table by table
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=single_table.dmp \
  TABLES=SCHEMA_NAME.SPECIFIC_TABLE

6. Attach to a Failed Job and Restart or Kill It
# Attach to the failed Data Pump job:
expdp system/password ATTACH=JOB_NAME
# At the Export> prompt:
#   STATUS           -- Check current status
#   CONTINUE_CLIENT  -- Resume if paused
#   KILL_JOB         -- Terminate if unrecoverable

-- Or kill from SQL*Plus if the job is stuck:
SELECT job_name, state FROM dba_datapump_jobs WHERE state != 'NOT RUNNING';
-- Kill the job (attach to the existing job; DBMS_DATAPUMP.OPEN would
-- create a new one):
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.ATTACH(job_name => 'JOB_NAME', job_owner => 'SYSTEM');
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 1, keep_master => 0);
END;
/

7. Apply Oracle Patches for Known Bugs
# Check current patch level:
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i "patch\|PSU\|RU"
# Search My Oracle Support for the specific ORA-39126 context string
# (the text in brackets) to find known bug fixes.
# Common fix: apply the latest Database Release Update (RU).

Prevention Strategies
1. Validate Schema Objects Before Running Data Pump
-- Pre-export validation script:
SELECT object_type,
       COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  owner = 'SCHEMA_NAME'
AND    status = 'INVALID'
GROUP  BY object_type;
-- Expect zero rows. Fix any invalid objects before exporting.

2. Use PARALLEL Parameter Carefully
# Reduce parallel degree when workers are crashing:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=export_%U.dmp \
  SCHEMAS=SCHEMA_NAME \
  PARALLEL=1
# Single worker: easier to diagnose, no concurrent worker crashes

3. Export Statistics Separately
# Exclude statistics from the main export (avoids stats-related worker crashes):
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=export.dmp \
  SCHEMAS=SCHEMA_NAME \
  EXCLUDE=STATISTICS
-- Separately export statistics with DBMS_STATS (the statistics table
-- STATS_BACKUP must already exist; create it with DBMS_STATS.CREATE_STAT_TABLE):
BEGIN
  DBMS_STATS.EXPORT_SCHEMA_STATS(
    ownname => 'SCHEMA_NAME',
    stattab => 'STATS_BACKUP',
    statid  => 'PRE_EXPORT',
    statown => 'SCHEMA_NAME'
  );
END;
/

4. Test Data Pump on Dev Before Production Exports
# Run a metadata-only export on dev after every schema change:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=dev_meta_check.dmp \
  SCHEMAS=SCHEMA_NAME \
  CONTENT=METADATA_ONLY \
  LOGFILE=dev_meta_check.log
# Verify no errors in the log before running the full production export.

Related Errors
Section titled “Related Errors”- ORA-39083 - Object type failed to create (import side)
- ORA-39166 - Object was not found or could not be exported
- ORA-31693 - Table data object failed to load/unload
- ORA-06512 - At line (PL/SQL stack trace)
Emergency Response
Quick Workaround — Skip the Problem Object
# Identify the failing object from the Data Pump log, then exclude it:
expdp system/password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=export_skip.dmp \
  SCHEMAS=SCHEMA_NAME \
  EXCLUDE=TABLE:"= 'PROBLEM_TABLE'"

If the Job Is Stuck
# Kill and restart:
expdp system/password ATTACH=JOB_NAME
Export> KILL_JOB

# Then re-run with the problematic object excluded.

Post-Resolution Validation
-- After a successful export, verify the dump file contains expected objects.
-- impdp with SQLFILE generates a SQL script from the dump (no actual import):
-- impdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=export.dmp SQLFILE=check.sql
-- Count objects per type in the dump:
-- Review the generated SQL file for expected CREATE statements.
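Counting CREATE statements per object type in the SQLFILE output gives a quick sanity check that the dump holds what you expect. A sketch, assuming GNU grep; the check.sql content below is a hypothetical stand-in for the impdp-generated file:

```shell
# Hypothetical excerpt standing in for the impdp-generated check.sql:
cat > /tmp/check.sql <<'EOF'
CREATE TABLE "HR"."EMP" (...);
CREATE TABLE "HR"."DEPT" (...);
CREATE INDEX "HR"."EMP_IDX" ON "HR"."EMP" (...);
CREATE OR REPLACE VIEW "HR"."EMP_V" AS SELECT ...;
EOF

# Count CREATE statements per object type, folding OR REPLACE variants in:
grep -oE '^CREATE (OR REPLACE )?[A-Z]+' /tmp/check.sql | sed 's/OR REPLACE //' | sort | uniq -c
```

Compare the resulting counts against a dictionary query on the source schema (objects per object_type) to confirm nothing was skipped.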