Wednesday, December 18, 2019

Things to Know When You Do a 19c Upgrade

Oracle 19c Upgrade Checklist and Best Practices

This document outlines key considerations and recommended steps when performing an upgrade to Oracle Database 19c, focusing on best practices for a smooth transition and optimal post-upgrade performance.

1. Recommended Upgrade Method

  • autoupgrade.jar: The autoupgrade.jar utility is the recommended and most robust way to perform Oracle 19c upgrades. It automates many pre-checks, pre-upgrade fixes, and post-upgrade tasks, simplifying the process and reducing manual errors.

2. Pre-Upgrade Checks

Before initiating the upgrade, ensure the following:

a. Dictionary Statistics

Verify that dictionary and fixed object statistics have been gathered recently. This is crucial for the optimizer's performance during and after the upgrade.

column OPERATION format a40
set linesize 200
select to_char(max(END_TIME),'DD-MON-YY hh24:mi') LATEST, OPERATION
from DBA_OPTSTAT_OPERATIONS
where OPERATION in ('gather_dictionary_stats','gather_fixed_objects_stats')
group by operation;

b. Stats on Clustered Indexes (If not using autoupgrade.jar)

If you are not using autoupgrade.jar (which typically handles this), it's recommended to gather statistics on critical SYS schema clustered indexes. This helps the optimizer in the new version.

exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_index_stats('SYS','I_OBJ#');
exec dbms_stats.gather_index_stats('SYS','I_FILE#_BLOCK#');
exec dbms_stats.gather_index_stats('SYS','I_TS#');
exec dbms_stats.gather_index_stats('SYS','I_USER#');
exec dbms_stats.gather_index_stats('SYS','I_TOID_VERSION#');
exec dbms_stats.gather_index_stats('SYS','I_MLOG#');
exec dbms_stats.gather_index_stats('SYS','I_RG#');

3. Post-Upgrade Actions

After the upgrade is complete, consider these immediate actions:

  • Adjust Stats History Retention:

    exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(14);
    

    This sets the statistics history retention to 14 days.

  • Set Key Parameters in SPFILE:

    • _cursor_obsolete_threshold=1024

    • deferred_segment_creation=false

    • _sql_plan_directive_mgmt_control=0

    • Set optimizer_adaptive_statistics=FALSE explicitly in your SPFILE (It's recommended to explicitly set this to FALSE as adaptive statistics can sometimes lead to unexpected plan changes.)
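A minimal sketch of applying these settings, written to the SPFILE so they take effect at the next restart (underscore parameters must be quoted; adjust scope to your change policy):

    ALTER SYSTEM SET "_cursor_obsolete_threshold"=1024 SCOPE=SPFILE;
    ALTER SYSTEM SET deferred_segment_creation=FALSE SCOPE=BOTH;
    ALTER SYSTEM SET "_sql_plan_directive_mgmt_control"=0 SCOPE=SPFILE;
    ALTER SYSTEM SET optimizer_adaptive_statistics=FALSE SCOPE=BOTH;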

4. Optimizer Parameters

  • COMPATIBLE and OPTIMIZER_FEATURES_ENABLE:

    • Ensure the COMPATIBLE parameter is set to the latest version (e.g., 19.0.0).

    • The OPTIMIZER_FEATURES_ENABLE parameter should also be set to the latest version ('19.1.0') to leverage the latest optimizer enhancements.
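For example (a sketch; adjust values to your target release):

    SHOW PARAMETER optimizer_features_enable
    ALTER SYSTEM SET optimizer_features_enable='19.1.0' SCOPE=BOTH;
    -- COMPATIBLE can only ever be raised, and requires a restart:
    ALTER SYSTEM SET compatible='19.0.0' SCOPE=SPFILE;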

5. Performance Analysis and Tuning

a. Collect Execution Plans Before Upgrade

Capture existing execution plans to compare them after the upgrade and identify any regressions.

  • From Cursor Cache: Query V$SQL_PLAN or GV$SQL_PLAN for active and frequently executed SQL statements.

  • Using AWR: Analyze AWR reports for top SQL statements.

  • SQL Tuning Sets (STS): The most robust method. Create an STS from the AWR or cursor cache to capture SQL statements, their execution statistics, and execution plans.

    • This allows you to replay the workload later using SQL Performance Analyzer (SPA).
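A minimal sketch of capturing from the cursor cache with DBMS_SQLTUNE, assuming an STS named PRE_UPG_STS (the name is illustrative):

    BEGIN
      DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'PRE_UPG_STS');
      -- Poll the cursor cache every 5 minutes for one hour
      DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
        sqlset_name     => 'PRE_UPG_STS',
        time_limit      => 3600,
        repeat_interval => 300);
    END;
    /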

b. Compare AWR Snapshots

  • AWRDDRPT.sql: Use the AWRDDRPT.sql script (located in $ORACLE_HOME/rdbms/admin) to generate AWR Diff reports. This allows you to compare performance metrics between AWR snapshots taken before and after the upgrade.

  • Export AWR Data: You can export AWR data using the awrextr.sql script (also in $ORACLE_HOME/rdbms/admin) and load it elsewhere with awrload.sql, to analyze it on a different database or for long-term storage.

c. SQL Tuning Sets (STS) and SQL Performance Analyzer (SPA)

  • Capture STS: Capture a representative workload into a SQL Tuning Set.

  • Load STS: Load this STS into the upgraded database.

  • SQL Performance Analyzer (SPA): Use SPA (part of Real Application Testing) to compare the performance of the SQL statements in the STS before and after the upgrade. SPA identifies SQL statements with plan changes or performance regressions.
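A hedged sketch of the SPA flow with DBMS_SQLPA, assuming the illustrative STS PRE_UPG_STS from above has been imported into the upgraded database:

    DECLARE
      l_task VARCHAR2(64);
    BEGIN
      l_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(
                  sqlset_name => 'PRE_UPG_STS',
                  task_name   => 'SPA_UPG_TASK');
      -- Build the "before" trial from the statistics captured in the STS
      DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA_UPG_TASK',
          execution_type => 'CONVERT SQLSET', execution_name => 'before_upg');
      -- Re-execute the statements on the upgraded database
      DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA_UPG_TASK',
          execution_type => 'TEST EXECUTE', execution_name => 'after_upg');
      -- Compare the two trials
      DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA_UPG_TASK',
          execution_type => 'COMPARE PERFORMANCE', execution_name => 'cmp');
    END;
    /
    SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_UPG_TASK') FROM dual;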

d. SQL Plan Management (SPM)

SPM is a powerful feature to control and stabilize execution plans.

  • Configuration:

    • DBMS_SPM.CONFIGURE('PLAN_RETENTION_WEEKS', 5); (Default is 53 weeks)

    • DBMS_SPM.CONFIGURE('SPACE_BUDGET_PERCENT', 5); (Default is 10%)

  • Baseline Capture:

    • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE (Set this to start recording new plans as baselines. Remember to turn it off after capturing.)

  • Baseline Selection/Usage:

    • OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE (Ensures the optimizer uses existing baselines.)

    • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = FALSE (Turn off capture during normal operation.)

  • Evolution:

    • DBMS_SPM.REPORT_AUTO_EVOLVE_TASK: Reports on the automatic evolution task.

    • DBMS_SPM.CREATE_EVOLVE_TASK: Manually creates a task to evolve (verify and accept) new plans into baselines.
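A short sketch of these calls (the SQL handle is a placeholder you would take from DBA_SQL_PLAN_BASELINES):

    EXEC DBMS_SPM.CONFIGURE('plan_retention_weeks', 5);
    EXEC DBMS_SPM.CONFIGURE('space_budget_percent', 5);

    DECLARE
      l_task   VARCHAR2(64);
      l_report CLOB;
    BEGIN
      -- Manually evolve unaccepted plans for one statement
      l_task := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_abc123'); -- placeholder handle
      -- Report on the automatic evolve task
      l_report := DBMS_SPM.REPORT_AUTO_EVOLVE_TASK;
    END;
    /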

e. SQL Tuning Advisor (STA)

  • Utilize the SQL Tuning Advisor to analyze problematic SQL statements identified during post-upgrade testing. It can recommend various tuning actions, including new indexes, SQL profile creation, or SQL structure changes.

f. Export/Import STS to New DB

  • After capturing an STS from the source database, you can export it and import it into the target (upgraded) database for performance analysis.

  • DBMS_SPM.LOAD_PLANS_FROM_SQLSET: This procedure can be used to load plans from an STS into the SQL Plan Baseline (SPM) repository of the new database.
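For instance, loading every plan in the (illustrative) PRE_UPG_STS into the SPM repository as enabled, non-fixed baselines:

    DECLARE
      l_plans PLS_INTEGER;
    BEGIN
      l_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
                   sqlset_name => 'PRE_UPG_STS',
                   fixed       => 'NO',
                   enabled     => 'YES');
      DBMS_OUTPUT.PUT_LINE(l_plans || ' plans loaded');
    END;
    /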

g. Workload Capture and Replay

  • Real Application Testing (RAT): This feature allows you to capture a real production workload from the source database and replay it on the upgraded database. This provides a highly accurate way to test the impact of the upgrade on performance.

    • Licensing note: SPA and workload capture/replay are both part of Real Application Testing, which requires a separate license. SQL Plan Management, by contrast, is included with Enterprise Edition at no additional cost.

h. Automatic SPM (Exadata 19c)

  • On Exadata with Oracle 19c, Automatic SPM can further simplify SQL plan management by automatically managing baselines for frequently executed SQL.

By following these guidelines, you can significantly improve the success rate and performance stability of your Oracle 19c database upgrade.


Tuesday, December 3, 2019

How to upgrade DB from 12.1 to 19.3

Oracle Database 12.1 to 19.3 Upgrade Steps

This document provides a step-by-step guide for upgrading an Oracle Database from version 12.1 to 19.3. It outlines key actions, from software installation to post-upgrade tasks, to ensure a successful and efficient upgrade process.

Pre-Upgrade Preparations

  1. Install Oracle 19c Software:

    • Install the Oracle Database 19.3.0 software binaries into a new Oracle Home directory. Do not install it over your existing 12.1 Oracle Home. This new home will be referred to as 19.3_Home.

  2. Remove Obsolete init Parameters:

    • Review your current init.ora or SPFILE for any parameters that are no longer supported or are obsolete in Oracle 19c. Remove or adjust these parameters as necessary. Refer to Oracle documentation for a complete list of obsolete parameters.

  3. Stop the Listener:

    • Before proceeding with the database upgrade, stop the Oracle Listener associated with your 12.1 database.

    • lsnrctl stop

  4. Gather Dictionary and Fixed Objects Statistics:

    • It is critical to have up-to-date dictionary and fixed object statistics before starting the upgrade. This helps the upgrade process itself and ensures optimal performance post-upgrade.

    EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
    
  5. Empty Recycle Bin:

    • Purge the recycle bin to avoid potential issues during the upgrade.

    PURGE DBA_RECYCLEBIN;
    
  6. Check and Update Time Zone File:

    • Verify the current time zone file version and update it to the latest version compatible with Oracle 19c if necessary. This is crucial for consistent time zone handling.

    • Refer to Oracle Support Note "Updating the Time Zone File and Timestamp with Time Zone Data in Oracle Database" for detailed instructions.
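    • To check the versions involved (a sketch; DBMS_DST reports the latest time zone file available to the 19c home):

    SELECT VERSION FROM V$TIMEZONE_FILE;
    SELECT DBMS_DST.GET_LATEST_TIMEZONE_VERSION FROM dual;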

  7. Run Pre-Upgrade Information Tool:

    • Execute the preupgrade.jar tool from the 19c Oracle Home, pointing it to your 12.1 database. This tool performs a comprehensive analysis of your database for potential upgrade issues and generates fix-up scripts.

    (12.1_Home)/jdk/bin/java -jar (19.3_Home)/rdbms/admin/preupgrade.jar FILE TEXT DIR /home/oracle/upgrade
    
    • After running the tool, execute the generated fix-up script:

    @/home/oracle/upgrade/preupgrade_fixups.sql
    
    • Review the preupgrade.log and preupgrade_info.txt files for any remaining warnings or manual actions required.

Upgrade Execution

  1. Stop RAC Instances (if applicable):

    • If you are upgrading a Real Application Clusters (RAC) database, stop the second (and subsequent) RAC instances.

    • Disable the cluster on the first instance and then stop the first instance.

  2. Set Environment Variables for 19c:

    • Ensure your environment variables (especially ORACLE_HOME and PATH) are set to point to the new 19.3 Oracle Home.

  3. Start Database in Upgrade Mode:

    • Start the database from the 19.3 Oracle Home in upgrade mode.

    sqlplus / as sysdba
    startup upgrade
    
  4. Invoke Database Upgrade:

    • Execute the dbupgrade utility from the 19.3 Oracle Home. This command initiates the actual database upgrade process.

    (19.3_Home)/bin/dbupgrade
    
    • Monitor the output of this command closely for any errors or warnings.

Post-Upgrade Actions

  • After dbupgrade completes, the database will typically shut down.

  • Start the database in normal mode from the 19.3 Oracle Home.

  • Run the generated post-upgrade fixups and recompile invalid objects as recommended by Oracle documentation (e.g., postupgrade_fixups.sql, utlrp.sql); a short sketch follows this list.

  • Re-enable cluster services and start all RAC instances if applicable.

  • Perform performance analysis and tuning as outlined in the "Oracle 19c Upgrade Checklist and Best Practices" document, including AWR comparisons, STS analysis, and SPM configuration.
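A minimal post-upgrade sketch, assuming the preupgrade output directory used earlier:

    sqlplus / as sysdba
    STARTUP
    @/home/oracle/upgrade/postupgrade_fixups.sql
    @?/rdbms/admin/utlrp.sql
    SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';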

Following these steps carefully will help ensure a successful upgrade of your Oracle 12.1 database to 19.3.

Wednesday, September 25, 2019

Oracle 19c Features and Enhancements

This document highlights several key features and enhancements introduced or improved in Oracle Database 19c, focusing on their functionality and practical implications.

1. Real-time Statistics (Exadata Only)

Oracle Database 19c introduces real-time statistics, extending online statistics gathering to include conventional DML statements. This feature allows statistics to be collected "on-the-fly" during INSERT, UPDATE, and MERGE operations.

  • Availability: This feature is only available on Exadata and Exadata Cloud Service.

  • Functionality: Statistics are gathered dynamically during conventional DML, providing the optimizer with up-to-date information for plan generation.

  • Parameters (Default TRUE):

    • _optimizer_gather_stats_on_conventional_dml: Controls whether real-time statistics are gathered.

    • _optimizer_use_stats_on_conventional_dml: Controls whether the optimizer uses real-time statistics.

    • _optimizer_stats_on_conventional_dml_sample_rate: Defaults to 100%, indicating the sampling rate for collection.

  • Disabling: You can set these parameters to FALSE to disable real-time statistics.

  • Impact on Dictionary Views:

    • USER_TABLES.NUM_ROWS: This column does not reflect real-time statistics changes.

    • USER_TAB_STATISTICS.NOTES: Will show STATS_ON_CONVENTIONAL_DML if real-time stats are active.

    • USER_TAB_COL_STATISTICS.NOTES: Will also show STATS_ON_CONVENTIONAL_DML for columns.

  • Limitations:

    • Direct Path INSERT ... SELECT: Real-time statistics have no effect.

    • DELETE operations: Real-time statistics have no effect.

    • Gathering statistics for a table (e.g., using DBMS_STATS) will wipe out the real-time statistics for that table.

2. High-Frequency Automatic Optimizer Statistics

This feature allows for more frequent, granular collection of optimizer statistics, improving the accuracy of execution plans.

  • Enabling/Configuring:

    EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON'); -- Ensure auto tasks are on
    EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300'); -- Set interval to 300 seconds (5 minutes)
    
  • Interval: The minimum allowed value for AUTO_TASK_INTERVAL is 60 seconds, and the maximum is 900 seconds.

  • Monitoring: You can check the execution status of automatic statistics gathering tasks using:

    SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS' ) AS BEGIN_TIME,
           TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
           TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
    FROM DBA_AUTO_STAT_EXECUTIONS
    ORDER BY OPID;
    

3. Validate SPFILE Parameters for Primary and Standby

Oracle 19c introduces a command to validate SPFILE parameters, which is particularly useful in Data Guard environments to ensure consistency between primary and standby databases.

  • Command:

    DGMGRL> VALIDATE DATABASE {database-name} SPFILE;
    

    Run this from the Data Guard Broker command-line interface (DGMGRL), replacing {database-name} with the actual database name. It reports SPFILE parameter discrepancies between the primary and the standby.

4. Schema Only Accounts

Introduced in Oracle 18c, schema-only accounts are a security best practice to restrict direct login access to schema owners (proxy authentication itself dates back to much earlier releases).

  • Purpose:

    • Restrict Direct Access: Prevents users from directly logging in as the schema owner using shared credentials.

    • Proxy Connections: Users access the schema to perform DDL/DML changes via proxy connections, where a different user (with a password) connects and then proxies to the schema-only account.

    • Auditing: Allows for better auditing, as you can track which specific proxy user performed which tasks within the schema.

  • NO AUTHENTICATION Clause: A schema-only account is created with the NO AUTHENTICATION clause, i.e. without a password. Direct connections to the account are therefore impossible, but proxy connections still work, as shown in the sketch below.
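A minimal sketch, with illustrative names, of a schema-only account accessed through a proxy user:

    CREATE USER app_schema NO AUTHENTICATION;
    GRANT CREATE SESSION, CREATE TABLE TO app_schema;

    CREATE USER app_admin IDENTIFIED BY "Str0ngPwd#1";
    GRANT CREATE SESSION TO app_admin;
    ALTER USER app_schema GRANT CONNECT THROUGH app_admin;

    -- Proxy connect: authenticates as app_admin, lands in the app_schema session
    -- sqlplus app_admin[app_schema]/"Str0ngPwd#1"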

5. Automatic Replication of Restore Points from Primary to Standby

In Oracle Data Guard, restore points can now be automatically replicated from the primary database to the standby database.

  • Mechanism: Restore points are replicated through the redo stream, which is then applied by the Managed Recovery Process (MRP) on the standby.

  • Primary Database State: For this replication to occur, the primary database must be in OPEN mode.

6. DML Operations on Active Data Guard Standby Databases

Oracle 19c enhances DML redirection capabilities for Active Data Guard, allowing DML operations to be performed on a read-only standby database by transparently redirecting them to the primary.

  • Enabling DML Redirect: This feature needs to be enabled on both the Primary and Standby databases.

    ALTER SYSTEM SET adg_redirect_dml=TRUE SCOPE=BOTH;
    
  • Connection Requirement: When connecting to the standby to perform DML that will be redirected, you must connect using a username/password. Connecting with sqlplus / as sysdba (OS authentication) will not work for DML redirection.

7. Inline External Table - EXTERNAL Clause (Zero DDL)

Oracle 19c introduces the ability to define external tables directly within a SQL query using the EXTERNAL clause, eliminating the need for separate DDL statements to create the external table object. This is often referred to as "Zero DDL" for external tables.

  • Concept: There is no need for an external table to be explicitly created as a database object. The definition is embedded directly in the SELECT statement.

  • Example: This example reads data from MY.txt (located in the MY_DIR directory), assuming it's a CSV file with three fields: object_id, owner, and object_name.

    SELECT *
    FROM EXTERNAL (
           (
             object_id   NUMBER,
             owner       VARCHAR2(128),
             object_name VARCHAR2(128)
           )
           TYPE oracle_loader
           DEFAULT DIRECTORY MY_DIR
           ACCESS PARAMETERS (
             RECORDS DELIMITED BY NEWLINE
             BADFILE MY_DIR
             LOGFILE MY_DIR:'inline_ext_tab_as_%a_%p.log'
             DISCARDFILE MY_DIR
             FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
             MISSING FIELD VALUES ARE NULL (
               object_id,
               owner,
               object_name
             )
           )
           LOCATION ('MY.txt')
           REJECT LIMIT UNLIMITED
         ) inline_ext_tab
    ORDER BY 1;
    
    • object_id NUMBER, owner VARCHAR2(128), object_name VARCHAR2(128): Defines the column structure of the external data.

    • TYPE oracle_loader: Specifies the access driver.

    • DEFAULT DIRECTORY MY_DIR: Specifies the database directory object where the external file is located.

    • ACCESS PARAMETERS (...): Defines how the data is parsed (e.g., RECORDS DELIMITED BY NEWLINE, FIELDS CSV).

    • LOCATION ('MY.txt'): Specifies the external data file.

    • REJECT LIMIT UNLIMITED: Allows all rows to be processed, even if some have errors.

    • inline_ext_tab: This is an alias for the inline external table definition.

Wednesday, July 31, 2019

Udev rules - SYMLINK - Device Persistence - Oracle ASM - Linux

Udev Rules for Oracle ASM Device Persistence on Linux

This document explains how to configure Udev rules in Linux to ensure persistent device naming for Oracle Automatic Storage Management (ASM) disks. This is crucial for maintaining stable disk paths across reboots, which is a requirement for ASM.

Understanding Device Persistence with Udev

In Linux, device names like /dev/sda, /dev/sdb, etc., are not guaranteed to be consistent across reboots. This can cause issues for applications like Oracle ASM, which rely on stable paths to storage. Udev is a device manager for the Linux kernel that allows you to define rules to create persistent symbolic links (symlinks) to your disks, ensuring they always have the same, predictable name regardless of their boot-time enumeration.

Steps to Configure Udev Rules for ASM

This example demonstrates the process for a single disk (/dev/sda), but the principles apply to multiple disks.

1. Present Raw Disks

Ensure that the raw disks intended for Oracle ASM are presented to the Linux operating system. These disks should be unpartitioned and not formatted with any file system.

2. Identify the SCSI ID of the Disk

The SCSI ID (or ID_SERIAL) provides a unique, persistent identifier for the disk. This is the key to creating a stable Udev rule.

  • Command:

    /lib/udev/scsi_id -g -u -d /dev/sda
    

    (Replace /dev/sda with the actual device path of your raw disk.)

  • Example Output:

    3600224800cbc991b76c2a957f833fc66
    

    This hexadecimal string is the unique SCSI ID for the disk.

3. Create/Update the Udev Rules File

Create a new Udev rules file (e.g., 99-asm.rules) in the /etc/udev/rules.d/ directory. The 99 prefix ensures that these rules are processed late in the Udev sequence, typically after other system-generated rules.

  • File: /etc/udev/rules.d/99-asm.rules

  • Content Example (for one disk):

    KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="3600224800cbc991b76c2a957f833fc66", SYMLINK+="asmdatadisk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
    • KERNEL=="sd*": Matches any block device with a name starting with sd (e.g., sda, sdb, sdc).

    • SUBSYSTEM=="block": Specifies that the rule applies to block devices.

    • ENV{DEVTYPE}=="disk": Ensures the rule applies only to whole disks, not partitions.

    • ENV{ID_SERIAL}=="3600224800cbc991b76c2a957f833fc66": This is the critical part. It matches the unique SCSI ID obtained in step 2.

    • SYMLINK+="asmdatadisk1": Creates a symbolic link named asmdatadisk1 in /dev/ (e.g., /dev/asmdatadisk1) that points to the actual device (e.g., /dev/sda). The += ensures that if other rules also create symlinks, this one is added.

    • OWNER="grid", GROUP="asmadmin", MODE="0660": Sets the ownership and permissions of the symlink.

      • OWNER="grid": Sets the owner to the grid OS user (typically the Oracle Grid Infrastructure owner).

      • GROUP="asmadmin": Sets the group to asmadmin (the ASM administrative group).

      • MODE="0660": Sets permissions to read/write for owner/group, and no access for others.

  • Important Note for Cloud Databases: The ENV{ID_SERIAL} attribute may not be available or stable for some cloud block-storage types. In such environments, attributes such as ID_PATH or ID_WWN, or the cloud provider's own persistent-naming mechanism, may be more appropriate; a variant rule keyed on ENV{ID_SCSI_SERIAL} is another option. For on-premises setups, ID_SERIAL is generally preferred.

4. Reload Udev Rules

After modifying the Udev rules file, you must reload the Udev rules and trigger the Udev system to apply the changes without requiring a system reboot.

  • Command:

    udevadm control --reload-rules && udevadm trigger
    

After these steps, you should see the new symlink in the /dev/ directory, pointing to your raw disk, and it will persist across reboots. You can then use this persistent symlink (e.g., /dev/asmdatadisk1) when configuring your ASM disk groups.
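To verify the result (a sketch; device names will differ on your system):

    ls -l /dev/asmdatadisk1       # the symlink, e.g. asmdatadisk1 -> sda
    ls -lL /dev/asmdatadisk1      # dereferenced: node owned by grid:asmadmin, mode 0660
    udevadm info --query=all --name=/dev/sda | grep ID_SERIAL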

Tuesday, June 25, 2019

Flashback Data Archive (Oracle Total Recall)

Oracle Flashback Data Archive (Total Recall) Configuration

Oracle Flashback Data Archive (FDA), also known as Oracle Total Recall, provides the ability to track and store historical changes to table data. This feature allows users to query past states of data without relying on traditional backup and recovery mechanisms.

Flashback Data Archive in Oracle 11g

In Oracle Database 11g, FDA management is primarily performed at the individual table level.

STEP 1: Create a Tablespace for FDA

A dedicated tablespace is required to store the historical data.

CREATE TABLESPACE FDA DATAFILE '+DATA_DG' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30G;

STEP 2: Create Flashback Archive

Define the flashback archive, specifying its tablespace and retention period.

CREATE FLASHBACK ARCHIVE DEFAULT FDA_NAME TABLESPACE FDA QUOTA 10G RETENTION 1 YEAR;

STEP 3: Enable FDA on Respective Table

Enable FDA for a specific table.

ALTER TABLE your_table_name FLASHBACK ARCHIVE; -- Replace 'your_table_name' with the actual table name

STEP 4: Check FDA is Enabled on Respective Table

Verify the FDA status for tables.

SELECT owner_name, table_name, flashback_archive_name, archive_table_name, status
FROM dba_flashback_archive_tables
ORDER BY owner_name, table_name;
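Once FDA is enabled, historical data can be read with an ordinary flashback query, for example:

SELECT * FROM your_table_name AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);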

Flashback Data Archive in Oracle 12c and Later

Starting with Oracle Database 12c, FDA introduces the concept of managing logical groups of tables at an application level. This simplifies enabling or disabling FDA for multiple related tables.

STEP 1: Create a Tablespace for FDA

Similar to 11g, create a tablespace for FDA.

CREATE TABLESPACE FDA DATAFILE '+DATA_DG' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30G;

STEP 2: Create Flashback Archive and Grant Privileges

Create the flashback archive and grant necessary privileges to the user who will manage FDA.

CREATE FLASHBACK ARCHIVE FDA_NAME TABLESPACE FDA QUOTA 10G RETENTION 1 YEAR; -- FDA_NAME is the name of your flashback archive
GRANT FLASHBACK ARCHIVE ON FDA_NAME TO XYZ; -- Replace 'XYZ' with the username
GRANT FLASHBACK ARCHIVE ADMINISTRATOR TO XYZ;

Step 3: Enable FDA on Respective Table

Enable FDA for a specific table, associating it with the named flashback archive.

ALTER TABLE TEST FLASHBACK ARCHIVE FDA_NAME; -- Replace 'TEST' with your table name and 'FDA_NAME' with your archive name

Step 4: Set Context Level for Transaction Data

To ensure that context information (e.g., SYS_CONTEXT attributes) is stored with the transaction data, use DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL.

BEGIN
  DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL(level=>'ALL');
END;
/

Create a Group and Add a Table (Application Level Management)

Oracle 12c allows you to group tables under an application name for easier management.

-- Register an application for FDA
EXEC DBMS_FLASHBACK_ARCHIVE.REGISTER_APPLICATION(application_name=>'FDA_APP', flashback_archive_name=>'FDA_NAME');

-- Add a table to the registered application
EXEC DBMS_FLASHBACK_ARCHIVE.ADD_TABLE_TO_APPLICATION(application_name=>'FDA_APP', table_name=>'TEST', schema_name=>'XYZ');

Enable/Disable Group

You can enable or disable FDA for an entire application group.

-- Disable FDA for the application group
EXEC DBMS_FLASHBACK_ARCHIVE.DISABLE_APPLICATION(application_name=>'FDA_APP');

-- Enable FDA for the application group
EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_APPLICATION(application_name=>'FDA_APP');

Friday, May 31, 2019

CDB/PDB - 12.2, 18c, 19c new features

Oracle CDB/PDB Features and Management (12.2, 18c, 19c)

This document outlines key features and management operations for Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database versions 12.2, 18c, and 19c.

1. Creating a Container Database (CDB)

A CDB is the root container that hosts multiple PDBs.

CREATE DATABASE ... ENABLE PLUGGABLE DATABASE;

This command will create CDB$ROOT (the root container) and PDB$SEED (a template for creating new PDBs).

To verify the CDB status:

SELECT NAME, CDB, CON_ID FROM V$DATABASE;

2. Creating an Application Root (12.2, 18c, 19c)

An Application Root acts as a parent container for application PDBs, allowing for centralized management of common application data and metadata.

Connect to the CDB root before creating:

CONNECT SYS@cdb_name AS SYSDBA  -- connect to the CDB root as a common user with SYSDBA privilege
CREATE PLUGGABLE DATABASE your_app_root_name AS APPLICATION CONTAINER;

3. Creating an Application Seed (12.2, 18c, 19c)

An Application Seed is a template PDB within an Application Root, used for creating new application PDBs.

Ensure the current container is the application root:

ALTER SESSION SET CONTAINER = your_app_root_name;
CREATE PLUGGABLE DATABASE AS SEED ADMIN USER app_admin IDENTIFIED BY "####"; -- an ADMIN USER clause is required when creating a seed from scratch

4. Creating a PDB from Seeds (12.2, 18c, 19c)

You can create new PDBs from PDB$SEED (in CDB$ROOT) or from an Application Seed (in an Application Root).

-- Connect to the CDB root
ALTER SESSION SET CONTAINER=CDB$ROOT;

-- Create a new PDB
CREATE PLUGGABLE DATABASE mypdbnew
  ADMIN USER MY_DBA IDENTIFIED BY "####"
  STORAGE (MAXSIZE UNLIMITED) -- Example storage clause
  DEFAULT TABLESPACE mypdbnew_tbs
  DATAFILE '/path/to/datafiles/mypdbnew01.dbf' SIZE 100M
  FILE_NAME_CONVERT =('/path/to/seed_datafiles/','/path/to/new_pdb_datafiles/'); -- Adjust paths
  -- Example: FILE_NAME_CONVERT =('/u01/app/oracle/oradata/CDB1/pdbseed/','/u01/app/oracle/oradata/CDB1/mypdbnew/');

-- Open the new PDB
ALTER PLUGGABLE DATABASE mypdbnew OPEN;

-- Verify user information in the CDB
SELECT con_id, username, default_tablespace, common FROM cdb_users WHERE con_id = (SELECT con_id FROM v$pdbs WHERE name = 'MYPDBNEW');

5. Moving PDBs / Plugging In a PDB (12.2, 18c, 19c)

This process involves unplugging a PDB from one CDB and plugging it into another, or into the same CDB.

-- Close the PDB to be unplugged
ALTER PLUGGABLE DATABASE mypdb CLOSE IMMEDIATE;

-- Unplug the PDB, creating an XML manifest file
ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/backup_location/mypdb.xml';

This command creates mypdb.xml (the manifest file) and leaves the PDB's datafiles in their current location.

  • Move Files: Manually move the PDB's datafiles (e.g., system.dbf, sysaux.dbf, mypdb.dbf) and the mypdb.xml file to the new desired location.

  • Check Compatibility (in the target CDB):

    -- Connect to the target CDB root
    ALTER SESSION SET CONTAINER=CDB$ROOT;
    -- Run the check from PL/SQL; the function returns BOOLEAN
    BEGIN
      IF NOT DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/path/to/new_location/mypdb.xml')
      THEN DBMS_OUTPUT.PUT_LINE('Check PDB_PLUG_IN_VIOLATIONS'); END IF;
    END;
    /
    
  • Create PDB by Plugging In:

    CREATE PLUGGABLE DATABASE mypdb
      USING '/path/to/new_location/mypdb.xml'
      SOURCE_FILE_NAME_CONVERT =('/old/path/','/new/path/') -- Adjust paths as needed
      NOCOPY; -- Use NOCOPY if datafiles are already moved to the target location
    
  • Open the Plugged-In PDB:

    ALTER PLUGGABLE DATABASE mypdb OPEN;
    

    If in a RAC environment, ensure it's opened on all relevant nodes.

6. Cold Clone from Another PDB (12.2, 18c, 19c)

A cold clone creates a new PDB from an existing PDB while the source PDB is closed or in read-only mode.

-- Open source PDB in read-only mode (required for cold clone)
ALTER PLUGGABLE DATABASE mypdb OPEN READ ONLY FORCE;

-- Create the clone PDB
CREATE PLUGGABLE DATABASE mypdb_test
  FROM mypdb
  FILE_NAME_CONVERT =('/path/to/source_datafiles/','/path/to/clone_datafiles/'); -- Adjust paths

-- Open the cloned PDB
ALTER PLUGGABLE DATABASE mypdb_test OPEN;

(Note: For a Hot Clone, the source PDB does not need to be in read-only mode.)

7. Cold Clone from Non-CDB (12.2, 18c, 19c)

You can convert a non-CDB into a PDB and plug it into a CDB.

  1. Generate XML Manifest from Non-CDB:

    • Connect to the non-CDB as SYSDBA.

    -- The non-CDB must be open read-only when it is described
    EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/path/to/noncdb.xml');
    
  2. Create PDB in CDB using XML:

    • Move the non-CDB's datafiles and the generated noncdb.xml file to the target CDB's desired location.

    • Connect to the target CDB root.

    CREATE PLUGGABLE DATABASE noncdb_pdb
      USING '/path/to/noncdb.xml'
      SOURCE_FILE_NAME_CONVERT =('/old/noncdb/datafiles/','/new/pdb/datafiles/')
      NOCOPY;
    
  3. Run noncdb_to_pdb.sql: After plugging in, this script must be run inside the new PDB to complete the conversion.

    ALTER PLUGGABLE DATABASE noncdb_pdb OPEN;
    ALTER SESSION SET CONTAINER = noncdb_pdb;
    @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
    

8. Dropping a Pluggable Database (12.2, 18c, 19c)

-- Close the PDB first
ALTER PLUGGABLE DATABASE mypdb CLOSE;

-- Drop the PDB and its associated datafiles
DROP PLUGGABLE DATABASE mypdb INCLUDING DATAFILES;

9. Hot Clone a Remote PDB or Non-CDB (12.2+)

This feature allows cloning a PDB or non-CDB from a remote database over a database link. The source must use local undo mode.

-- Create a database link to the remote source database
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'remotedb_service_name';

-- Create the new PDB as a clone from the remote source
CREATE PLUGGABLE DATABASE newpdb FROM remotedb@db_link;

-- Open the new PDB
ALTER PLUGGABLE DATABASE newpdb OPEN;

-- If cloning from a non-CDB, run the conversion script
-- ALTER SESSION SET CONTAINER = newpdb;
-- @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

10. Relocate PDB (12.2+)

Relocating a PDB moves it from one CDB to another while keeping the PDB online during most of the operation.

-- Create a database link to the source CDB
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'source_cdb_service_name';

-- Create the PDB in the target CDB and initiate relocation
CREATE PLUGGABLE DATABASE remote_pdb_name FROM remote_pdb_name@db_link RELOCATE;

-- Open the relocated PDB in the target CDB
ALTER PLUGGABLE DATABASE remote_pdb_name OPEN;

11. PDB Archive Files (12.2+)

PDB archive files (.pdb extension) are self-contained archives that include both the PDB's XML manifest and its datafiles. This simplifies PDB transport.

-- Unplug the PDB into an archive file
ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/backup_location/mypdb.pdb';
The resulting mypdb.pdb file is an archive containing both the .xml manifest and the datafiles.

-- Create a PDB from an archive file
CREATE PLUGGABLE DATABASE mypdb_test USING '/backup_location/mypdb.pdb';

12. PDB Refresh (12.2+)

Refreshable PDB clones allow a PDB to be periodically updated from a remote source PDB.

-- Create a database link to the source PDB
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'remote_pdb_service_name';

-- Create a refreshable PDB clone in manual refresh mode
CREATE PLUGGABLE DATABASE newpdb FROM remote_pdb_name@db_link REFRESH MODE MANUAL;

-- Open the refreshable PDB (read-only)
ALTER PLUGGABLE DATABASE newpdb OPEN READ ONLY;

-- To perform a manual refresh:
ALTER PLUGGABLE DATABASE newpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE newpdb REFRESH;
ALTER PLUGGABLE DATABASE newpdb OPEN READ ONLY; -- the PDB is read-only again after refresh

  • Auto-refresh:

    ALTER PLUGGABLE DATABASE newpdb REFRESH MODE EVERY 120 MINUTES;
    

    This auto-refresh only occurs if the PDB is closed.

13. Proxy PDB (12.2+)

A Proxy PDB acts as a pointer to a PDB in a remote CDB, allowing local access to a remote PDB without actually moving its datafiles.

  • Benefits:

    • Existing client connections unchanged: Clients can connect to the proxy PDB as if it were local.

    • Single entry point for cloud DB: Simplifies access to remote databases, especially in cloud environments.

    • Share an application root container: Enables sharing of an application root's content across multiple containers.

  • Example (CDB 1 instance):

    -- Create a database link to the remote CDB
    CREATE DATABASE LINK db_clone_link CONNECT TO c##remote_clone_user IDENTIFIED BY remote_clone_user_password USING 'CDB2_SERVICE_NAME';
    
    -- Create the proxy PDB
    CREATE PLUGGABLE DATABASE PDB2_PROXY AS PROXY FROM PDB2@db_clone_link;
    
    -- Verify the proxy PDB
    SELECT pdb_name, is_proxy_pdb, status FROM dba_pdbs WHERE pdb_name = 'PDB2_PROXY';
    

    After creation, the database link and its link user are no longer required to access the proxy PDB.

  • Accessing the Proxy PDB:

    ALTER SESSION SET CONTAINER=PDB2_PROXY;
    -- Now you are effectively connected to the remote PDB2 via the proxy.
    -- DDL and DML operations performed here will execute on the remote CDB2 instance.
    

14. Snapshot Carousel (PDB Archives) (18c+)

Snapshot Carousel provides a repository for periodic point-in-time copies (snapshots) of a PDB, enabling easy recovery or cloning to a specific point in time.

-- Create a PDB with snapshot mode enabled
CREATE PLUGGABLE DATABASE pdb_snap
  ADMIN USER MY_DBA IDENTIFIED BY "####"
  SNAPSHOT MODE EVERY 24 HOURS; -- Automatically creates a snapshot every 24 hours

-- Open the PDB
ALTER PLUGGABLE DATABASE pdb_snap OPEN;

-- MAX_PDB_SNAPSHOTS can be set between 0 and 8 (0 deletes existing snapshots)
ALTER PLUGGABLE DATABASE pdb_snap SET MAX_PDB_SNAPSHOTS = 5;

-- Manual snapshot creation
ALTER PLUGGABLE DATABASE pdb_snap SNAPSHOT xyz_snap;

-- Create a new PDB from one of the archives (snapshots)
CREATE PLUGGABLE DATABASE pdb_from_snap
  FROM pdb_snap
  USING SNAPSHOT xyz_snap;

15. Transportable Backups (18c+)

This feature supports using backups performed on a PDB before it is unplugged and plugged into a new container. This significantly streamlines PDB relocation for purposes like load balancing or migration between on-premises and cloud, as it avoids the need for new backups immediately before and after each PDB move.

16. Switchover Refreshable Clone PDB between CDBs (Migration) (18c+)

This allows for a planned or unplanned switchover of a refreshable clone PDB between different CDBs, facilitating PDB migration with minimal downtime.

  • Planned Switchover:

    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH MODE EVERY 2 MINUTES FROM remote_pdb_name@dblink SWITCHOVER;
    

    This command prepares the PDB for a switchover, automatically refreshing it.

  • Unplanned Switchover (after a planned switchover setup):

    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH; -- Perform a final refresh
    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH MODE NONE; -- Disable refresh mode
    ALTER PLUGGABLE DATABASE your_pdb_name OPEN READ WRITE; -- Open the PDB in read-write mode
    

17. Transient No-Standby PDBs (Clone) (18c+)

This feature allows creating a hot clone of a PDB without it being replicated to a standby database, useful for temporary testing or development environments.

  • Hot clone to transient PDB:

    CREATE PLUGGABLE DATABASE transient_pdb FROM source_pdb STANDBYS=NONE;
    
  • Cold clone of this transient PDB with standby: You can then create a cold clone of this transient PDB that does include standby replication.

  • Drop transient PDB:

    DROP PLUGGABLE DATABASE transient_pdb INCLUDING DATAFILES;
    

18. AWR for Pluggable Database (12.2 onwards)

Starting from Oracle 12.2, AWR (Automatic Workload Repository) data can be collected at the PDB level, providing granular performance insights for individual PDBs.

  • Enable AWR Auto Flush for PDBs:

    ALTER SYSTEM SET awr_pdb_autoflush_enabled=TRUE;
    
  • View PDB Snapshots:

    SELECT * FROM awr_pdb_snapshot;

Wednesday, May 29, 2019

Avamar rman backup generates large amount of trace files

Avamar RMAN Backup Trace Files Management

When performing RMAN backups using Avamar, it is common to observe the generation of a large number of trace files. These trace files can consume significant disk space and may not always be necessary for routine operations. This document outlines common methods to manage and reduce the generation of these trace files.

1. Avamar GUI Configuration

The primary method to control trace file generation from the Avamar side is through its graphical user interface (GUI).

  • Action: The Avamar team (or administrator) needs to set the tracing level to 'TRACE 0' within the Avamar GUI for the relevant RMAN backup policies or configurations.

  • Impact: Setting TRACE 0 typically disables detailed tracing, significantly reducing the volume of trace files generated by the Avamar client during RMAN operations.

2. Oracle Database Event Setting

Database administrators (DBAs) can configure an Oracle database event to disable specific kernel-related tracing, which often contributes to the generation of numerous trace files during backup operations.

Permanent Fix (Requires Database Restart)

To make the change persistent across database restarts, set the event in the SPFILE.

  • Command:

    ALTER SYSTEM SET EVENT='trace[krb.*] disk disable, memory disable' SCOPE=SPFILE SID='*';
    
    
  • Explanation:

    • trace[krb.*]: Targets tracing of the krb component (the kernel backup/restore layer used during RMAN operations), which is commonly the source of these trace files.

    • disk disable: Prevents trace information from being written to disk.

    • memory disable: Prevents trace information from being stored in memory.

    • SCOPE=SPFILE: Ensures the change is written to the server parameter file and will persist after a database restart.

    • SID='*': Applies the change to all instances in a Real Application Clusters (RAC) environment. For a single instance, you can omit SID='*' or specify the instance SID.

  • Effectiveness: This change will take effect only after a full database restart.

Temporary Fix (No Database Restart Required)

For an immediate, temporary reduction in trace file generation without a database restart, use the ALTER SYSTEM SET EVENTS command, which applies the event to the running instance only (the EVENT initialization parameter itself can only be changed with SCOPE=SPFILE).

  • Command:

    ALTER SYSTEM SET EVENTS 'trace[krb.*] disk disable, memory disable';
    
    
  • Explanation:

    • This command applies the event setting to the current running instance(s) immediately.

    • Limitation: This change is not persistent across database restarts. If the database is restarted, this command would need to be re-executed.

By implementing both the Avamar GUI setting and the Oracle database event, you can effectively manage and significantly reduce the large amount of trace files generated during Avamar RMAN backup operations. Both steps are generally recommended for comprehensive trace file management.

Sunday, March 24, 2019

TDE (Transparent Data Encryption)

Implementing Transparent Data Encryption (TDE) on Oracle 12c Standalone Database

This document provides a step-by-step guide to implementing Transparent Data Encryption (TDE) on an Oracle 12c standalone database. TDE helps protect sensitive data at rest by encrypting datafiles.

Prerequisites

  • Oracle Database 12c installed and running.

  • Appropriate OS user (e.g., oracle) with permissions to create directories and modify Oracle configuration files.

  • Familiarity with SQL*Plus and basic OS commands.

Implementation Steps

1. Create Wallet Directory

Create a dedicated directory on the file system to store the TDE wallet (keystore). This directory should have restricted permissions.

mkdir -p /test/WALLET

Note: Ensure the oracle OS user has appropriate read and write permissions to this directory.

2. Modify sqlnet.ora

Update the sqlnet.ora file to specify the location of the encryption wallet. This file is typically located in $ORACLE_HOME/network/admin/.

Add the following lines to sqlnet.ora:

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)(METHOD_DATA =
    (DIRECTORY = /test/WALLET)))

Important: Ensure this sqlnet.ora file is in the correct network/admin directory accessible by your database instance. If you have a separate Grid Infrastructure Home (Grid_Home), you might need to copy sqlnet.ora from the Grid Home's network/admin to the database home's network/admin if it's managed externally, or ensure the path is consistent.

3. Create Keystore (Wallet)

In Oracle 12c, the standard and recommended way to create a TDE keystore is using the ADMINISTER KEY MANAGEMENT command from SQL*Plus.

Connect to SQL*Plus as SYSDBA:

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/test/WALLET/' IDENTIFIED BY "YOUR_WALLET_PASSWORD";

Replace "YOUR_WALLET_PASSWORD" with a strong password for your TDE wallet. Remember this password, as it's needed to open the wallet manually.

Verify File Creation: After executing the command, check the wallet directory. You should see an ewallet.p12 (or ewallet.p01 in older versions/configurations) file created.

ls -ltr /test/WALLET/

Example Output:

total 4
-rw------- 1 oracle asmadmin 2555 May 27 10:30 ewallet.p12

4. Open Keystore

Before you can activate keys or encrypt data, the keystore must be opened.

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "YOUR_WALLET_PASSWORD";

Note: If you copied sqlnet.ora from a Grid Home, ensure it's in the correct database network/admin path before attempting to open the keystore.

5. Activate the Master Encryption Key

Once the keystore is open, you can activate the master encryption key. This command generates a new master key and sets it as the active key for TDE operations.

First, check current key status (should show no keys initially):

SET LINESIZE 100
SELECT con_id, key_id FROM v$encryption_keys;

Example Output:

no rows selected

Now, activate the key:

ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "YOUR_WALLET_PASSWORD" WITH BACKUP;

Example Output:

keystore altered.
The WITH BACKUP option creates a backup of the current keystore before generating a new key, which is good practice.

Verify the new master key:
SELECT con_id, key_id FROM v$encryption_keys;

Example Output:

  CON_ID KEY_ID
---------- ------------------------------------------------------------------------------
         0 TTTTTTTTTTTTTTTTGfeeeeeeeeeeeeeeee

The KEY_ID will be a unique identifier for your master encryption key.

6. Enable Autologin for the Keystore (Recommended)

By default, the wallet is password-protected. This means you would need to manually open the wallet with the password every time the database restarts. To avoid this, enable autologin.

First, check the current wallet status:

column WRL_PARAMETER format a30
column WRL_TYPE format a10
column STATUS format a10

SELECT * FROM v$encryption_wallet;

Example Output (before autologin):

WRL_TYPE   WRL_PARAMETER               STATUS     WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
---------- ------------------------------ ---------- -------------------- --------- --------- ----------
FILE       /test/WALLET/                  OPEN       PASSWORD             SINGLE    NO               0

Notice WALLET_TYPE is PASSWORD.

Now, enable autologin:

ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/test/WALLET/' IDENTIFIED BY "YOUR_WALLET_PASSWORD";

This command creates a cwallet.sso file in the wallet directory.

Restart the database to confirm the autologin functionality.

After restart, check the wallet status again:

ls -ltr /test/WALLET/

Example Output:

total 8
-rw------- 1 oracle asmadmin 2555 May 27 10:30 ewallet.p12
-rw------- 1 oracle asmadmin 2580 May 27 10:35 cwallet.sso

Now, query v$encryption_wallet:

SELECT * FROM v$encryption_wallet;

Example Output (after autologin):

WRL_TYPE   WRL_PARAMETER               STATUS     WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
---------- ------------------------------ ---------- -------------------- --------- --------- ----------
FILE       /test/WALLET/                  OPEN       AUTOLOGIN            SINGLE    NO               0

The WALLET_TYPE should now be AUTOLOGIN.

7. Create Encrypted Tablespace

Now that TDE is configured and the master key is active, you can create encrypted tablespaces. Any data stored in these tablespaces will be automatically encrypted by TDE.

CREATE TABLESPACE TDE_TESTDATA
  DATAFILE '+DATA_DG' SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 8192M
  LOGGING
  ONLINE
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  BLOCKSIZE 8K
  SEGMENT SPACE MANAGEMENT AUTO
  ENCRYPTION USING 'AES256' -- Specify the encryption algorithm
  DEFAULT STORAGE(ENCRYPT)  -- Data in this tablespace will be encrypted by default
  FLASHBACK ON;

Replace +DATA_DG with your actual ASM disk group or file system path.

Once the encrypted tablespace is created, you can create tables or move existing tables and indexes into it as usual. Oracle will handle the encryption and decryption transparently.
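For example, to move an existing segment into the encrypted tablespace (illustrative object names; indexes must be rebuilt after a table move):

ALTER TABLE app_owner.sensitive_tab MOVE TABLESPACE TDE_TESTDATA;
ALTER INDEX app_owner.sensitive_tab_pk REBUILD TABLESPACE TDE_TESTDATA;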

Thursday, February 28, 2019

All you want to know about CDBs , PDBs and multitenant databases

Oracle Multitenant Architecture: CDBs, PDBs, and Key Features

Oracle's Multitenant Architecture, introduced in Oracle Database 12c, fundamentally changes how databases are managed by allowing multiple Pluggable Databases (PDBs) to reside within a single Container Database (CDB). This architecture provides significant benefits for consolidation, agility, and resource management.

What are CDBs and PDBs?

  • Container Database (CDB): From a Database Administrator's (DBA) point of view, the CDB is the single, overarching database that contains all the PDBs. It includes common Oracle metadata, background processes, and shared memory (SGA).

  • Pluggable Database (PDB): From an application's point of view, a PDB is an independent, fully functional database. Applications connect to a PDB as if it were a traditional non-CDB. Many PDBs can be plugged into a single Multitenant CDB.

Why Use Multitenant Databases?

The multitenant architecture offers numerous advantages:

  • Rapid Provisioning (Via Clones): New PDBs can be quickly provisioned by cloning existing PDBs or the PDB$SEED, drastically reducing deployment time.

  • Online PDB Relocate: PDBs can be relocated between CDBs (or within the same CDB) while remaining online, minimizing downtime.

  • Sharing Background Processes: All PDBs within a CDB share the same set of background processes (e.g., PMON, SMON, DBWn), reducing the overall process footprint.

  • Sharing SGA: PDBs share the same System Global Area (SGA) of the CDB, leading to more efficient memory utilization.

  • Minimize CAPEX and OPEX: By consolidating multiple databases onto a single CDB, hardware resource requirements (CAPEX) and operational overhead (OPEX) are significantly reduced.

  • Single Backup for CDB: A single backup operation for the CDB protects all contained PDBs, simplifying backup strategies.

  • Automatic Standby Protection: All PDBs within a CDB are automatically protected by the single Data Guard standby database configured for the CDB.

  • Perform Rapid Upgrades: Upgrades are performed at the CDB level, meaning all PDBs are upgraded simultaneously, streamlining the upgrade process.

  • Reference Data in a Different CDB: PDBs can access data in other PDBs or even other CDBs using database links, facilitating data sharing.

  • Isolate Grants within PDBs: Security grants and user management can be isolated within individual PDBs, enhancing security and simplifying administration.

  • Refreshable Clone: A read-only clone of a PDB that can periodically synchronize with its source PDB, useful for reporting or testing environments.

  • Snapshot Copy PDB: A point-in-time copy of a PDB. Note that this PDB cannot be unplugged from the CDB root or application root.

Multitenant Database Structure

  • The System Container: This includes the CDB$ROOT (the root container) and all PDBs directly plugged into the CDB$ROOT.

  • CDB$ROOT: This is the mandatory root container of a CDB. It stores Oracle system metadata only and is exactly one per CDB.

  • PDB$SEED: This is a system-supplied template PDB within CDB$ROOT. It cannot be dropped and serves as the source for creating new PDBs.

  • Application Container: An application container consists of exactly one application root and the application PDBs plugged into this root. It allows for managing a set of related PDBs as a single application.

  • Application Root: The parent container for application PDBs within an application container.

  • Application PDB: If a PDB belongs to an application container, then it is an application PDB.

  • Application Seed: An optional application PDB within an application root, serving as a template for creating new application PDBs.

Administration Roles

Different administrative roles exist to manage the various containers:

  • CDB Administrator: Manages the entire CDB, including CDB$ROOT and all PDBs.

  • Application Container Administrator: Manages a specific application container, including its application root and application PDBs.

  • Application PDB Administrator: Manages PDBs within a specific application container.

  • PDB Administrator: Manages PDBs that are not part of an application container (i.e., directly plugged into CDB$ROOT).

Key Concepts

  • SYS User: SYS is a common user in the CDB. Every PDB is conceptually "owned" by SYS in that it's the superuser within each PDB.

  • Cross-PDB Access: By default, a user connected to one PDB must use database links to access objects in a different PDB.

  • PDB Lifecycle:

    • To open a PDB, you must start its respective service.

    • To stop a PDB: ALTER PLUGGABLE DATABASE PDB_NAME CLOSE IMMEDIATE;

  • PDB$SEED Contents: Contains standard Oracle schemas and objects, including SYSTEM, SYSAUX, TEMP, and UNDO tablespaces, serving as a clean base for new PDBs.

Useful Commands

  • Show current container name:

    show con_name
    
  • List all containers (CDBs and PDBs):

    SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
    
  • View common users in the CDB:

    SELECT con_id, username, default_tablespace, common FROM cdb_users;
    
  • List all PDBs:

    SELECT PDB_NAME FROM DBA_PDBS;
    

How Hot Clones Work

When performing a hot clone of a PDB (where the source PDB remains open and available), Oracle ensures data consistency by:

  • Applying Redo: The cloning process applies redo logs from the source PDB to catch up the cloned PDB to the point-in-time when the clone operation started.

  • Applying Undo: It then applies undo information to roll back any uncommitted transactions that were active on the source PDB at the time of cloning, ensuring the cloned PDB is transactionally consistent.

PDB/CDB New Features (12.2, 18c, 19c)

Oracle continuously enhances the multitenant architecture with each release:

Oracle Database 12.2 Enhancements

  • Hot Clone a Remote PDB or Non-CDB: Allows cloning a PDB or non-CDB from a remote database over a database link while the source is online. The source must use local undo mode.

  • Relocate PDB: Enables moving a PDB from one CDB to another while keeping the PDB online during most of the operation.

  • PDB Archive Files (.pdb): Introduces self-contained .pdb archive files that bundle both the PDB's XML manifest and its datafiles, simplifying PDB transport.

  • PDB Refresh: Allows creating a refreshable PDB clone that can be periodically updated from a remote source PDB, with options for manual or auto-refresh.

  • Proxy PDB: Creates a logical pointer to a PDB in a remote CDB, allowing local access to a remote PDB without actually moving its data. Benefits include unchanged client connections, a single entry point, and sharing an application root.

  • AWR for Pluggable Database: Enables granular AWR data collection at the PDB level, providing performance insights for individual PDBs. (ALTER SYSTEM SET awr_pdb_autoflush_enabled=TRUE;)

Oracle Database 18c Enhancements

  • Snapshot Carousel (PDB Archives): A repository for periodic point-in-time copies of a PDB, enabling easy recovery or cloning to a specific point in time. PDBs can be created with SNAPSHOT MODE EVERY X HOURS or manually with ALTER PLUGGABLE DATABASE SNAPSHOT.

  • Transportable Backups: Supports using backups performed on a PDB prior to it being unplugged and plugged into a new container, facilitating agile PDB relocation without requiring immediate pre/post-move backups.

  • Switchover Refreshable Clone PDB between CDBs: Allows for planned or unplanned switchover of a refreshable clone PDB between different CDBs, enabling PDB migration with minimal downtime.

  • Transient No-Standby PDBs (Clone): Allows creating a hot clone of a PDB without it being replicated to a standby database, useful for temporary testing or development.

Oracle Database 19c Enhancements

  • Real-time Statistics (Exadata Only): Extends online statistics gathering to include conventional DML statements, collecting statistics "on-the-fly" during INSERT, UPDATE, and MERGE operations. (Requires Exadata).

  • High-Frequency Automatic Optimizer Statistics: Allows for more frequent, granular collection of optimizer statistics, improving the accuracy of execution plans.

  • Validate SPFILE Parameters: New command to validate SPFILE parameters, useful for consistency checks, especially in Data Guard environments.

  • Schema Only Accounts: While existing from 9i, their use is increasingly emphasized for security best practices in multitenant environments, restricting direct login to schema owners.

  • Automatic Replication of Restore Points: Restore points created on the primary database are automatically replicated to the standby database via the redo stream.

  • DML Operations on Active Data Guard Standby Databases: Enhances DML redirection, allowing DML to be performed on a read-only standby by transparently redirecting it to the primary (requires adg_redirect_dml=TRUE and username/password connection).

  • Inline External Table (EXTERNAL clause): Allows defining external tables directly within a SQL query, eliminating the need for separate DDL statements ("Zero DDL").

This comprehensive overview should provide a solid understanding of Oracle's Multitenant Architecture and its evolution across recent database releases.


Wednesday, February 27, 2019

Oracle architecture Diagram [ Multitenant and Single tenant]


Oracle 12c:

Single-tenant architecture diagram:
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/pdf/Database%20Architecture.pdf

Multitenant architecture diagram:
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/pdf/Multitenant%20Architecture.pdf

Friday, January 4, 2019

Recreate Lob table or big table in PARALLEL

Recreating LOB/Big Tables in Parallel

This section provides steps to recreate a large table, potentially containing LOBs, in parallel for improved performance. This is useful for reorganizing data, applying new storage attributes, or simply moving data efficiently.

Source Table: APP_USER.APP_TABLE1 Target Table: APP_USER.APP_TABLE2

Step 1: Create a Sequence (for Logging Timing)

Create a sequence to generate unique IDs for logging the start and end times of the operation.

CREATE SEQUENCE Myuser.T_SQ
  START WITH 1
  INCREMENT BY 1
  NOCACHE;

-- Create a table to log the job times (if it doesn't exist)
CREATE TABLE Myuser.job_time (
    t_id    NUMBER,
    t_name  VARCHAR2(100),
    t_type  VARCHAR2(10),
    t_time  DATE
);

Step 2: Create a Procedure

Create a PL/SQL procedure that performs the parallel insert operation and logs its timing.

CREATE OR REPLACE PROCEDURE Myuser.my_proc AS
  v_seq NUMBER;
BEGIN
  -- Enable parallel processing for the current session
  EXECUTE IMMEDIATE 'ALTER SESSION SET PARALLEL_FORCE_LOCAL=TRUE';
  EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

  -- Get the next sequence value for logging
  SELECT Myuser.T_SQ.NEXTVAL INTO v_seq FROM dual;

  -- Log the start time of the operation
  INSERT INTO Myuser.job_time (t_id, t_name, t_type, t_time) VALUES (v_seq, 'APP_TABLE1', 'START', SYSDATE);
  COMMIT;

  -- Perform the parallel insert from source to target table
  -- APPEND hint for direct path insert, PARALLEL hint for parallel execution
  INSERT /*+ APPEND PARALLEL(A,60) */ INTO APP_USER.APP_TABLE2 A
  SELECT /*+ PARALLEL(B,60) */ * FROM APP_USER.APP_TABLE1 B;
  COMMIT;

  -- Log the end time of the operation
  INSERT INTO Myuser.job_time (t_id, t_name, t_type, t_time) VALUES (v_seq, 'APP_TABLE1', 'END', SYSDATE);
  COMMIT;

EXCEPTION
  WHEN OTHERS THEN
    -- Log any errors
    INSERT INTO Myuser.job_time (t_id, t_name, t_type, t_time) VALUES (v_seq, 'APP_TABLE1', 'ERROR', SYSDATE);
    COMMIT;
    RAISE; -- Re-raise the exception after logging
END;
/

Note:

  • PARALLEL(A,60) and PARALLEL(B,60) hints suggest using 60 parallel slaves. Adjust this number based on your system's CPU cores and I/O capabilities.

  • APPEND hint performs a direct-path insert, which is faster for large data volumes as it bypasses the buffer cache.

  • Ensure that APP_USER.APP_TABLE2 is already created with the desired structure (including LOB segments if APP_TABLE1 has them) and any necessary indexes or constraints are handled separately.

Step 3: Execute Procedure

Execute the procedure to start the parallel table recreation process.

EXEC Myuser.my_proc;

After execution, you can query Myuser.job_time to check the start and end times of the operation.


Tuesday, January 1, 2019

Good motivational books to read in 2025

  • "The Monk Who Sold His Ferrari" by Robin Sharma: A classic allegorical tale offering profound lessons on living a more fulfilling life.
  • "No Excuses: The Power of Self-Discipline" by Brian Tracy: A practical guide to developing self-discipline in various aspects of life to achieve greater success.
  • "Atomic Habits" by James Clear: Focuses on how tiny changes can lead to remarkable results. It's practical and actionable for building good habits and breaking bad ones.
  • "The 7 Habits of Highly Effective People" by Stephen Covey: A timeless classic that provides a holistic, integrated, principle-centered approach for solving personal and professional problems.
  • "Grit: The Power of Passion and Perseverance" by Angela Duckworth: Explores why talent isn't the only factor for success, highlighting the importance of passion and long-term perseverance.
  • "Mindset: The New Psychology of Success" by Carol S. Dweck: Introduces the concepts of fixed and growth mindsets and how they impact our ability to learn and grow.
  • "Can't Hurt Me: Master Your Mind and Defy the Odds" by David Goggins: An intense and inspiring memoir about overcoming incredible adversity through extreme mental toughness.
  • "The Power of Habit" by Charles Duhigg: Delves into the science behind habit formation in individuals, organizations, and societies.
  • "Drive: The Surprising Truth About What Motivates Us" by Daniel H. Pink: Challenges traditional ideas of motivation and explores the power of autonomy, mastery, and purpose.