Wednesday, January 1, 2025

mailx vs. s-nail

mailx vs. s-nail: A Comparison

mailx and s-nail are command-line utilities used for sending and receiving emails on Unix-like operating systems. While they serve the same fundamental purpose, s-nail is often considered a modern, enhanced successor to the traditional mailx.

mailx

mailx is a traditional command-line mail utility that has been around for a long time. It's part of the POSIX standard and is widely available on most Unix-like systems. Note: mailx is deprecated in Red Hat Enterprise Linux (RHEL) 9.

Key Characteristics of mailx:

  • Traditional: It provides basic functionality for composing, sending, and reading emails from the command line.

  • Simplicity: Its syntax and features are relatively straightforward, making it well suited to basic email tasks.

  • Standard: As a POSIX standard utility, its behavior is generally consistent across different systems.

  • Limited Features: Compared to more modern mail clients, mailx has limited features for handling complex email scenarios, advanced authentication, or modern protocols.

  • Security: Older versions might have limitations regarding modern encryption standards or authentication methods.

Common Use Cases for mailx:

  • Sending simple notifications or alerts from scripts.

  • Basic email composition and sending from the command line.

  • Reading local mailboxes.

Usage Examples for mailx:

  • Sending a simple email:

    echo "This is the body of the email." | mailx -s "Subject of the Email" recipient@example.com
    
  • Reading local mailbox:

    mailx
    

s-nail

s-nail is a modern, feature-rich mail client that aims to be a compatible replacement for mailx while offering significant enhancements. It is often found as the default mailx implementation on newer Linux distributions (e.g., Debian, Ubuntu, RHEL/CentOS 8+).

Key Characteristics of s-nail:

  • Enhanced Functionality: s-nail extends the capabilities of mailx with features like:

    • Improved Protocol Support: Better support for SMTP, IMAP, POP3, and SMTPS/IMAPS/POP3S (secure versions).

    • Advanced Authentication: Support for various authentication mechanisms (e.g., PLAIN, LOGIN, CRAM-MD5, NTLM).

    • TLS/SSL Support: Robust handling of encrypted connections for secure email transmission.

    • MIME Support: Better handling of MIME types for attachments and rich text.

    • Internationalization: Improved handling of character sets and encodings.

    • Configuration: More flexible and powerful configuration options via ~/.s-nailrc or system-wide files.

  • mailx Compatibility: s-nail strives to be largely command-line compatible with mailx, meaning scripts written for mailx often work seamlessly with s-nail.

  • Security Focus: Designed with modern security considerations in mind, making it a more secure choice for sensitive email operations.

  • Active Development: It benefits from ongoing development and improvements.

Common Use Cases for s-nail:

  • Sending emails from scripts with advanced requirements (e.g., secure connections, specific authentication).

  • Acting as a robust command-line email client for users who prefer a terminal-based interface.

  • Automated reporting and alerting in modern environments where security and protocol support are critical.

Usage Examples for s-nail:

  • Sending an email with an attachment (using a common mailx compatible syntax):

    echo "Please find the report attached." | s-nail -s "Daily Report" -a /path/to/report.pdf recipient@example.com
    
  • Sending an email via a specific SMTP server with authentication:

    s-nail -v -S smtp-use-starttls -S smtp=smtp.example.com:587 \
           -S smtp-auth=login -S smtp-auth-user=your_username \
           -S smtp-auth-password=your_password \
           -s "Secure Email Test" recipient@example.com <<EOF
    This is a secure email sent via s-nail.
    EOF
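The configuration flexibility mentioned above usually lives in ~/.s-nailrc. Below is a hedged sketch only — the account details are placeholders, and option names vary between s-nail versions (newer releases prefer `set mta=`, while older ones use the `set smtp=` style shown in the command-line example above):

```
# Hypothetical ~/.s-nailrc sketch -- all values are placeholders
set v15-compat                       # prefer modern option names
set from="Your Name <you@example.com>"
set mta=smtps://user:password@smtp.example.com:465
set smtp-auth=login
```

With settings like these in place, a script can send mail with just `echo body | s-nail -s subject recipient@example.com`, keeping credentials out of the command line.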
    

Comparison Summary

Feature               | mailx (Traditional)                                           | s-nail (Modern)
----------------------|---------------------------------------------------------------|---------------------------------------------------------------
Protocol Support      | Basic SMTP, local mailboxes                                   | SMTP, SMTPS, IMAP, IMAPS, POP3, POP3S, local mailboxes
Authentication        | Limited                                                       | Extensive (PLAIN, LOGIN, CRAM-MD5, NTLM, etc.)
Encryption (TLS/SSL)  | Often limited or external configuration                       | Built-in and robust TLS/SSL support
MIME Support          | Basic; often requires external tools for complex attachments  | Improved; better handling of various MIME types and attachments
Configuration         | Simpler; command-line options or basic ~/.mailrc              | More powerful; extensive options via ~/.s-nailrc and system-wide files
Development Status    | Mature, less active development                               | Actively developed and maintained
Default on Systems    | Older Unix/Linux systems (e.g., CentOS 7, older BSDs)         | Newer Linux distributions (e.g., Debian, Ubuntu, RHEL/CentOS 8+, often aliased as mailx)
Security              | May have limitations with modern security requirements        | Designed with modern security in mind

Conclusion

While mailx remains a functional tool for basic email tasks, s-nail is the preferred choice for modern environments due to its expanded feature set, improved protocol support, and enhanced security capabilities. On many contemporary Linux distributions, the mailx command itself often points to the s-nail binary, providing a seamless upgrade path for users and scripts.

Monday, July 12, 2021

ORA-01017: invalid username/password; logon denied while creating DB using DBCA

Problem: DBCA fails with ORA-01017 (invalid username/password) while creating a database, caused by OS user/group permission issues.

Verify:

  1. oracle User: Ensure you're running DBCA as the oracle OS user (or grid for Grid Infrastructure).
  2. oinstall Group: Check if oracle (and grid) are in the oinstall group.
    • Command: getent group oinstall | egrep -i 'oracle|grid'
    • Fix (if missing, as root): usermod -a -G oinstall oracle (repeat for grid)
    • Action: Log out and back in as oracle (or grid) after changes.
  3. Other Groups: Confirm oracle is in dba and other relevant groups (e.g., asmdba for ASM).
    • Command: id -Gn oracle

Summary: Ensure oracle (or grid) OS user is correctly part of oinstall and other necessary Oracle groups.

Sunday, October 11, 2020

Application Continuity for the Oracle Database

Draining and Rebalancing Sessions for Planned Maintenance

For planned database maintenance, effectively managing active sessions is crucial to minimize disruption. Oracle provides several mechanisms to drain sessions and rebalance workloads, especially when integrated with connection pools and mid-tiers.

Session Draining

Session draining is a technique used to gracefully terminate sessions, allowing ongoing work to complete before the underlying resource (like a PDB or instance) is taken offline.

  • Fast Application Notification (FAN): Widely used with Oracle connection pools and mid-tiers, FAN enables the database to notify applications about upcoming changes (e.g., a node going down). This allows applications to drain their connections from the affected instance.
  • Database-Initiated Draining (Oracle Database 18c+): Starting with Oracle Database 18c, the database automatically drains sessions when PDBs or instances are stopped or relocated. This simplifies the draining process for administrators.

Failover Solutions

When sessions cannot be drained within the allocated time, failover solutions act as a fallback to ensure application continuity.

Transparent Application Failover (TAF)

TAF provides basic failover capabilities by creating a new session if the primary connection is lost.

  • SELECT Mode: When configured with SELECT mode, TAF can replay queries from the point of failure, allowing read-only operations to resume seamlessly on a new connection.
  • FAILOVER_RESTORE (12.2.0.1): When set to LEVEL1 (the default is NONE), this setting automatically restores the initial common session state before replaying a request.
  • FAILOVER_TYPE=AUTO (18c+): From Oracle Database 18c, setting FAILOVER_TYPE=AUTO automatically configures FAILOVER_RESTORE=AUTO, which is equivalent to Level1 behavior, providing more automated session restoration.
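TAF is typically configured on the client side in tnsnames.ora. The entry below is an illustrative sketch — the alias, host, and service names are placeholders — showing SELECT-mode TAF with BASIC failover:

```
# Hypothetical tnsnames.ora entry: SELECT-mode TAF with BASIC failover.
# TYPE=SELECT replays in-flight queries on the new connection; METHOD=BASIC
# creates the backup connection only at failover time; retry up to 30 times,
# waiting 5 seconds between attempts.
MYSERVICE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = myservice)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )
```

With server-side service attributes (FAILOVER_TYPE and friends), the same behavior can instead be pushed from the database, which is the direction the 18c+ features described above take.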

Application Continuity (AC)

Application Continuity is a more advanced failover solution designed to hide outages from applications, ensuring ongoing user experience.

  • Coverage: AC was introduced in Oracle Database 12.1 for thin Java applications and extended to OCI and ODP.NET based applications in Oracle Database 12.2.0.1.
  • Session Recovery: AC goes beyond TAF by rebuilding the entire session, including both session states (e.g., NLS settings, temporary tables) and transactional states, from a known good point after a recoverable outage.

Transparent Application Continuity (TAC)

Introduced in Oracle Database 18c, Transparent Application Continuity further enhances AC.

  • Automated State Tracking: TAC transparently tracks and records the session and transactional state. This automation allows the database session to be recovered following a recoverable outage without requiring explicit application code changes for state management.
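On RAC or Grid Infrastructure, TAC is usually enabled at the service level. The commands below are a hedged sketch — the database and service names are placeholders, and exact attribute support depends on your 18c+ release:

```
# Enable Transparent Application Continuity on a service (names are placeholders)
srvctl modify service -db MYDB -service app_svc \
       -failovertype AUTO -failover_restore AUTO \
       -commit_outcome TRUE

# Verify the resulting service attributes
srvctl config service -db MYDB -service app_svc
```

Setting -failovertype AUTO is what requests TAC behavior; clients connecting to app_svc then get automatic state tracking without application changes.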

https://www.oracle.com/technetwork/database/options/clustering/applicationcontinuity/applicationcontinuityformaa-6348196.pdf

Monday, August 31, 2020

All about oracle-database-preinstall-19c (Oracle Linux 8, 7)

Installing oracle-database-preinstall-19c RPM

This document outlines the process for downloading and installing the oracle-database-preinstall-19c RPM package, and explains its effects on your Oracle Linux system.

1. How to Download

The oracle-database-preinstall-19c RPM is available directly from Oracle's YUM repositories. Choose the correct link based on your Oracle Linux version:

  • For Oracle Linux 8 (OEL8): https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm

  • For Oracle Linux 7 (OEL7 - Latest): https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-2.el7.x86_64.rpm

  • For Oracle Linux 7 (OEL7 - Older Version): https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

To download the RPM: You can use wget or curl directly from your Linux server. For example, for OEL8:

Bash
wget https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm

Note: It's generally recommended to use yum install oracle-database-preinstall-19c (or dnf install on OL8) as it will automatically download the correct version and handle dependencies.

2. How to Install

Once downloaded, you can install the RPM package using the yum (or dnf on OL8) command as root.

Command:

Bash
sudo yum install oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm
# Replace the RPM filename with the one you downloaded for your specific OS version.

Using yum directly (recommended for dependency resolution):

Bash
sudo yum install oracle-database-preinstall-19c
# Or on OL8:
sudo dnf install oracle-database-preinstall-19c

To verify installation:

Bash
rpm -qa | grep oracle-database-preinstall-19c

(Note: rpm -qa with a full RPM filename as an argument matches only that exact package name; piping rpm -qa through grep, as above, is the standard way to find installed packages containing a string.)

3. What it Will Do

The oracle-database-preinstall-19c RPM automates many of the prerequisite configurations required for Oracle Database 19c installation:

  • Create an oracle user: A dedicated operating system user named oracle is created for database administration.
  • Create groups: Essential OS groups such as oinstall, dba, backupdba, dgdba, kmdba, and racdba are created.
  • Update kernel parameters: System kernel parameters (e.g., shmmax, shmall, semaphores, file-max, ip_local_port_range) are adjusted to recommended values for Oracle Database.
  • Install dependencies: Installs various dependent packages required by Oracle Database.

4. What Happens if Existing User/Group Exists?

The preinstall RPM is designed to handle existing configurations gracefully:

  • Existing oracle user: If an oracle user already exists, the RPM will ignore its creation. It will not modify the existing user's home directory, UID, or primary group unless specifically necessary for group membership.
  • Existing groups: If any of the required groups (e.g., oinstall, dba) already exist, the RPM will add the oracle user to these groups if they are not already members. It will not recreate or modify the GIDs of existing groups. New required groups that don't exist will be created.
  • Kernel parameters: The RPM will replace existing kernel parameters (typically found in /etc/sysctl.conf or a file under /etc/sysctl.d/) with the recommended values for Oracle Database 19c. It's crucial to review these changes if you have custom kernel parameter settings.
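The changes described above can be reviewed after installation with a few standard commands. This is a sketch: the sysctl file name shown is the one commonly created on Oracle Linux, but verify the actual path on your system:

```
# Verify the oracle user and its group memberships
id oracle

# Verify the groups the RPM manages
getent group oinstall dba backupdba dgdba kmdba racdba

# Review the kernel parameters it applied (file name may vary by release)
cat /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf
sysctl -a | grep -E 'shmmax|shmall|file-max'
```

Comparing this output against your pre-install baseline makes it easy to spot any custom kernel settings the RPM overwrote.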

Thursday, August 27, 2020

Send email with multiple attachment using powershell

# This script will take an input value from the user and send an email with attachments
# if the attachment filenames match names derived from the input value.

# Load the Microsoft.VisualBasic assembly so the InputBox dialog below is available.
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic')

# --- Get User Input ---
# Define the title for the input box
$title = 'Email My Scripts'
# Define the message prompt for the user
$msg = 'Enter your INPUT :'
# Display an input box to the user and store their response
$INPUTVALUEvers = [Microsoft.VisualBasic.Interaction]::InputBox($msg, $title)
# Replace any backslashes in the input value with underscores for use in filenames
$INPUTVALUE = $INPUTVALUEvers -replace '\\','_'

# --- Define Email Variables ---
# Sender email address
$fromaddress = "xyz@gmail.com"
# Recipient email address
$toaddress = "xyz@gmail.com"
# Email subject line, incorporating the user's original input value
$Subject = "My report $INPUTVALUEvers"
# Email body content
$body = "My reports"
# SMTP server address for sending emails
$smtpserver = "my.com.au"

# Display the processed input value (with backslashes replaced) to the console
write-host $INPUTVALUE

# --- Prepare Attachments ---
# Define the path where the attachment files are located
$attachmentPath = "\\My_path\My\"

# Construct the filenames for the attachments based on the processed input value
$attachment1 = "MyReport_$INPUTVALUE.html"
$attachment2 = "Mylogon_$INPUTVALUE.txt"

# Get the full path to the attachment files.
# Note: -Include is only applied when the path ends in a wildcard (or -Recurse is
# used), so a trailing '*' is appended to the path here.
# The result is cast to an array to ensure it's treated as a collection, even if only one file is found.
[array]$attachments = Get-ChildItem -Path ($attachmentPath + '*') -Include $attachment1, $attachment2

# --- Send Email ---
# Create a hashtable of parameters for the Send-MailMessage cmdlet.
# This makes the command more readable and manageable.
$MailParams = @{
    To = $toaddress
    From = $fromaddress
    Body = $body
    Subject = $Subject
    SmtpServer = $smtpserver
    BodyAsHtml = $True # Indicates that the email body contains HTML content
    Attachments = $attachments.FullName # Pass the full paths of the found attachments
}

# Send the email using the defined parameters.
# The '@MailParams' syntax is called splatting, which passes the hashtable keys as parameters to the cmdlet.
# (The hashtable is named $MailParams rather than $Msg because PowerShell variable
# names are case-insensitive, and $msg is already used for the input-box prompt above.)
Send-MailMessage @MailParams

Monday, August 24, 2020

Cloud backup - oracle - Dell - DDVE

Dell Data Protection Solutions for Database Workloads

This document provides a brief overview of key Dell Data Protection components relevant to database workloads and data management.

Components Overview

  • Dell Data Domain Virtual Edition (DDVE):

    • Primarily designed for Database (DB) workloads. It provides software-defined data protection with deduplication.

  • Dell Cloud Snapshot Manager (CSM):

    • Focuses on Application (App) workloads. CSM helps manage snapshots for applications, particularly in cloud environments.

  • Data Domain Management Center (DDMC):

    • Used to manage multiple DDVE instances from a centralized console, simplifying administration of a Data Domain estate.

  • PowerProtect Data Manager (PPDM):

    • A comprehensive solution offering capabilities for data protection, backup scheduling, deduplication, operational agility, self-service, and IT governance. PPDM provides a unified platform for managing enterprise data protection.

Key Considerations for Database Backups

  • app-optimized-compression: This feature is associated with oracle1 (likely referring to a specific Oracle integration or policy) to optimize compression for database backups.

  • "File per set" configuration: For optimal performance, "File per set" should not be specified during database backup configurations, as it can introduce performance bottlenecks.

Tuesday, July 28, 2020

All about Logical Volume Management (LVM) (PV, VG, LV)

Logical Volume Management (LVM) Explained

Logical Volume Management (LVM) introduces a flexible layer of abstraction over physical storage devices in Linux. This abstraction allows for dynamic management of storage volumes, making it easier to resize, move, and manage disk space without necessarily stopping applications or unmounting file systems. LVM hides the underlying physical disk sizes from the software, providing greater agility in storage administration.

LVM Components

LVM is built upon three core components:

  • Physical Volume (PV):

    • A PV is a raw block device, which can be an entire hard disk (e.g., /dev/sdb) or a partition of a disk (e.g., /dev/sda2).

    • PVs are the foundational building blocks of LVM.

  • Volume Group (VG):

    • A VG is a collection of one or more Physical Volumes (PVs).

    • It acts as a container for Logical Volumes, pooling the storage capacity of its constituent PVs.

  • Logical Volume (LV):

    • An LV represents a portion of a Volume Group (VG).

    • It is the equivalent of a traditional disk partition but with the added flexibility of LVM.

    • A Logical Volume can only belong to one Volume Group.

    • It is on a Logical Volume that you create a file system (e.g., ext4, XFS) and mount it for use by the operating system and applications.

LVM Commands and Examples

Here are common LVM commands and their output examples, demonstrating how to inspect your LVM setup:

Physical Volumes (PVs)

  • pvs (or pvdisplay, pvscan): Displays information about Physical Volumes.

    PV          VG                 Fmt  Attr PSize    PFree
    /dev/sda2   osvg               lvm2 a--  <59.00g      0
    /dev/sdb    oracle_application lvm2 a--  <330.00g <25.00g
    
    • /dev/sda2 is a PV belonging to the osvg Volume Group, with no free space.

    • /dev/sdb is a PV belonging to the oracle_application Volume Group, with 25.00GB of free space.

Volume Groups (VGs)

  • vgs (or vgdisplay, vgscan): Displays information about Volume Groups.

    VG                 #PV #LV #SN Attr   VSize    VFree
    oracle_application   1   4   0 wz--n- <330.00g <25.00g
    osvg                 1   5   0 wz--n-  <59.00g      0
    
    • oracle_application VG has 1 PV, 4 LVs, a total size of <330.00GB, and <25.00GB free.

    • osvg VG has 1 PV, 5 LVs, a total size of <59.00GB, and 0GB free.

Logical Volumes (LVs)

  • lvs (or lvdisplay, lvscan): Displays information about Logical Volumes.

    LV          VG                 Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
    app         oracle_application -wi-ao----   5.00g
    gridinfra   oracle_application -wi-ao---- 100.00g
    oracle      oracle_application -wi-ao---- 100.00g
    orasoftware oracle_application -wi-ao---- 100.00g
    homelv      osvg               -wi-ao----   4.00g
    rootlv      osvg               -wi-ao----  19.00g
    swaplv      osvg               -wi-ao---- <16.00g
    tmplv       osvg               -wi-ao----  12.00g
    varlv       osvg               -wi-ao----   8.00g
    
    • This output shows various Logical Volumes, their corresponding Volume Groups, and their allocated sizes.

Basic LVM Creation Commands

Here are the fundamental commands for creating LVM components:

  • Create a Physical Volume (PV):

    pvcreate /dev/sdb
    # To create multiple PVs:
    pvcreate /dev/sdb /dev/sdc
    
  • Create a Volume Group (VG):

    vgcreate oracle_application /dev/sdb
    # To create a VG from multiple PVs:
    vgcreate oracle_application /dev/sdb /dev/sdc
    
  • Create a Logical Volume (LV):

    lvcreate -L 5G -n app oracle_application
    # This creates a 5GB Logical Volume named 'app' within the 'oracle_application' Volume Group.
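The resizing flexibility described at the top of this post is worth a concrete example. Using the ~25GB of free space visible in the vgs output earlier, the 'app' Logical Volume from the examples above could be extended online like this (verify your own VG free space and filesystem type first):

```
# Extend the 'app' LV by 20GB; -r (--resizefs) also grows the filesystem
lvextend -r -L +20G /dev/oracle_application/app

# Confirm the new size
lvs oracle_application/app
```

Without -r, you would follow lvextend with xfs_growfs (XFS) or resize2fs (ext4) on the mounted filesystem.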
    

Thursday, July 16, 2020

How to find latest Patch in oracle

Critical Patch Update (CPU) Program Jul 2020 Patch Availability Document (PAD)

Document ID: 2664876.1 (Available on My Oracle Support - MOS)

Overview

The Oracle Critical Patch Update (CPU) Program provides quarterly cumulative patches to address security vulnerabilities across various Oracle products. These patches are crucial for maintaining the security and stability of Oracle environments.

The Critical Patch Update (CPU) Program Jul 2020 Patch Availability Document (PAD) (Doc ID 2664876.1) specifically outlines the patches and minimum release requirements for the Critical Patch Updates (CPUs) and Patch Set Updates (PSUs) released on July 14, 2020.

Purpose and Scope

  • Purpose: This document defines the specific patches and minimum releases for the various Oracle product suites that received updates in the July 2020 CPU. This includes, but is not limited to:

    • Oracle Database Product Suite

    • Fusion Middleware Product Suite

    • Exalogic

    • Enterprise Manager Suite

  • Scope: This document is primarily intended for Database Administrators and other IT professionals responsible for applying quarterly security patches to Oracle systems.

Key Information in the PAD

The PAD (Doc ID 2664876.1) typically provides detailed information, including:

  • Affected Products and Versions: A comprehensive list of all Oracle products and their specific versions impacted by the July 2020 CPU.

  • Patch Numbers: The actual patch numbers (e.g., for Release Updates, Release Update Revisions, Bundle Patches, or PSUs) that need to be applied.

  • Dependencies and Prerequisites: Information on any prerequisite patches or minimum OPatch versions required before applying the July 2020 CPU.

  • Error Correction Information: Details on the final CPU program patch for products that have reached the end of their error correction support.

  • Modification History: As PADs are often updated, a modification history section tracks changes made to the document after its initial release.

Importance of Applying CPUs

Oracle strongly recommends that customers remain on actively supported versions of their products and apply Critical Patch Update security patches without delay. Attackers frequently exploit vulnerabilities for which Oracle has already released patches, so unpatched systems remain exposed.

Note: To access the full content of Doc ID 2664876.1, you must have a valid support contract with Oracle and log in to My Oracle Support (support.oracle.com).

Wednesday, December 18, 2019

Things to Know when you do 19c upgrade

Oracle 19c Upgrade Checklist and Best Practices

This document outlines key considerations and recommended steps when performing an upgrade to Oracle Database 19c, focusing on best practices for a smooth transition and optimal post-upgrade performance.

1. Recommended Upgrade Method

  • autoupgrade.jar: The autoupgrade.jar utility is the recommended and most robust way to perform Oracle 19c upgrades. It automates many pre-checks, pre-upgrade fixes, and post-upgrade tasks, simplifying the process and reducing manual errors.

2. Pre-Upgrade Checks

Before initiating the upgrade, ensure the following:

a. Dictionary Statistics

Verify that dictionary and fixed object statistics have been gathered recently. This is crucial for the optimizer's performance during and after the upgrade.

column OPERATION format a40
set linesize 200
select to_char(max(END_TIME),'DD-MON-YY hh24:mi') LATEST, OPERATION
from DBA_OPTSTAT_OPERATIONS
where OPERATION in ('gather_dictionary_stats','gather_fixed_objects_stats')
group by operation;

b. Stats on Clustered Indexes (If not using autoupgrade.jar)

If you are not using autoupgrade.jar (which typically handles this), it's recommended to gather statistics on critical SYS schema clustered indexes. This helps the optimizer in the new version.

exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_index_stats('SYS','I_OBJ#');
exec dbms_stats.gather_index_stats('SYS','I_FILE#_BLOCK#');
exec dbms_stats.gather_index_stats('SYS','I_TS#');
exec dbms_stats.gather_index_stats('SYS','I_USER#');
exec dbms_stats.gather_index_stats('SYS','I_TOID_VERSION#');
exec dbms_stats.gather_index_stats('SYS','I_MLOG#');
exec dbms_stats.gather_index_stats('SYS','I_RG#');

3. Post-Upgrade Actions

After the upgrade is complete, consider these immediate actions:

  • Adjust Stats History Retention:

    exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(14);
    

    This sets the statistics history retention to 14 days.

  • Set Key Parameters in SPFILE:

    • _cursor_obsolete_threshold=1024

    • deferred_segment_creation=false

    • _sql_plan_directive_mgmt_control=0

    • Set optimizer_adaptive_statistics=FALSE explicitly in your SPFILE (adaptive statistics can sometimes lead to unexpected plan changes). Note that the underscore parameters above are hidden parameters; set them only on the advice of Oracle Support.

4. Optimizer Parameters

  • COMPATIBLE and OPTIMIZER_FEATURES_ENABLE:

    • Ensure the COMPATIBLE parameter is set to the latest version (e.g., 19.0.0).

    • The OPTIMIZER_FEATURES_ENABLE parameter should also be set to the latest version ('19.1.0') to leverage the latest optimizer enhancements.
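Setting these two parameters can be sketched as follows. Keep in mind that COMPATIBLE is irreversible once raised — the database cannot be downgraded below it — so raise it only after post-upgrade testing:

```
-- OPTIMIZER_FEATURES_ENABLE is dynamic and can be changed online
ALTER SYSTEM SET optimizer_features_enable = '19.1.0' SCOPE=BOTH;

-- COMPATIBLE is static and cannot be lowered once raised; requires a restart
ALTER SYSTEM SET compatible = '19.0.0' SCOPE=SPFILE;
```

A common practice is to run with the old COMPATIBLE value for a settling-in period, then raise it once you are confident no downgrade will be needed.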

5. Performance Analysis and Tuning

a. Collect Execution Plans Before Upgrade

Capture existing execution plans to compare them after the upgrade and identify any regressions.

  • From Cursor Cache: Query V$SQL_PLAN or GV$SQL_PLAN for active and frequently executed SQL statements.

  • Using AWR: Analyze AWR reports for top SQL statements.

  • SQL Tuning Sets (STS): The most robust method. Create an STS from the AWR or cursor cache to capture SQL statements, their execution statistics, and execution plans.

    • This allows you to replay the workload later using SQL Performance Analyzer (SPA).
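The STS capture described above can be sketched with DBMS_SQLTUNE; the STS name PRE_UPG_STS and the polling window below are illustrative, not prescriptive:

```
-- Create an STS, then poll the cursor cache for an hour, sampling every 5 minutes
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'PRE_UPG_STS');
  DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
    sqlset_name     => 'PRE_UPG_STS',
    time_limit      => 3600,   -- total capture window in seconds
    repeat_interval => 300);   -- sampling interval in seconds
END;
/
```

Repeated sampling over a window, rather than a single snapshot, catches short-lived statements that would otherwise age out of the cursor cache.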

b. Compare AWR Snapshots

  • AWRDDRPT.sql: Use the AWRDDRPT.sql script (located in $ORACLE_HOME/rdbms/admin) to generate AWR Diff reports. This allows you to compare performance metrics between AWR snapshots taken before and after the upgrade.

  • Export AWR Data: You can export AWR data using the awrextr.sql script (also in $ORACLE_HOME/rdbms/admin) to analyze it on a different database or for long-term storage.

c. SQL Tuning Sets (STS) and SQL Performance Analyzer (SPA)

  • Capture STS: Capture a representative workload into a SQL Tuning Set.

  • Load STS: Load this STS into the upgraded database.

  • SQL Performance Analyzer (SPA): Use SPA (part of Real Application Testing) to compare the performance of the SQL statements in the STS before and after the upgrade. SPA identifies SQL statements with plan changes or performance regressions.

d. SQL Plan Management (SPM)

SPM is a powerful feature to control and stabilize execution plans.

  • Configuration:

    • DBMS_SPM.CONFIGURE('PLAN_RETENTION_WEEKS', 5); (Default is 53 weeks)

    • DBMS_SPM.CONFIGURE('SPACE_BUDGET_PERCENT', 5); (Default is 10%)

  • Baseline Capture:

    • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE (Set this to start recording new plans as baselines. Remember to turn it off after capturing.)

  • Baseline Selection/Usage:

    • OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE (Ensures the optimizer uses existing baselines.)

    • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = FALSE (Turn off capture during normal operation.)

  • Evolution:

    • DBMS_SPM.REPORT_AUTO_EVOLVE_TASK: Reports on the automatic evolution task.

    • DBMS_SPM.CREATE_EVOLVE_TASK: Manually creates a task to evolve (verify and accept) new plans into baselines.

e. SQL Tuning Advisor (STA)

  • Utilize the SQL Tuning Advisor to analyze problematic SQL statements identified during post-upgrade testing. It can recommend various tuning actions, including new indexes, SQL profile creation, or SQL structure changes.

f. Export/Import STS to New DB

  • After capturing an STS from the source database, you can export it and import it into the target (upgraded) database for performance analysis.

  • DBMS_SPM.LOAD_PLANS_FROM_SQLSET: This procedure can be used to load plans from an STS into the SQL Plan Baseline (SPM) repository of the new database.
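Loading the imported STS into the SPM repository can be sketched as below; the STS name and owner are placeholders for whatever you used when exporting:

```
-- Load plans from an imported STS as enabled, non-fixed SQL plan baselines
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
                    sqlset_name  => 'PRE_UPG_STS',
                    sqlset_owner => 'SYS',
                    fixed        => 'NO',
                    enabled      => 'YES');
  DBMS_OUTPUT.PUT_LINE(plans_loaded || ' plans loaded');
END;
/
```

Loading the pre-upgrade plans as baselines means the 19c optimizer will prefer the known-good plans, and any new plans it generates must be evolved before they are used.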

g. Workload Capture and Replay

  • Real Application Testing (RAT): This feature allows you to capture a real production workload from the source database and replay it on the upgraded database. This provides a highly accurate way to test the impact of the upgrade on performance.

    • Note: SQL Plan Management (SPM) is included with Enterprise Edition at no extra cost, whereas SQL Performance Analyzer and workload capture/replay are part of Real Application Testing, which requires a separate license.

h. Automatic SPM (Exadata 19c)

  • On Exadata with Oracle 19c, Automatic SPM can further simplify SQL plan management by automatically managing baselines for frequently executed SQL.

By following these guidelines, you can significantly improve the success rate and performance stability of your Oracle 19c database upgrade.


Tuesday, December 3, 2019

How to upgrade DB from 12.1 to 19.3

Oracle Database 12.1 to 19.3 Upgrade Steps

This document provides a step-by-step guide for upgrading an Oracle Database from version 12.1 to 19.3. It outlines key actions, from software installation to post-upgrade tasks, to ensure a successful and efficient upgrade process.

Pre-Upgrade Preparations

  1. Install Oracle 19c Software:

    • Install the Oracle Database 19.3.0 software binaries into a new Oracle Home directory. Do not install it over your existing 12.1 Oracle Home. This new home will be referred to as 19.3_Home.

  2. Remove Obsolete init Parameters:

    • Review your current init.ora or SPFILE for any parameters that are no longer supported or are obsolete in Oracle 19c. Remove or adjust these parameters as necessary. Refer to Oracle documentation for a complete list of obsolete parameters.

  3. Stop the Listener:

    • Before proceeding with the database upgrade, stop the Oracle Listener associated with your 12.1 database.

    • lsnrctl stop

  4. Gather Dictionary and Fixed Objects Statistics:

    • It is critical to have up-to-date dictionary and fixed object statistics before starting the upgrade. This helps the upgrade process itself and ensures optimal performance post-upgrade.

    EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
    
  5. Empty Recycle Bin:

    • Purge the recycle bin to avoid potential issues during the upgrade.

    PURGE DBA_RECYCLEBIN;
    
  6. Check and Update Time Zone File:

    • Verify the current time zone file version and update it to the latest version compatible with Oracle 19c if necessary. This is crucial for consistent time zone handling.

    • Refer to Oracle Support Note "Updating the Time Zone File and Timestamp with Time Zone Data in Oracle Database" for detailed instructions.

  7. Run Pre-Upgrade Information Tool:

    • Execute the preupgrade.jar tool from the 19c Oracle Home, pointing it to your 12.1 database. This tool performs a comprehensive analysis of your database for potential upgrade issues and generates fix-up scripts.

    (12.1_Home)/jdk/bin/java -jar (19.3_Home)/rdbms/admin/preupgrade.jar FILE TEXT DIR /home/oracle/upgrade
    
    • After running the tool, execute the generated fix-up script:

    @/home/oracle/upgrade/preupgrade_fixups.sql
    
    • Review the preupgrade.log and preupgrade_info.txt files for any remaining warnings or manual actions required.

Upgrade Execution

  1. Stop RAC Instances (if applicable):

    • If you are upgrading a Real Application Clusters (RAC) database, stop the second (and subsequent) RAC instances.

    • Disable the cluster on the first instance and then stop the first instance.

  2. Set Environment Variables for 19c:

    • Ensure your environment variables (especially ORACLE_HOME and PATH) are set to point to the new 19.3 Oracle Home.
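For example (the home path and SID below are illustrative; adjust them to your actual 19.3 installation):

```shell
# Point the environment at the new 19.3 Oracle Home
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
echo "$ORACLE_HOME"
```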

  3. Start Database in Upgrade Mode:

    • Start the database from the 19.3 Oracle Home in upgrade mode.

    sqlplus / as sysdba
    startup upgrade
    
  4. Invoke Database Upgrade:

    • Execute the dbupgrade utility from the 19.3 Oracle Home. This command initiates the actual database upgrade process.

    (19.3_Home)/bin/dbupgrade
    
    • Monitor the output of this command closely for any errors or warnings.

Post-Upgrade Actions

  • After dbupgrade completes, the database will typically shut down.

  • Start the database in normal mode from the 19.3 Oracle Home.

  • Run the post-upgrade fix-up script generated by the pre-upgrade tool (postupgrade_fixups.sql), recompile invalid objects with utlrp.sql, and perform any remaining post-upgrade checks recommended by Oracle documentation.

  • Re-enable cluster services and start all RAC instances if applicable.

  • Perform performance analysis and tuning as outlined in the "Oracle 19c Upgrade Checklist and Best Practices" document, including AWR comparisons, STS analysis, and SPM configuration.

Following these steps carefully will help ensure a successful upgrade of your Oracle 12.1 database to 19.3.

Wednesday, September 25, 2019

Oracle 19c Features and Enhancements

Oracle 19c Features and Enhancements

This document highlights several key features and enhancements introduced or improved in Oracle Database 19c, focusing on their functionality and practical implications.

1. Real-time Statistics (Exadata Only)

Oracle Database 19c introduces real-time statistics, extending online statistics gathering to include conventional DML statements. This feature allows statistics to be collected "on-the-fly" during INSERT, UPDATE, and MERGE operations.

  • Availability: This feature is only available on Exadata and Exadata Cloud Service.

  • Functionality: Statistics are gathered dynamically during conventional DML, providing the optimizer with up-to-date information for plan generation.

  • Parameters (Default TRUE):

    • _optimizer_gather_stats_on_conventional_dml: Controls whether real-time statistics are gathered.

    • _optimizer_use_stats_on_conventional_dml: Controls whether the optimizer uses real-time statistics.

    • _optimizer_stats_on_conventional_dml_sample_rate: Defaults to 100%, indicating the sampling rate for collection.

  • Disabling: You can set these parameters to FALSE to disable real-time statistics.

  • Impact on Dictionary Views:

    • USER_TABLES.NUM_ROWS: This column does not reflect real-time statistics changes.

    • USER_TAB_STATISTICS.NOTES: Will show STATS_ON_CONVENTIONAL_DML if real-time stats are active.

    • USER_TAB_COL_STATISTICS.NOTES: Will also show STATS_ON_CONVENTIONAL_DML for columns.

  • Limitations:

    • Direct Path INSERT ... SELECT: Real-time statistics have no effect.

    • DELETE operations: Real-time statistics have no effect.

    • Gathering statistics for a table (e.g., using DBMS_STATS) will wipe out the real-time statistics for that table.
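To see whether real-time statistics are present for a given table, the NOTES columns mentioned above can be queried directly (TEST_TAB is a placeholder name):

```sql
-- Rows flagged STATS_ON_CONVENTIONAL_DML were produced by real-time statistics
SELECT table_name, num_rows, notes
FROM   user_tab_statistics
WHERE  table_name = 'TEST_TAB';
```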

2. High-Frequency Automatic Optimizer Statistics

This feature allows for more frequent, granular collection of optimizer statistics, improving the accuracy of execution plans.

  • Enabling/Configuring:

    EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON'); -- Ensure auto tasks are on
    EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300'); -- Set interval to 300 seconds (5 minutes)
    
  • Interval: The minimum allowed value for AUTO_TASK_INTERVAL is 60 seconds, and the maximum is 900 seconds.

  • Monitoring: You can check the execution status of automatic statistics gathering tasks using:

    SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS' ) AS BEGIN_TIME,
           TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
           TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
    FROM DBA_AUTO_STAT_EXECUTIONS
    ORDER BY OPID;
    

3. Validate SPFILE Parameters for Primary and Standby

Oracle 19c introduces a command to validate SPFILE parameters, which is particularly useful in Data Guard environments to ensure consistency between primary and standby databases.

  • Command (run from DGMGRL, the Data Guard broker command-line interface):

    VALIDATE DATABASE {database-name} SPFILE;
    

    Replace {database-name} with the actual database name. This command checks for discrepancies or issues in the SPFILE.

4. Schema Only Accounts

Introduced in Oracle 18c (building on proxy authentication, which has been available since much earlier releases), schema-only accounts are a security best practice to restrict direct login access to schema owners.

  • Purpose:

    • Restrict Direct Access: Prevents users from directly logging in as the schema owner using shared credentials.

    • Proxy Connections: Users access the schema to perform DDL/DML changes via proxy connections, where a different user (with a password) connects and then proxies to the schema-only account.

    • Auditing: Allows for better auditing, as you can track which specific proxy user performed which tasks within the schema.

  • NO AUTHENTICATION Clause: A schema-only account is created with the NO AUTHENTICATION clause. The user has no password, so direct connections are impossible, but proxy connections remain enabled. In effect the account is permanently closed to direct logins while staying fully usable through proxy users.
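A minimal sketch of the pattern, with app_schema and dev_user as placeholder names:

```sql
-- Schema-only account: no password, no direct logins
CREATE USER app_schema NO AUTHENTICATION;
GRANT CREATE SESSION, CREATE TABLE TO app_schema;

-- Allow dev_user to proxy into the schema
ALTER USER app_schema GRANT CONNECT THROUGH dev_user;

-- Client-side connection: sqlplus dev_user[app_schema]/dev_user_password
```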

5. Automatic Replication of Restore Points from Primary to Standby

In Oracle Data Guard, restore points can now be automatically replicated from the primary database to the standby database.

  • Mechanism: Restore points are replicated through the redo stream, which is then applied by the Managed Recovery Process (MRP) on the standby.

  • Primary Database State: For this replication to occur, the primary database must be in OPEN mode.
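A quick way to observe this, assuming a 19c Data Guard pair: create a restore point on the open primary and look for it on the standby, where V$RESTORE_POINT exposes a REPLICATED flag (replicated restore points carry a _PRIMARY suffix on the standby):

```sql
-- On the primary (database must be OPEN)
CREATE RESTORE POINT before_change;

-- On the standby
SELECT name, replicated FROM v$restore_point;
```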

6. DML Operations on Active Data Guard Standby Databases

Oracle 19c enhances DML redirection capabilities for Active Data Guard, allowing DML operations to be performed on a read-only standby database by transparently redirecting them to the primary.

  • Enabling DML Redirect: This feature needs to be enabled on both the Primary and Standby databases.

    ALTER SYSTEM SET adg_redirect_dml=TRUE SCOPE=BOTH;
    
  • Connection Requirement: When connecting to the standby to perform DML that will be redirected, you must connect using a username/password. Connecting with sqlplus / as sysdba (OS authentication) will not work for DML redirection.

7. Inline External Table - EXTERNAL Clause (Zero DDL)

Oracle 19c introduces the ability to define external tables directly within a SQL query using the EXTERNAL clause, eliminating the need for separate DDL statements to create the external table object. This is often referred to as "Zero DDL" for external tables.

  • Concept: There is no need for an external table to be explicitly created as a database object. The definition is embedded directly in the SELECT statement.

  • Example: This example reads data from MY.txt (located in the MY_DIR directory), assuming it's a CSV file with three fields: object_id, owner, and object_name.

    SELECT *
    FROM   EXTERNAL (
             (
               object_id   NUMBER,
               owner       VARCHAR2(128),
               object_name VARCHAR2(128)
             )
             TYPE oracle_loader
             DEFAULT DIRECTORY MY_DIR
             ACCESS PARAMETERS (
               RECORDS DELIMITED BY NEWLINE
               BADFILE MY_DIR
               LOGFILE MY_DIR:'inline_ext_tab_as_%a_%p.log'
               DISCARDFILE MY_DIR
               FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
               MISSING FIELD VALUES ARE NULL (
                 object_id,
                 owner,
                 object_name)
             )
             LOCATION ('MY.txt')
             REJECT LIMIT UNLIMITED
           ) inline_ext_tab
    ORDER BY 1;
    
    • object_id NUMBER, owner VARCHAR2(128), object_name VARCHAR2(128): Defines the column structure of the external data.

    • TYPE oracle_loader: Specifies the access driver.

    • DEFAULT DIRECTORY MY_DIR: Specifies the database directory object where the external file is located.

    • ACCESS PARAMETERS (...): Defines how the data is parsed (e.g., RECORDS DELIMITED BY NEWLINE, FIELDS CSV).

    • LOCATION ('MY.txt'): Specifies the external data file.

    • REJECT LIMIT UNLIMITED: Allows all rows to be processed, even if some have errors.

    • inline_ext_tab: This is an alias for the inline external table definition.

Wednesday, July 31, 2019

Udev rules - SYMLINK - Device Persistence - Oracle ASM - Linux

Udev Rules for Oracle ASM Device Persistence on Linux

This document explains how to configure Udev rules in Linux to ensure persistent device naming for Oracle Automatic Storage Management (ASM) disks. This is crucial for maintaining stable disk paths across reboots, which is a requirement for ASM.

Understanding Device Persistence with Udev

In Linux, device names like /dev/sda, /dev/sdb, etc., are not guaranteed to be consistent across reboots. This can cause issues for applications like Oracle ASM, which rely on stable paths to storage. Udev is a device manager for the Linux kernel that allows you to define rules to create persistent symbolic links (symlinks) to your disks, ensuring they always have the same, predictable name regardless of their boot-time enumeration.

Steps to Configure Udev Rules for ASM

This example demonstrates the process for a single disk (/dev/sda), but the principles apply to multiple disks.

1. Present Raw Disks

Ensure that the raw disks intended for Oracle ASM are presented to the Linux operating system. These disks should be unpartitioned and not formatted with any file system.

2. Identify the SCSI ID of the Disk

The SCSI ID (or ID_SERIAL) provides a unique, persistent identifier for the disk. This is the key to creating a stable Udev rule.

  • Command:

    /lib/udev/scsi_id -g -u -d /dev/sda
    

    (Replace /dev/sda with the actual device path of your raw disk.)

  • Example Output:

    3600224800cbc991b76c2a957f833fc66
    

    This hexadecimal string is the unique SCSI ID for the disk.

3. Create/Update the Udev Rules File

Create a new Udev rules file (e.g., 99-asm.rules) in the /etc/udev/rules.d/ directory. The 99 prefix ensures that these rules are processed late in the Udev sequence, typically after other system-generated rules.

  • File: /etc/udev/rules.d/99-asm.rules

  • Content Example (for one disk):

    KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="3600224800cbc991b76c2a957f833fc66", SYMLINK+="asmdatadisk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
    • KERNEL=="sd*": Matches any block device with a name starting with sd (e.g., sda, sdb, sdc).

    • SUBSYSTEM=="block": Specifies that the rule applies to block devices.

    • ENV{DEVTYPE}=="disk": Ensures the rule applies only to whole disks, not partitions.

    • ENV{ID_SERIAL}=="3600224800cbc991b76c2a957f833fc66": This is the critical part. It matches the unique SCSI ID obtained in step 2.

    • SYMLINK+="asmdatadisk1": Creates a symbolic link named asmdatadisk1 in /dev/ (e.g., /dev/asmdatadisk1) that points to the actual device (e.g., /dev/sda). The += ensures that if other rules also create symlinks, this one is added.

    • OWNER="grid", GROUP="asmadmin", MODE="0660": Sets the ownership and permissions of the symlink.

      • OWNER="grid": Sets the owner to the grid OS user (typically the Oracle Grid Infrastructure owner).

      • GROUP="asmadmin": Sets the group to asmadmin (the ASM administrative group).

      • MODE="0660": Sets permissions to read/write for owner/group, and no access for others.

  • Important Note for Cloud Databases: The ENV{ID_SERIAL} attribute might not be consistently available or reliable for device persistence in some cloud environments (e.g., certain cloud-specific block storage types). In such cases, other attributes like ID_PATH or ID_WWN might be more appropriate, or cloud provider-specific persistent naming mechanisms should be used. The commented-out line in the original input (#ACTION=="add|change", ENV{ID_SCSI_SERIAL}=="...", SYMLINK+="asmdatadisk1", OWNER="grid", GROUP="asmadmin", MODE="0660") suggests an alternative that might be considered, but ID_SERIAL is generally preferred for on-premises setups.
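When several ASM disks need rules, building each line from its SCSI ID in the shell avoids copy-paste errors. A sketch, using the example serial and symlink name from the steps above:

```shell
# Build a 99-asm.rules line from a disk's SCSI ID; the serial below is the
# example from step 2 -- substitute the scsi_id output for each real disk.
SERIAL="3600224800cbc991b76c2a957f833fc66"
LINKNAME="asmdatadisk1"
RULE="KERNEL==\"sd*\", SUBSYSTEM==\"block\", ENV{DEVTYPE}==\"disk\", ENV{ID_SERIAL}==\"$SERIAL\", SYMLINK+=\"$LINKNAME\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
echo "$RULE"
# Then, as root, append the line to the rules file and reload:
#   echo "$RULE" >> /etc/udev/rules.d/99-asm.rules
#   udevadm control --reload-rules && udevadm trigger
```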

4. Reload Udev Rules

After modifying the Udev rules file, you must reload the Udev rules and trigger the Udev system to apply the changes without requiring a system reboot.

  • Command:

    udevadm control --reload-rules && udevadm trigger
    

After these steps, you should see the new symlink in the /dev/ directory, pointing to your raw disk, and it will persist across reboots. You can then use this persistent symlink (e.g., /dev/asmdatadisk1) when configuring your ASM disk groups.

Tuesday, June 25, 2019

Flashback Data Archive (Oracle Total Recall)

Oracle Flashback Data Archive (Total Recall) Configuration

Oracle Flashback Data Archive (FDA), also known as Oracle Total Recall, provides the ability to track and store historical changes to table data. This feature allows users to query past states of data without relying on traditional backup and recovery mechanisms.

Flashback Data Archive in Oracle 11g

In Oracle Database 11g, FDA management is primarily performed at the individual table level.

STEP 1: Create a Tablespace for FDA

A dedicated tablespace is required to store the historical data.

CREATE TABLESPACE FDA DATAFILE '+DATA_DG' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30G;

STEP 2: Create Flashback Archive

Define the flashback archive, specifying its tablespace and retention period.

CREATE FLASHBACK ARCHIVE DEFAULT fda1 TABLESPACE FDA QUOTA 10G RETENTION 1 YEAR; -- 'fda1' is the archive name

STEP 3: Enable FDA on Respective Table

Enable FDA for a specific table.

ALTER TABLE your_table_name FLASHBACK ARCHIVE; -- Replace 'your_table_name' with the actual table name

STEP 4: Check FDA is Enabled on Respective Table

Verify the FDA status for tables.

SELECT owner_name, table_name, flashback_archive_name, archive_table_name, status
FROM dba_flashback_archive_tables
ORDER BY owner_name, table_name;
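Once FDA is enabled on a table, historical data can be queried directly with a flashback query; a simple example (the interval is arbitrary):

```sql
-- State of the table as of one hour ago; for data older than the undo
-- retention, the rows are served from the flashback archive
SELECT *
FROM   your_table_name AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
```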

Flashback Data Archive in Oracle 12c and Later

Starting with Oracle Database 12c, FDA introduces the concept of managing logical groups of tables at an application level. This simplifies enabling or disabling FDA for multiple related tables.

STEP 1: Create a Tablespace for FDA

Similar to 11g, create a tablespace for FDA.

CREATE TABLESPACE FDA DATAFILE '+DATA_DG' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30G;

STEP 2: Create Flashback Archive and Grant Privileges

Create the flashback archive and grant necessary privileges to the user who will manage FDA.

CREATE FLASHBACK ARCHIVE FDA_NAME TABLESPACE FDA QUOTA 10G RETENTION 1 YEAR; -- FDA_NAME is the name of your flashback archive
GRANT FLASHBACK ARCHIVE ON FDA_NAME TO XYZ; -- Replace 'XYZ' with the username
GRANT FLASHBACK ARCHIVE ADMINISTRATOR TO XYZ;

Step 3: Enable FDA on Respective Table

Enable FDA for a specific table, associating it with the named flashback archive.

ALTER TABLE TEST FLASHBACK ARCHIVE FDA_NAME; -- Replace 'TEST' with your table name and 'FDA_NAME' with your archive name

Step 4: Set Context Level for Transaction Data

To ensure that context information (e.g., SYS_CONTEXT attributes) is stored with the transaction data, use DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL.

BEGIN
  DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL(level=>'ALL');
END;
/

Create a Group and Add a Table (Application Level Management)

Oracle 12c allows you to group tables under an application name for easier management.

-- Register an application for FDA
EXEC DBMS_FLASHBACK_ARCHIVE.REGISTER_APPLICATION(application_name=>'FDA_APP', flashback_archive_name=>'FDA_NAME');

-- Add a table to the registered application
EXEC DBMS_FLASHBACK_ARCHIVE.ADD_TABLE_TO_APPLICATION(application_name=>'FDA_APP', table_name=>'TEST', schema_name=>'XYZ');

Enable/Disable Group

You can enable or disable FDA for an entire application group.

-- Disable FDA for the application group
EXEC DBMS_FLASHBACK_ARCHIVE.DISABLE_APPLICATION(application_name=>'FDA_APP');

-- Enable FDA for the application group
EXEC DBMS_FLASHBACK_ARCHIVE.ENABLE_APPLICATION(application_name=>'FDA_APP');

Friday, May 31, 2019

CDB/PDB - 12.2, 18c, 19c new features

Oracle CDB/PDB Features and Management (12.2, 18c, 19c)

This document outlines key features and management operations for Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database versions 12.2, 18c, and 19c.

1. Creating a Container Database (CDB)

A CDB is the root container that hosts multiple PDBs.

CREATE DATABASE ... ENABLE PLUGGABLE DATABASE;

This command will create CDB$ROOT (the root container) and PDB$SEED (a template for creating new PDBs).

To verify the CDB status:

SELECT NAME, CDB, CON_ID FROM V$DATABASE;

2. Creating an Application Root (12.2, 18c, 19c)

An Application Root acts as a parent container for application PDBs, allowing for centralized management of common application data and metadata.

Connect to the CDB root before creating:

CONNECT SYS AS SYSDBA  -- or any common user with the CREATE PLUGGABLE DATABASE privilege
CREATE PLUGGABLE DATABASE your_app_root_name AS APPLICATION CONTAINER;

3. Creating an Application Seed (12.2, 18c, 19c)

An Application Seed is a template PDB within an Application Root, used for creating new application PDBs.

Ensure the current container is the application root:

ALTER SESSION SET CONTAINER = your_app_root_name;
CREATE PLUGGABLE DATABASE AS SEED ADMIN USER app_admin IDENTIFIED BY "####"; -- an ADMIN USER clause is required, as when creating any PDB from a seed

4. Creating a PDB from Seeds (12.2, 18c, 19c)

You can create new PDBs from PDB$SEED (in CDB$ROOT) or from an Application Seed (in an Application Root).

-- Connect to the CDB root
ALTER SESSION SET CONTAINER=CDB$ROOT;

-- Create a new PDB
CREATE PLUGGABLE DATABASE mypdbnew
  ADMIN USER MY_DBA IDENTIFIED BY "####"
  STORAGE (MAXSIZE UNLIMITED) -- Example storage clause
  DEFAULT TABLESPACE mypdbnew_tbs
  DATAFILE '/path/to/datafiles/mypdbnew01.dbf' SIZE 100M
  FILE_NAME_CONVERT =('/path/to/seed_datafiles/','/path/to/new_pdb_datafiles/'); -- Adjust paths
  -- Example: FILE_NAME_CONVERT =('/u01/app/oracle/oradata/CDB1/pdbseed/','/u01/app/oracle/oradata/CDB1/mypdbnew/');

-- Open the new PDB
ALTER PLUGGABLE DATABASE mypdbnew OPEN;

-- Verify user information in the CDB
SELECT con_id, username, default_tablespace, common FROM cdb_users WHERE con_id = (SELECT con_id FROM v$pdbs WHERE name = 'MYPDBNEW');

5. Moving PDBs / Plugging In a PDB (12.2, 18c, 19c)

This process involves unplugging a PDB from one CDB and plugging it into another, or into the same CDB.

-- Close the PDB to be unplugged
ALTER PLUGGABLE DATABASE mypdb CLOSE IMMEDIATE;

-- Unplug the PDB, creating an XML manifest file
ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/backup_location/mypdb.xml';

This command creates mypdb.xml (the manifest file) and leaves the PDB's datafiles in their current location.

  • Move Files: Manually move the PDB's datafiles (e.g., system.dbf, sysaux.dbf, mypdb.dbf) and the mypdb.xml file to the new desired location.

  • Check Compatibility (in the target CDB):

    -- Connect to the target CDB root
    ALTER SESSION SET CONTAINER=CDB$ROOT;
    -- Run the check; the function returns a BOOLEAN, so call it from PL/SQL rather than plain SQL
    SET SERVEROUTPUT ON
    DECLARE
      compatible BOOLEAN;
    BEGIN
      compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/path/to/new_location/mypdb.xml');
      DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'Compatible' ELSE 'Incompatible - see PDB_PLUG_IN_VIOLATIONS' END);
    END;
    /
    
  • Create PDB by Plugging In:

    CREATE PLUGGABLE DATABASE mypdb
      USING '/path/to/new_location/mypdb.xml'
      SOURCE_FILE_NAME_CONVERT =('/old/path/','/new/path/') -- Adjust paths as needed
      NOCOPY; -- Use NOCOPY if datafiles are already moved to the target location
    
  • Open the Plugged-In PDB:

    ALTER PLUGGABLE DATABASE mypdb OPEN;
    

    If in a RAC environment, ensure it's opened on all relevant nodes.

6. Cold Clone from Another PDB (12.2, 18c, 19c)

A cold clone creates a new PDB from an existing PDB while the source PDB is closed or in read-only mode.

-- Open source PDB in read-only mode (required for cold clone)
ALTER PLUGGABLE DATABASE mypdb OPEN READ ONLY FORCE;

-- Create the clone PDB
CREATE PLUGGABLE DATABASE mypdb_test
  FROM mypdb
  FILE_NAME_CONVERT =('/path/to/source_datafiles/','/path/to/clone_datafiles/'); -- Adjust paths

-- Open the cloned PDB
ALTER PLUGGABLE DATABASE mypdb_test OPEN;

(Note: For a Hot Clone, the source PDB does not need to be in read-only mode.)

7. Cold Clone from Non-CDB (12.2, 18c, 19c)

You can convert a non-CDB into a PDB and plug it into a CDB.

  1. Generate XML Manifest from Non-CDB:

    • Open the non-CDB in read-only mode and connect to it as SYSDBA, then generate the manifest.

    EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/path/to/noncdb.xml');
    
  2. Create PDB in CDB using XML:

    • Move the non-CDB's datafiles and the generated noncdb.xml file to the target CDB's desired location.

    • Connect to the target CDB root.

    CREATE PLUGGABLE DATABASE noncdb_pdb
      USING '/path/to/noncdb.xml'
      SOURCE_FILE_NAME_CONVERT =('/old/noncdb/datafiles/','/new/pdb/datafiles/')
      NOCOPY;
    
  3. Run noncdb_to_pdb.sql: After plugging in, this script must be run inside the new PDB to complete the conversion.

    ALTER PLUGGABLE DATABASE noncdb_pdb OPEN;
    ALTER SESSION SET CONTAINER = noncdb_pdb;
    @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
    

8. Dropping a Pluggable Database (12.2, 18c, 19c)

-- Close the PDB first
ALTER PLUGGABLE DATABASE mypdb CLOSE;

-- Drop the PDB and its associated datafiles
DROP PLUGGABLE DATABASE mypdb INCLUDING DATAFILES;

9. Hot Clone a Remote PDB or Non-CDB (12.2+)

This feature allows cloning a PDB or non-CDB from a remote database over a database link. The source must use local undo mode.

-- Create a database link to the remote source database
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'remotedb_service_name';

-- Create the new PDB as a clone from the remote source
CREATE PLUGGABLE DATABASE newpdb FROM remotedb@db_link;

-- Open the new PDB
ALTER PLUGGABLE DATABASE newpdb OPEN;

-- If cloning from a non-CDB, run the conversion script
-- ALTER SESSION SET CONTAINER = newpdb;
-- @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

10. Relocate PDB (12.2+)

Relocating a PDB moves it from one CDB to another while keeping the PDB online during most of the operation.

-- Create a database link to the source CDB
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'source_cdb_service_name';

-- Create the PDB in the target CDB and initiate relocation
CREATE PLUGGABLE DATABASE remote_pdb_name FROM remote_pdb_name@db_link RELOCATE;

-- Open the relocated PDB in the target CDB
ALTER PLUGGABLE DATABASE remote_pdb_name OPEN;

11. PDB Archive Files (12.2+)

PDB archive files (.pdb extension) are self-contained archives that include both the PDB's XML manifest and its datafiles. This simplifies PDB transport.

-- Unplug the PDB into an archive file
ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/backup_location/mypdb.pdb';
mypdb.pdb is an archive containing both the .xml manifest file and the datafiles.

-- Create a PDB from an archive file
CREATE PLUGGABLE DATABASE mypdb_test USING '/backup_location/mypdb.pdb';

12. PDB Refresh (12.2+)

Refreshable PDB clones allow a PDB to be periodically updated from a remote source PDB.

-- Create a database link to the source PDB
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'remote_pdb_service_name';

-- Create a refreshable PDB clone in manual refresh mode
CREATE PLUGGABLE DATABASE newpdb FROM remote_pdb_name@db_link REFRESH MODE MANUAL;

-- Open the refreshable PDB (a refreshable clone can only be opened read-only)
ALTER PLUGGABLE DATABASE newpdb OPEN READ ONLY;

-- To perform a manual refresh:
ALTER PLUGGABLE DATABASE newpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE newpdb REFRESH;
ALTER PLUGGABLE DATABASE newpdb OPEN READ ONLY; -- reopen read-only after the refresh

  • Auto-refresh:

    ALTER PLUGGABLE DATABASE newpdb REFRESH MODE EVERY 120 MINUTES;
    

    This auto-refresh only occurs if the PDB is closed.

13. Proxy PDB (12.2+)

A Proxy PDB acts as a pointer to a PDB in a remote CDB, allowing local access to a remote PDB without actually moving its datafiles.

  • Benefits:

    • Existing client connections unchanged: Clients can connect to the proxy PDB as if it were local.

    • Single entry point for cloud DB: Simplifies access to remote databases, especially in cloud environments.

    • Share an application root container: Enables sharing of an application root's content across multiple containers.

  • Example (CDB 1 instance):

    -- Create a database link to the remote CDB
    CREATE DATABASE LINK db_clone_link CONNECT TO c##remote_clone_user IDENTIFIED BY remote_clone_user_password USING 'CDB2_SERVICE_NAME';
    
    -- Create the proxy PDB
    CREATE PLUGGABLE DATABASE PDB2_PROXY AS PROXY FROM PDB2@db_clone_link;
    
    -- Verify the proxy PDB
    SELECT pdb_name, is_proxy_pdb, status FROM dba_pdbs WHERE pdb_name = 'PDB2_PROXY';
    

    After creation, the database link and its link user are no longer required for subsequent access.

  • Accessing the Proxy PDB:

    ALTER SESSION SET CONTAINER=PDB2_PROXY;
    -- Now you are effectively connected to the remote PDB2 via the proxy.
    -- DDL and DML operations performed here will execute on the remote CDB2 instance.
    

14. Snapshot Carousel (PDB Archives) (18c+)

Snapshot Carousel provides a repository for periodic point-in-time copies (snapshots) of a PDB, enabling easy recovery or cloning to a specific point in time.

-- Create a PDB with snapshot mode enabled
CREATE PLUGGABLE DATABASE pdb_snap
  ADMIN USER MY_DBA IDENTIFIED BY "####"
  SNAPSHOT MODE EVERY 24 HOURS; -- Automatically creates a snapshot every 24 hours

-- Open the PDB
ALTER PLUGGABLE DATABASE pdb_snap OPEN;

-- MAX_PDB_SNAPSHOTS can be set between 0 and 8 (0 deletes the existing snapshots)
ALTER PLUGGABLE DATABASE pdb_snap SET MAX_PDB_SNAPSHOTS = 5;

-- Manual snapshot creation
ALTER PLUGGABLE DATABASE pdb_snap SNAPSHOT xyz_snap;

-- Create a new PDB from one of the archives (snapshots)
CREATE PLUGGABLE DATABASE pdb_from_snap
  FROM pdb_snap
  USING SNAPSHOT xyz_snap;

15. Transportable Backups (18c+)

This feature supports using backups performed on a PDB before it is unplugged and plugged into a new container. This significantly streamlines PDB relocation for purposes like load balancing or migration between on-premises and cloud, as it avoids the need for new backups immediately before and after each PDB move.

16. Switchover Refreshable Clone PDB between CDBs (Migration) (18c+)

This allows for a planned or unplanned switchover of a refreshable clone PDB between different CDBs, facilitating PDB migration with minimal downtime.

  • Planned Switchover:

    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH MODE EVERY 2 MINUTES FROM remote_pdb_name@dblink SWITCHOVER;
    

    This command prepares the PDB for a switchover, automatically refreshing it.

  • Unplanned Switchover (after a planned switchover setup):

    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH; -- Perform a final refresh
    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH MODE NONE; -- Disable refresh mode
    ALTER PLUGGABLE DATABASE your_pdb_name OPEN READ WRITE; -- Open the PDB in read-write mode
    

17. Transient No-Standby PDBs (Clone) (18c+)

This feature allows creating a hot clone of a PDB without it being replicated to a standby database, useful for temporary testing or development environments.

  • Hot clone to transient PDB:

    CREATE PLUGGABLE DATABASE transient_pdb FROM source_pdb STANDBYS=NONE;
    
  • Cold clone of this transient PDB with standby: You can then create a cold clone of this transient PDB that does include standby replication.

  • Drop transient PDB:

    DROP PLUGGABLE DATABASE transient_pdb INCLUDING DATAFILES;
    

18. AWR for Pluggable Database (12.2 onwards)

Starting from Oracle 12.2, AWR (Automatic Workload Repository) data can be collected at the PDB level, providing granular performance insights for individual PDBs.

  • Enable AWR Auto Flush for PDBs:

    ALTER SYSTEM SET awr_pdb_autoflush_enabled=TRUE;
    
  • View PDB Snapshots:

    SELECT * FROM awr_pdb_snapshot;
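Snapshots can also be taken manually inside a PDB; a short sketch (mypdb is a placeholder name):

```sql
-- Take a manual AWR snapshot at PDB level
ALTER SESSION SET CONTAINER = mypdb;
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
```

PDB-level AWR reports can then be generated with awrrpt.sql, selecting AWR_PDB as the report location.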