
Wednesday, January 1, 2025

mailx vs. s-nail

mailx vs. s-nail: A Comparison

mailx and s-nail are command-line utilities used for sending and receiving emails on Unix-like operating systems. While they serve the same fundamental purpose, s-nail is often considered a modern, enhanced successor to the traditional mailx.

mailx

mailx is a traditional command-line mail utility that has been around for a long time. It's part of the POSIX standard and is widely available on most Unix-like systems. Note: mailx is deprecated in Red Hat Enterprise Linux (RHEL) 9.

Key Characteristics of mailx:

  • Traditional: It provides basic functionality for composing, sending, and reading emails from the command line.

  • Simplicity: Its syntax and features are relatively straightforward, making it easy for basic email tasks.

  • Standard: As a POSIX standard utility, its behavior is generally consistent across different systems.

  • Limited Features: Compared to more modern mail clients, mailx has limited features for handling complex email scenarios, advanced authentication, or modern protocols.

  • Security: Older versions might have limitations regarding modern encryption standards or authentication methods.

Common Use Cases for mailx:

  • Sending simple notifications or alerts from scripts.

  • Basic email composition and sending from the command line.

  • Reading local mailboxes.

Usage Examples for mailx:

  • Sending a simple email:

    echo "This is the body of the email." | mailx -s "Subject of the Email" recipient@example.com
    
  • Reading local mailbox:

    mailx
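
When mailx is called from cron or monitoring scripts, it pays to check the exit status rather than fire-and-forget. A minimal sketch, assuming a wrapper of our own design (the MAILER override and the recipient address are illustrative, not part of mailx itself):

```shell
# Wrap the mail command so scripts can log failures instead of silently
# losing alerts. MAILER is overridable, which also makes the wrapper testable.
MAILER="${MAILER:-mailx}"

send_alert() {  # $1 = subject, $2 = recipient; message body is read from stdin
  if "$MAILER" -s "$1" "$2"; then
    echo "alert sent to $2"
  else
    echo "mail command failed with status $?" >&2
    return 1
  fi
}

# Typical use from a monitoring script:
# echo "Disk usage high on $(hostname)" | send_alert "ALERT: disk" ops@example.com
```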
    

s-nail

s-nail is a modern, feature-rich mail client that aims to be a compatible replacement for mailx while offering significant enhancements. It is often found as the default mailx implementation on newer Linux distributions (e.g., Debian, Ubuntu, RHEL/CentOS 8+).

Key Characteristics of s-nail:

  • Enhanced Functionality: s-nail extends the capabilities of mailx with features like:

    • Improved Protocol Support: Better support for SMTP, IMAP, POP3, and SMTPS/IMAPS/POP3S (secure versions).

    • Advanced Authentication: Support for various authentication mechanisms (e.g., PLAIN, LOGIN, CRAM-MD5, NTLM).

    • TLS/SSL Support: Robust handling of encrypted connections for secure email transmission.

    • MIME Support: Better handling of MIME types for attachments and rich text.

    • Internationalization: Improved handling of character sets and encodings.

    • Configuration: More flexible and powerful configuration options via ~/.s-nailrc or system-wide files.

  • mailx Compatibility: s-nail strives to be largely command-line compatible with mailx, meaning scripts written for mailx often work seamlessly with s-nail.

  • Security Focus: Designed with modern security considerations in mind, making it a more secure choice for sensitive email operations.

  • Active Development: It benefits from ongoing development and improvements.

Common Use Cases for s-nail:

  • Sending emails from scripts with advanced requirements (e.g., secure connections, specific authentication).

  • Acting as a robust command-line email client for users who prefer a terminal-based interface.

  • Automated reporting and alerting in modern environments where security and protocol support are critical.

Usage Examples for s-nail:

  • Sending an email with an attachment (using a common mailx compatible syntax):

    echo "Please find the report attached." | s-nail -s "Daily Report" -a /path/to/report.pdf recipient@example.com
    
  • Sending an email via a specific SMTP server with authentication:

    s-nail -v -S smtp-use-starttls -S smtp=smtp.example.com:587 \
           -S smtp-auth=login -S smtp-auth-user=your_username \
           -S smtp-auth-password=your_password \
           -s "Secure Email Test" recipient@example.com <<EOF
    This is a secure email sent via s-nail.
    EOF
    

Comparison Summary

| Feature | mailx (Traditional) | s-nail (Modern) |
| --- | --- | --- |
| Protocol Support | Basic SMTP, local mailboxes | SMTP, SMTPS, IMAP, IMAPS, POP3, POP3S, local mailboxes |
| Authentication | Limited | Extensive (PLAIN, LOGIN, CRAM-MD5, NTLM, etc.) |
| Encryption (TLS/SSL) | Often limited or needs external configuration | Built-in, robust TLS/SSL support |
| MIME Support | Basic; often requires external tools for complex attachments | Improved handling of various MIME types and attachments |
| Configuration | Simpler; command-line options or a basic ~/.mailrc | More powerful; extensive options via ~/.s-nailrc and system-wide files |
| Development Status | Mature, less active development | Actively developed and maintained |
| Default on Systems | Older Unix/Linux systems (e.g., CentOS 7, older BSDs) | Newer Linux distributions (e.g., Debian, Ubuntu, RHEL/CentOS 8+), often aliased as mailx |
| Security | May have limitations with modern security requirements | Designed with modern security in mind |

Conclusion

While mailx remains a functional tool for basic email tasks, s-nail is the preferred choice for modern environments due to its expanded feature set, improved protocol support, and enhanced security capabilities. On many contemporary Linux distributions, the mailx command itself often points to the s-nail binary, providing a seamless upgrade path for users and scripts.

Monday, July 12, 2021

ORA-01017: invalid username/password; logon denied while creating DB using DBCA

Problem: DBCA ORA-01017 due to OS user/group permissions.

Verify:

  1. oracle User: Ensure you're running DBCA as the oracle OS user (or grid for Grid Infrastructure).
  2. oinstall Group: Check if oracle (and grid) are in the oinstall group.
    • Command: getent group oinstall | egrep -i 'oracle|grid'
    • Fix (if missing, as root): usermod -a -G oinstall oracle (repeat for grid)
    • Action: Log out and back in as oracle (or grid) after changes.
  3. Other Groups: Confirm oracle is in dba and other relevant groups (e.g., asmdba for ASM).
    • Command: id -Gn oracle

Summary: Ensure oracle (or grid) OS user is correctly part of oinstall and other necessary Oracle groups.
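
The membership checks above can be scripted. A small sketch (the required-group list is an assumption; extend it with asmdba and friends for ASM setups):

```shell
# Print every required group that is missing from a user's group list.
# Empty output means the user is in all required groups.
REQUIRED_GROUPS="oinstall dba"

missing_groups() {  # $1 = space-separated group list, e.g. "$(id -Gn oracle)"
  for g in $REQUIRED_GROUPS; do
    case " $1 " in
      *" $g "*) ;;                 # group present
      *) printf '%s\n' "$g" ;;     # group missing
    esac
  done
}

# Typical use (as root):
# missing_groups "$(id -Gn oracle)"
```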

Sunday, October 11, 2020

Application Continuity for the Oracle Database

Draining and Rebalancing Sessions for Planned Maintenance

For planned database maintenance, effectively managing active sessions is crucial to minimize disruption. Oracle provides several mechanisms to drain sessions and rebalance workloads, especially when integrated with connection pools and mid-tiers.

Session Draining

Session draining is a technique used to gracefully terminate sessions, allowing ongoing work to complete before the underlying resource (like a PDB or instance) is taken offline.

  • Fast Application Notification (FAN): Widely used with Oracle connection pools and mid-tiers, FAN enables the database to notify applications about upcoming changes (e.g., a node going down). This allows applications to drain their connections from the affected instance.
  • Database-Initiated Draining (Oracle Database 18c+): Starting with Oracle Database 18c, the database automatically drains sessions when PDBs or instances are stopped or relocated. This simplifies the draining process for administrators.

Failover Solutions

When sessions cannot be drained within the allocated time, failover solutions act as a fallback to ensure application continuity.

Transparent Application Failover (TAF)

TAF provides basic failover capabilities by creating a new session if the primary connection is lost.

  • SELECT Mode: When configured with SELECT mode, TAF can replay queries from the point of failure, allowing read-only operations to resume seamlessly on a new connection.
  • FAILOVER_RESTORE (12.2.0.1): Introduced in 12.2.0.1, this setting restores the initial common session state before replaying a request when set to LEVEL1; the default is NONE.
  • FAILOVER_TYPE=AUTO (18c+): From Oracle Database 18c, setting FAILOVER_TYPE=AUTO automatically configures FAILOVER_RESTORE=AUTO, which is equivalent to Level1 behavior, providing more automated session restoration.
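
For classic TAF, the failover policy is typically declared in the client's tnsnames.ora. A hedged sketch (alias, host, port, and service name are placeholders):

```
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )
```

From 12.2 onward these attributes are more commonly set on the database service itself (for example with srvctl modify service -failovertype), so that all clients inherit them without per-client configuration.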

Application Continuity (AC)

Application Continuity is a more advanced failover solution designed to hide outages from applications, preserving the end-user experience.

  • Coverage: AC was introduced in Oracle Database 12.1 for thin Java applications and extended to OCI and ODP.NET based applications in Oracle Database 12.2.0.1.
  • Session Recovery: AC goes beyond TAF by rebuilding the entire session, including both session states (e.g., NLS settings, temporary tables) and transactional states, from a known good point after a recoverable outage.

Transparent Application Continuity (TAC)

Introduced in Oracle Database 18c, Transparent Application Continuity further enhances AC.

  • Automated State Tracking: TAC transparently tracks and records the session and transactional state. This automation allows the database session to be recovered following a recoverable outage without requiring explicit application code changes for state management.

https://www.oracle.com/technetwork/database/options/clustering/applicationcontinuity/applicationcontinuityformaa-6348196.pdf

Monday, August 31, 2020

All about oracle-database-preinstall-19c (Oracle Linux 8, 7)

Installing oracle-database-preinstall-19c RPM

This document outlines the process for downloading and installing the oracle-database-preinstall-19c RPM package, and explains its effects on your Oracle Linux system.

1. How to Download

The oracle-database-preinstall-19c RPM is available directly from Oracle's YUM repositories. Choose the correct link based on your Oracle Linux version:

  • For Oracle Linux 8 (OEL8): https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm

  • For Oracle Linux 7 (OEL7 - Latest): https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-2.el7.x86_64.rpm

  • For Oracle Linux 7 (OEL7 - Older Version): https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

To download the RPM: You can use wget or curl directly from your Linux server. For example, for OEL8:

Bash
wget https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm

Note: It's generally recommended to use yum install oracle-database-preinstall-19c (or dnf install on OL8) as it will automatically download the correct version and handle dependencies.

2. How to Install

Once downloaded, you can install the RPM package using the yum (or dnf on OL8) command as root.

Command:

Bash
sudo yum install oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm
# Replace the RPM filename with the one you downloaded for your specific OS version.

Using yum directly (recommended for dependency resolution):

Bash
sudo yum install oracle-database-preinstall-19c
# Or on OL8:
sudo dnf install oracle-database-preinstall-19c

To verify installation:

Bash
rpm -qa | grep oracle-database-preinstall-19c

(Note: querying rpm -qa with a full package filename only matches when the name is exact; piping rpm -qa through grep, as shown above, is the standard way to search installed packages.)

3. What it Will Do

The oracle-database-preinstall-19c RPM automates many of the prerequisite configurations required for Oracle Database 19c installation:

  • Create an oracle user: A dedicated operating system user named oracle is created for database administration.
  • Create groups: Essential OS groups such as oinstall, dba, backupdba, dgdba, kmdba, and racdba are created.
  • Update kernel parameters: System kernel parameters (e.g., shmmax, shmall, semaphores, file-max, ip_local_port_range) are adjusted to recommended values for Oracle Database.
  • Install dependencies: Installs various dependent packages required by Oracle Database.

4. What Happens if Existing User/Group Exists?

The preinstall RPM is designed to handle existing configurations gracefully:

  • Existing oracle user: If an oracle user already exists, the RPM will ignore its creation. It will not modify the existing user's home directory, UID, or primary group unless specifically necessary for group membership.
  • Existing groups: If any of the required groups (e.g., oinstall, dba) already exist, the RPM will add the oracle user to these groups if they are not already members. It will not recreate or modify the GIDs of existing groups. New required groups that don't exist will be created.
  • Kernel parameters: The RPM will replace existing kernel parameters (typically found in /etc/sysctl.conf or a file under /etc/sysctl.d/) with the recommended values for Oracle Database 19c. It's crucial to review these changes if you have custom kernel parameter settings.
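
Because the RPM overwrites kernel settings, it is worth snapshotting them before installing so custom values can be diffed afterwards. A minimal sketch (the parameter list is an assumption based on the values the RPM is documented to touch):

```shell
# Record current values of kernel parameters the preinstall RPM adjusts,
# so a before/after diff reveals exactly what changed.
snapshot_sysctl() {  # $1 = output file
  for p in kernel.shmmax kernel.shmall kernel.sem fs.file-max \
           net.ipv4.ip_local_port_range; do
    printf '%s = %s\n' "$p" "$(sysctl -n "$p" 2>/dev/null || echo 'n/a')"
  done > "$1"
}

# snapshot_sysctl /tmp/sysctl.before
# sudo yum install oracle-database-preinstall-19c
# snapshot_sysctl /tmp/sysctl.after
# diff /tmp/sysctl.before /tmp/sysctl.after
```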

Monday, August 24, 2020

Cloud backup - oracle - Dell - DDVE

Dell Data Protection Solutions for Database Workloads

This document provides a brief overview of key Dell Data Protection components relevant to database workloads and data management.

Components Overview

  • Dell Data Domain Virtual Edition (DDVE):

    • Primarily designed for Database (DB) workloads. It provides software-defined data protection with deduplication.

  • Dell Cloud Snapshot Manager (CSM):

    • Focuses on Application (App) workloads. CSM helps manage snapshots for applications, particularly in cloud environments.

  • Data Domain Management Center (DDMC):

    • Used to manage multiple DDVE instances from a centralized console, simplifying administration of a Data Domain estate.

  • PowerProtect Management Center (PPDM):

    • A comprehensive solution offering capabilities for data protection, backup scheduling, deduplication, operational agility, self-service, and IT governance. PPDM provides a unified platform for managing enterprise data protection.

Key Considerations for Database Backups

  • app-optimized-compression: This feature is associated with oracle1 (likely referring to a specific Oracle integration or policy) to optimize compression for database backups.

  • "File per set" configuration: For optimal performance, "File per set" should not be specified during database backup configurations, as it can introduce performance bottlenecks.

Wednesday, December 18, 2019

Things to Know when you do 19c upgrade

Oracle 19c Upgrade Checklist and Best Practices

This document outlines key considerations and recommended steps when performing an upgrade to Oracle Database 19c, focusing on best practices for a smooth transition and optimal post-upgrade performance.

1. Recommended Upgrade Method

  • autoupgrade.jar: The autoupgrade.jar utility is the recommended and most robust way to perform Oracle 19c upgrades. It automates many pre-checks, pre-upgrade fixes, and post-upgrade tasks, simplifying the process and reducing manual errors.

2. Pre-Upgrade Checks

Before initiating the upgrade, ensure the following:

a. Dictionary Statistics

Verify that dictionary and fixed object statistics have been gathered recently. This is crucial for the optimizer's performance during and after the upgrade.

column OPERATION format a40
set linesize 200
select to_char(max(END_TIME),'DD-MON-YY hh24:mi') LATEST, OPERATION
from DBA_OPTSTAT_OPERATIONS
where OPERATION in ('gather_dictionary_stats','gather_fixed_objects_stats')
group by operation;

b. Stats on Clustered Indexes (If not using autoupgrade.jar)

If you are not using autoupgrade.jar (which typically handles this), it's recommended to gather statistics on critical SYS schema clustered indexes. This helps the optimizer in the new version.

exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_index_stats('SYS','I_OBJ#');
exec dbms_stats.gather_index_stats('SYS','I_FILE#_BLOCK#');
exec dbms_stats.gather_index_stats('SYS','I_TS#');
exec dbms_stats.gather_index_stats('SYS','I_USER#');
exec dbms_stats.gather_index_stats('SYS','I_TOID_VERSION#');
exec dbms_stats.gather_index_stats('SYS','I_MLOG#');
exec dbms_stats.gather_index_stats('SYS','I_RG#');

3. Post-Upgrade Actions

After the upgrade is complete, consider these immediate actions:

  • Adjust Stats History Retention:

    exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(14);
    

    This sets the statistics history retention to 14 days.

  • Set Key Parameters in SPFILE:

    • _cursor_obsolete_threshold=1024

    • deferred_segment_creation=false

    • _sql_plan_directive_mgmt_control=0

    • Set optimizer_adaptive_statistics=FALSE explicitly in your SPFILE (It's recommended to explicitly set this to FALSE as adaptive statistics can sometimes lead to unexpected plan changes.)
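
As a sketch, the parameters above could be applied from SQL*Plus as follows (underscore parameters must be double-quoted; SCOPE=SPFILE defers the change to the next restart):

```sql
ALTER SYSTEM SET "_cursor_obsolete_threshold"=1024 SCOPE=SPFILE;
ALTER SYSTEM SET deferred_segment_creation=FALSE SCOPE=SPFILE;
ALTER SYSTEM SET "_sql_plan_directive_mgmt_control"=0 SCOPE=SPFILE;
ALTER SYSTEM SET optimizer_adaptive_statistics=FALSE SCOPE=BOTH;
```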

4. Optimizer Parameters

  • COMPATIBLE and OPTIMIZER_FEATURES_ENABLE:

    • Ensure the COMPATIBLE parameter is set to the latest version (e.g., 19.0.0).

    • The OPTIMIZER_FEATURES_ENABLE parameter should also be set to the latest version ('19.1.0') to leverage the latest optimizer enhancements.

5. Performance Analysis and Tuning

a. Collect Execution Plans Before Upgrade

Capture existing execution plans to compare them after the upgrade and identify any regressions.

  • From Cursor Cache: Query V$SQL_PLAN or GV$SQL_PLAN for active and frequently executed SQL statements.

  • Using AWR: Analyze AWR reports for top SQL statements.

  • SQL Tuning Sets (STS): The most robust method. Create an STS from the AWR or cursor cache to capture SQL statements, their execution statistics, and execution plans.

    • This allows you to replay the workload later using SQL Performance Analyzer (SPA).

b. Compare AWR Snapshots

  • AWRDDRPT.sql: Use the AWRDDRPT.sql script (located in $ORACLE_HOME/rdbms/admin) to generate AWR Diff reports. This allows you to compare performance metrics between AWR snapshots taken before and after the upgrade.

  • Export AWR Data: You can export AWR data using the awrexp script (also in $ORACLE_HOME/rdbms/admin) to analyze it on a different database or for long-term storage.

c. SQL Tuning Sets (STS) and SQL Performance Analyzer (SPA)

  • Capture STS: Capture a representative workload into a SQL Tuning Set.

  • Load STS: Load this STS into the upgraded database.

  • SQL Performance Analyzer (SPA): Use SPA (part of Real Application Testing) to compare the performance of the SQL statements in the STS before and after the upgrade. SPA identifies SQL statements with plan changes or performance regressions.

d. SQL Plan Management (SPM)

SPM is a powerful feature to control and stabilize execution plans.

  • Configuration:

    • DBMS_SPM.CONFIGURE('PLAN_RETENTION_WEEKS', 5); (Default is 53 weeks)

    • DBMS_SPM.CONFIGURE('SPACE_BUDGET_PERCENT', 5); (Default is 10%)

  • Baseline Capture:

    • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE (Set this to start recording new plans as baselines. Remember to turn it off after capturing.)

  • Baseline Selection/Usage:

    • OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE (Ensures the optimizer uses existing baselines.)

    • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = FALSE (Turn off capture during normal operation.)

  • Evolution:

    • DBMS_SPM.REPORT_AUTO_EVOLVE_TASK: Reports on the automatic evolution task.

    • DBMS_SPM.CREATE_EVOLVE_TASK: Manually creates a task to evolve (verify and accept) new plans into baselines.

e. SQL Tuning Advisor (STA)

  • Utilize the SQL Tuning Advisor to analyze problematic SQL statements identified during post-upgrade testing. It can recommend various tuning actions, including new indexes, SQL profile creation, or SQL structure changes.

f. Export/Import STS to New DB

  • After capturing an STS from the source database, you can export it and import it into the target (upgraded) database for performance analysis.

  • DBMS_SPM.LOAD_PLANS_FROM_SQLSET: This procedure can be used to load plans from an STS into the SQL Plan Baseline (SPM) repository of the new database.
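
A hedged sketch of loading plans from a previously imported STS into the SPM repository (the STS name STS_PRE_UPGRADE is a placeholder):

```sql
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Load the plans captured before the upgrade as accepted baselines.
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
               sqlset_name => 'STS_PRE_UPGRADE');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/
```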

g. Workload Capture and Replay

  • Real Application Testing (RAT): This feature allows you to capture a real production workload from the source database and replay it on the upgraded database. This provides a highly accurate way to test the impact of the upgrade on performance.

    • Licensing note: SPA is itself part of the Real Application Testing option, which requires a separate license (as does workload capture/replay); SQL Plan Management (SPM), by contrast, is a free feature.

h. Automatic SPM (Exadata 19c)

  • On Exadata with Oracle 19c, Automatic SPM can further simplify SQL plan management by automatically managing baselines for frequently executed SQL.

By following these guidelines, you can significantly improve the success rate and performance stability of your Oracle 19c database upgrade.


Tuesday, December 3, 2019

How to upgrade DB from 12.1 to 19.3

Oracle Database 12.1 to 19.3 Upgrade Steps

This document provides a step-by-step guide for upgrading an Oracle Database from version 12.1 to 19.3. It outlines key actions, from software installation to post-upgrade tasks, to ensure a successful and efficient upgrade process.

Pre-Upgrade Preparations

  1. Install Oracle 19c Software:

    • Install the Oracle Database 19.3.0 software binaries into a new Oracle Home directory. Do not install it over your existing 12.1 Oracle Home. This new home will be referred to as 19.3_Home.

  2. Remove Obsolete init Parameters:

    • Review your current init.ora or SPFILE for any parameters that are no longer supported or are obsolete in Oracle 19c. Remove or adjust these parameters as necessary. Refer to Oracle documentation for a complete list of obsolete parameters.

  3. Stop the Listener:

    • Before proceeding with the database upgrade, stop the Oracle Listener associated with your 12.1 database.

    • lsnrctl stop

  4. Gather Dictionary and Fixed Objects Statistics:

    • It is critical to have up-to-date dictionary and fixed object statistics before starting the upgrade. This helps the upgrade process itself and ensures optimal performance post-upgrade.

    EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
    EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
    
  5. Empty Recycle Bin:

    • Purge the recycle bin to avoid potential issues during the upgrade.

    PURGE DBA_RECYCLEBIN;
    
  6. Check and Update Time Zone File:

    • Verify the current time zone file version and update it to the latest version compatible with Oracle 19c if necessary. This is crucial for consistent time zone handling.

    • Refer to Oracle Support Note "Updating the Time Zone File and Timestamp with Time Zone Data in Oracle Database" for detailed instructions.

  7. Run Pre-Upgrade Information Tool:

    • Execute the preupgrade.jar tool from the 19c Oracle Home, pointing it to your 12.1 database. This tool performs a comprehensive analysis of your database for potential upgrade issues and generates fix-up scripts.

    (12.1_Home)/jdk/bin/java -jar (19.3_Home)/rdbms/admin/preupgrade.jar FILE TEXT DIR /home/oracle/upgrade
    
    • After running the tool, execute the generated fix-up script:

    @/home/oracle/upgrade/preupgrade_fixups.sql
    
    • Review the preupgrade.log and preupgrade_info.txt files for any remaining warnings or manual actions required.

Upgrade Execution

  1. Stop RAC Instances (if applicable):

    • If you are upgrading a Real Application Clusters (RAC) database, stop the second (and subsequent) RAC instances.

    • Disable the cluster on the first instance and then stop the first instance.

  2. Set Environment Variables for 19c:

    • Ensure your environment variables (especially ORACLE_HOME and PATH) are set to point to the new 19.3 Oracle Home.

  3. Start Database in Upgrade Mode:

    • Start the database from the 19.3 Oracle Home in upgrade mode.

    sqlplus / as sysdba
    startup upgrade
    
  4. Invoke Database Upgrade:

    • Execute the dbupgrade utility from the 19.3 Oracle Home. This command initiates the actual database upgrade process.

    (19.3_Home)/bin/dbupgrade
    
    • Monitor the output of this command closely for any errors or warnings.

Post-Upgrade Actions

  • After dbupgrade completes, the database will typically shut down.

  • Start the database in normal mode from the 19.3 Oracle Home.

  • Run the post-upgrade scripts and perform necessary post-upgrade checks as recommended by Oracle documentation (e.g., the generated postupgrade_fixups.sql, and utlrp.sql to recompile invalid objects).

  • Re-enable cluster services and start all RAC instances if applicable.

  • Perform performance analysis and tuning as outlined in the "Oracle 19c Upgrade Checklist and Best Practices" document, including AWR comparisons, STS analysis, and SPM configuration.

Following these steps carefully will help ensure a successful upgrade of your Oracle 12.1 database to 19.3.

Wednesday, September 25, 2019

Oracle 19c Features and Enhancements

This document highlights several key features and enhancements introduced or improved in Oracle Database 19c, focusing on their functionality and practical implications.

1. Real-time Statistics (Exadata Only)

Oracle Database 19c introduces real-time statistics, extending online statistics gathering to include conventional DML statements. This feature allows statistics to be collected "on-the-fly" during INSERT, UPDATE, and MERGE operations.

  • Availability: This feature is only available on Exadata and Exadata Cloud Service.

  • Functionality: Statistics are gathered dynamically during conventional DML, providing the optimizer with up-to-date information for plan generation.

  • Parameters (Default TRUE):

    • _optimizer_gather_stats_on_conventional_dml: Controls whether real-time statistics are gathered.

    • _optimizer_use_stats_on_conventional_dml: Controls whether the optimizer uses real-time statistics.

    • _optimizer_stats_on_conventional_dml_sample_rate: Defaults to 100%, indicating the sampling rate for collection.

  • Disabling: You can set these parameters to FALSE to disable real-time statistics.

  • Impact on Dictionary Views:

    • USER_TABLES.NUM_ROWS: This column does not reflect real-time statistics changes.

    • USER_TAB_STATISTICS.NOTES: Will show STATS_ON_CONVENTIONAL_DML if real-time stats are active.

    • USER_TAB_COL_STATISTICS.NOTES: Will also show STATS_ON_CONVENTIONAL_DML for columns.

  • Limitations:

    • Direct Path INSERT ... SELECT: Real-time statistics have no effect.

    • DELETE operations: Real-time statistics have no effect.

    • Gathering statistics for a table (e.g., using DBMS_STATS) will wipe out the real-time statistics for that table.
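
One way to see whether a given table currently carries real-time statistics is to inspect the NOTES columns mentioned above (MY_TABLE is a placeholder):

```sql
SELECT table_name, num_rows, notes
FROM   user_tab_statistics
WHERE  table_name = 'MY_TABLE';
-- NOTES = 'STATS_ON_CONVENTIONAL_DML' indicates real-time statistics are present.
```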

2. High-Frequency Automatic Optimizer Statistics

This feature allows for more frequent, granular collection of optimizer statistics, improving the accuracy of execution plans.

  • Enabling/Configuring:

    EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON'); -- Ensure auto tasks are on
    EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300'); -- Set interval to 300 seconds (5 minutes)
    
  • Interval: The minimum allowed value for AUTO_TASK_INTERVAL is 60 seconds, and the maximum is 900 seconds.

  • Monitoring: You can check the execution status of automatic statistics gathering tasks using:

    SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS' ) AS BEGIN_TIME,
           TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
           TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
    FROM DBA_AUTO_STAT_EXECUTIONS
    ORDER BY OPID;
    

3. Validate SPFILE Parameters for Primary and Standby

Oracle 19c introduces a command to validate SPFILE parameters, which is particularly useful in Data Guard environments to ensure consistency between primary and standby databases.

  • Command:

    VALIDATE DATABASE {database-name} SPFILE;
    

    Replace {database-name} with the actual database name. Run from the Data Guard broker command-line interface (DGMGRL), this command reports missing or inconsistent SPFILE parameter settings.

4. Schema Only Accounts

Schema-only accounts, introduced in Oracle Database 18c (the proxy-authentication mechanism they rely on dates back to 9i), are a security best practice that restricts direct login access to schema owners.

  • Purpose:

    • Restrict Direct Access: Prevents users from directly logging in as the schema owner using shared credentials.

    • Proxy Connections: Users access the schema to perform DDL/DML changes via proxy connections, where a different user (with a password) connects and then proxies to the schema-only account.

    • Auditing: Allows for better auditing, as you can track which specific proxy user performed which tasks within the schema.

  • NO AUTHENTICATION Clause: A schema-only account is created with the NO AUTHENTICATION clause, i.e., without a password. Direct connections to the account are therefore impossible, while proxy connections remain fully functional.
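
A minimal sketch of a schema-only account with a proxying user (all names and the password are placeholders):

```sql
-- Schema owner with no password: direct logon is impossible.
CREATE USER app_owner NO AUTHENTICATION;
GRANT CREATE SESSION, CREATE TABLE TO app_owner;

-- A named, audited user who is allowed to proxy into the schema.
CREATE USER deploy_user IDENTIFIED BY "ChangeMe#123";
GRANT CREATE SESSION TO deploy_user;
ALTER USER app_owner GRANT CONNECT THROUGH deploy_user;

-- Connect via proxy, e.g.: sqlplus deploy_user[app_owner]/"ChangeMe#123"
```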

5. Automatic Replication of Restore Points from Primary to Standby

In Oracle Data Guard, restore points can now be automatically replicated from the primary database to the standby database.

  • Mechanism: Restore points are replicated through the redo stream, which is then applied by the Managed Recovery Process (MRP) on the standby.

  • Primary Database State: For this replication to occur, the primary database must be in OPEN mode.

6. DML Operations on Active Data Guard Standby Databases

Oracle 19c enhances DML redirection capabilities for Active Data Guard, allowing DML operations to be performed on a read-only standby database by transparently redirecting them to the primary.

  • Enabling DML Redirect: This feature needs to be enabled on both the Primary and Standby databases.

    ALTER SYSTEM SET adg_redirect_dml=TRUE SCOPE=BOTH;
    
  • Connection Requirement: When connecting to the standby to perform DML that will be redirected, you must connect using a username/password. Connecting with sqlplus / as sysdba (OS authentication) will not work for DML redirection.
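
A hedged sketch of a redirected DML session on the standby (service name, user, and table are placeholders):

```sql
-- Must be a password-based connection; OS authentication is not supported here.
CONNECT app_user/app_password@standby_svc

INSERT INTO audit_log (msg) VALUES ('written via ADG DML redirect');
COMMIT;  -- the statement executes on the primary, and the session waits
         -- until the generated redo has been applied on the standby
```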

7. Inline External Table - EXTERNAL Clause (Zero DDL)

Oracle 19c introduces the ability to define external tables directly within a SQL query using the EXTERNAL clause, eliminating the need for separate DDL statements to create the external table object. This is often referred to as "Zero DDL" for external tables.

  • Concept: There is no need for an external table to be explicitly created as a database object. The definition is embedded directly in the SELECT statement.

  • Example: This example reads data from MY.txt (located in the MY_DIR directory), assuming it's a CSV file with three fields: object_id, owner, and object_name.

    SELECT *
    FROM   EXTERNAL (
             (
               object_id   NUMBER,
               owner       VARCHAR2(128),
               object_name VARCHAR2(128)
             )
             TYPE oracle_loader
             DEFAULT DIRECTORY MY_DIR
             ACCESS PARAMETERS (
               RECORDS DELIMITED BY NEWLINE
               BADFILE MY_DIR
               LOGFILE MY_DIR:'inline_ext_tab_as_%a_%p.log'
               DISCARDFILE MY_DIR
               FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
               MISSING FIELD VALUES ARE NULL (
                 object_id,
                 owner,
                 object_name
               )
             )
             LOCATION ('MY.txt')
             REJECT LIMIT UNLIMITED
           ) inline_ext_tab
    ORDER  BY 1;
    
    
    • object_id NUMBER, owner VARCHAR2(128), object_name VARCHAR2(128): Defines the column structure of the external data.

    • TYPE oracle_loader: Specifies the access driver.

    • DEFAULT DIRECTORY MY_DIR: Specifies the database directory object where the external file is located.

    • ACCESS PARAMETERS (...): Defines how the data is parsed (e.g., RECORDS DELIMITED BY NEWLINE, FIELDS CSV).

    • LOCATION ('MY.txt'): Specifies the external data file.

    • REJECT LIMIT UNLIMITED: Allows all rows to be processed, even if some have errors.

    • inline_ext_tab: This is an alias for the inline external table definition.

Friday, May 31, 2019

CDB/PDB - 12.2, 18c, 19c new features

Oracle CDB/PDB Features and Management (12.2, 18c, 19c)

This document outlines key features and management operations for Container Databases (CDBs) and Pluggable Databases (PDBs) in Oracle Database versions 12.2, 18c, and 19c.

1. Creating a Container Database (CDB)

A CDB is the root container that hosts multiple PDBs.

CREATE DATABASE ... ENABLE PLUGGABLE DATABASE;

This command will create CDB$ROOT (the root container) and PDB$SEED (a template for creating new PDBs).

To verify the CDB status:

SELECT NAME, CDB, CON_ID FROM V$DATABASE;
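To see the individual containers inside the CDB, V$CONTAINERS can be queried as well:

```sql
-- CDB$ROOT is CON_ID 1, PDB$SEED is CON_ID 2, user PDBs start at CON_ID 3
SELECT con_id, name, open_mode FROM v$containers ORDER BY con_id;
```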

2. Creating an Application Root (12.2, 18c, 19c)

An Application Root acts as a parent container for application PDBs, allowing for centralized management of common application data and metadata.

Connect to the CDB root before creating:

CONNECT sys/password@cdb_name AS SYSDBA  -- connect to the CDB root as SYSDBA
CREATE PLUGGABLE DATABASE your_app_root_name AS APPLICATION CONTAINER;
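To confirm the container was created as an application root, DBA_PDBS exposes an APPLICATION_ROOT flag (a sketch; verify the column names on your release):

```sql
SELECT pdb_name, application_root, application_seed
FROM   dba_pdbs;
```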

3. Creating an Application Seed (12.2, 18c, 19c)

An Application Seed is a template PDB within an Application Root, used for creating new application PDBs.

Ensure the current container is the application root:

ALTER SESSION SET CONTAINER = your_app_root_name;
CREATE PLUGGABLE DATABASE AS SEED ADMIN USER seed_admin IDENTIFIED BY "####";

4. Creating a PDB from Seeds (12.2, 18c, 19c)

You can create new PDBs from PDB$SEED (in CDB$ROOT) or from an Application Seed (in an Application Root).

-- Connect to the CDB root
ALTER SESSION SET CONTAINER=CDB$ROOT;

-- Create a new PDB
CREATE PLUGGABLE DATABASE mypdbnew
  ADMIN USER MY_DBA IDENTIFIED BY "####"
  STORAGE (MAXSIZE UNLIMITED) -- Example storage clause
  DEFAULT TABLESPACE mypdbnew_tbs
  DATAFILE '/path/to/datafiles/mypdbnew01.dbf' SIZE 100M
  FILE_NAME_CONVERT =('/path/to/seed_datafiles/','/path/to/new_pdb_datafiles/'); -- Adjust paths
  -- Example: FILE_NAME_CONVERT =('/u01/app/oracle/oradata/CDB1/pdbseed/','/u01/app/oracle/oradata/CDB1/mypdbnew/');

-- Open the new PDB
ALTER PLUGGABLE DATABASE mypdbnew OPEN;

-- Verify user information in the CDB
SELECT con_id, username, default_tablespace, common FROM cdb_users WHERE con_id = (SELECT con_id FROM v$pdbs WHERE name = 'MYPDBNEW');

5. Moving PDBs / Plugging In a PDB (12.2, 18c, 19c)

This process involves unplugging a PDB from one CDB and plugging it into another, or into the same CDB.

-- Close the PDB to be unplugged
ALTER PLUGGABLE DATABASE mypdb CLOSE IMMEDIATE;

-- Unplug the PDB, creating an XML manifest file
ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/backup_location/mypdb.xml';

This command creates mypdb.xml (the manifest file) and leaves the PDB's datafiles in their current location.

  • Move Files: Manually move the PDB's datafiles (e.g., system.dbf, sysaux.dbf, mypdb.dbf) and the mypdb.xml file to the new desired location.

  • Check Compatibility (in the target CDB):

    -- Connect to the target CDB root
    ALTER SESSION SET CONTAINER=CDB$ROOT;
    -- Run the check (no output means compatible, errors indicate issues)
    SELECT DBMS_PDB.CHECK_PLUG_COMPATIBILITY('/path/to/new_location/mypdb.xml') FROM DUAL;
    
  • Create PDB by Plugging In:

    CREATE PLUGGABLE DATABASE mypdb
      USING '/path/to/new_location/mypdb.xml'
      SOURCE_FILE_NAME_CONVERT =('/old/path/','/new/path/') -- Adjust paths as needed
      NOCOPY; -- Use NOCOPY if datafiles are already moved to the target location
    
  • Open the Plugged-In PDB:

    ALTER PLUGGABLE DATABASE mypdb OPEN;
    

    If in a RAC environment, ensure it's opened on all relevant nodes.
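In RAC, the open can be issued for all instances in one statement, and SAVE STATE preserves the open mode across CDB restarts (a sketch):

```sql
ALTER PLUGGABLE DATABASE mypdb OPEN INSTANCES=ALL;
ALTER PLUGGABLE DATABASE mypdb SAVE STATE;
```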

6. Cold Clone from Another PDB (12.2, 18c, 19c)

A cold clone creates a new PDB from an existing PDB while the source PDB is closed or in read-only mode.

-- Open source PDB in read-only mode (required for cold clone)
ALTER PLUGGABLE DATABASE mypdb OPEN READ ONLY FORCE;

-- Create the clone PDB
CREATE PLUGGABLE DATABASE mypdb_test
  FROM mypdb
  FILE_NAME_CONVERT =('/path/to/source_datafiles/','/path/to/clone_datafiles/'); -- Adjust paths

-- Open the cloned PDB
ALTER PLUGGABLE DATABASE mypdb_test OPEN;

(Note: For a Hot Clone, the source PDB does not need to be in read-only mode.)
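For comparison, a hot clone sketch (paths and names are illustrative; the source CDB must use local undo and run in ARCHIVELOG mode):

```sql
-- Source PDB stays OPEN READ WRITE during a hot clone
CREATE PLUGGABLE DATABASE mypdb_hot
  FROM mypdb
  FILE_NAME_CONVERT =('/path/to/source_datafiles/','/path/to/hot_clone_datafiles/');

ALTER PLUGGABLE DATABASE mypdb_hot OPEN;
```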

7. Cold Clone from Non-CDB (12.2, 18c, 19c)

You can convert a non-CDB into a PDB and plug it into a CDB.

  1. Generate XML Manifest from Non-CDB:

    • Connect to the non-CDB as SYSDBA.

    EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/path/to/noncdb.xml');
    
  2. Create PDB in CDB using XML:

    • Move the non-CDB's datafiles and the generated noncdb.xml file to the target CDB's desired location.

    • Connect to the target CDB root.

    CREATE PLUGGABLE DATABASE noncdb_pdb
      USING '/path/to/noncdb.xml'
      SOURCE_FILE_NAME_CONVERT =('/old/noncdb/datafiles/','/new/pdb/datafiles/')
      NOCOPY;
    
  3. Run noncdb_to_pdb.sql: After plugging in, this script must be run inside the new PDB to complete the conversion.

    ALTER PLUGGABLE DATABASE noncdb_pdb OPEN;
    ALTER SESSION SET CONTAINER = noncdb_pdb;
    @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
    

8. Dropping a Pluggable Database (12.2, 18c, 19c)

-- Close the PDB first
ALTER PLUGGABLE DATABASE mypdb CLOSE;

-- Drop the PDB and its associated datafiles
DROP PLUGGABLE DATABASE mypdb INCLUDING DATAFILES;

9. Hot Clone a Remote PDB or Non-CDB (12.2+)

This feature allows cloning a PDB or non-CDB from a remote database over a database link. The source must use local undo mode.

-- Create a database link to the remote source database
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'remotedb_service_name';

-- Create the new PDB as a clone from the remote source
CREATE PLUGGABLE DATABASE newpdb FROM remotedb@db_link;

-- Open the new PDB
ALTER PLUGGABLE DATABASE newpdb OPEN;

-- If cloning from a non-CDB, run the conversion script
-- ALTER SESSION SET CONTAINER = newpdb;
-- @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

10. Relocate PDB (12.2+)

Relocating a PDB moves it from one CDB to another while keeping the PDB online during most of the operation.

-- Create a database link to the source CDB
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'source_cdb_service_name';

-- Create the PDB in the target CDB and initiate relocation
CREATE PLUGGABLE DATABASE remote_pdb_name FROM remote_pdb_name@db_link RELOCATE;

-- Open the relocated PDB in the target CDB
ALTER PLUGGABLE DATABASE remote_pdb_name OPEN;

11. PDB Archive Files (12.2+)

PDB archive files (.pdb extension) are self-contained archives that include both the PDB's XML manifest and its datafiles. This simplifies PDB transport.

-- Unplug the PDB into an archive file
ALTER PLUGGABLE DATABASE mypdb UNPLUG INTO '/backup_location/mypdb.pdb';
mypdb.pdb is an archive containing both the .xml manifest file and the datafiles.

-- Create a PDB from an archive file
CREATE PLUGGABLE DATABASE mypdb_test USING '/backup_location/mypdb.pdb';

12. PDB Refresh (12.2+)

Refreshable PDB clones allow a PDB to be periodically updated from a remote source PDB.

-- Create a database link to the source PDB
CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user_password USING 'remote_pdb_service_name';

-- Create a refreshable PDB clone in manual refresh mode
CREATE PLUGGABLE DATABASE newpdb FROM remote_pdb_name@db_link REFRESH MODE MANUAL;

-- Open the refreshable PDB (it can only be opened read-only)
ALTER PLUGGABLE DATABASE newpdb OPEN READ ONLY;

-- To perform a manual refresh:
ALTER PLUGGABLE DATABASE newpdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE newpdb REFRESH;
ALTER PLUGGABLE DATABASE newpdb OPEN READ ONLY; -- the PDB remains read-only after refresh

  • Auto-refresh:

    ALTER PLUGGABLE DATABASE newpdb REFRESH MODE EVERY 120 MINUTES;
    

    This auto-refresh only occurs if the PDB is closed.

13. Proxy PDB (12.2+)

A Proxy PDB acts as a pointer to a PDB in a remote CDB, allowing local access to a remote PDB without actually moving its datafiles.

  • Benefits:

    • Existing client connections unchanged: Clients can connect to the proxy PDB as if it were local.

    • Single entry point for cloud DB: Simplifies access to remote databases, especially in cloud environments.

    • Share an application root container: Enables sharing of an application root's content across multiple containers.

  • Example (CDB 1 instance):

    -- Create a database link to the remote CDB
    CREATE DATABASE LINK db_clone_link CONNECT TO c##remote_clone_user IDENTIFIED BY remote_clone_user_password USING 'CDB2_SERVICE_NAME';
    
    -- Create the proxy PDB
    CREATE PLUGGABLE DATABASE PDB2_PROXY AS PROXY FROM PDB2@db_clone_link;
    
    -- Verify the proxy PDB
    SELECT pdb_name, is_proxy_pdb, status FROM dba_pdbs WHERE pdb_name = 'PDB2_PROXY';
    

    After the proxy PDB has been created, the database link and link user are no longer required for subsequent access.

  • Accessing the Proxy PDB:

    ALTER SESSION SET CONTAINER=PDB2_PROXY;
    -- Now you are effectively connected to the remote PDB2 via the proxy.
    -- DDL and DML operations performed here will execute on the remote CDB2 instance.
    

14. Snapshot Carousel (PDB Archives) (18c+)

Snapshot Carousel provides a repository for periodic point-in-time copies (snapshots) of a PDB, enabling easy recovery or cloning to a specific point in time.

-- Create a PDB with snapshot mode enabled
CREATE PLUGGABLE DATABASE pdb_snap
  ADMIN USER MY_DBA IDENTIFIED BY "####"
  SNAPSHOT MODE EVERY 24 HOURS; -- Automatically creates a snapshot every 24 hours

-- Open the PDB
ALTER PLUGGABLE DATABASE pdb_snap OPEN;

-- MAX_PDB_SNAPSHOTS can be set between 0 and 8 (setting 0 deletes existing snapshots)
ALTER PLUGGABLE DATABASE pdb_snap SET MAX_PDB_SNAPSHOTS = 5;

-- Manual snapshot creation
ALTER PLUGGABLE DATABASE pdb_snap SNAPSHOT xyz_snap;

-- Create a new PDB from one of the archives (snapshots)
CREATE PLUGGABLE DATABASE pdb_from_snap
  FROM pdb_snap
  USING SNAPSHOT xyz_snap;

15. Transportable Backups (18c+)

This feature supports using backups performed on a PDB before it is unplugged and plugged into a new container. This significantly streamlines PDB relocation for purposes like load balancing or migration between on-premises and cloud, as it avoids the need for new backups immediately before and after each PDB move.
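A hedged sketch of the workflow (the DBMS_PDB call and the RMAN PREPLUGIN clause are per the 18c documentation; verify against your release):

```sql
-- In the source CDB, before unplugging: export the PDB's RMAN backup
-- metadata into the PDB dictionary so it travels with the PDB
EXEC DBMS_PDB.EXPORTRMANBACKUP('MYPDB');

-- After plugging into the target CDB, the pre-plug backups remain usable:
-- RMAN> SET PREPLUGIN CONTAINER = mypdb;
-- RMAN> RESTORE PLUGGABLE DATABASE mypdb FROM PREPLUGIN;
-- RMAN> RECOVER PLUGGABLE DATABASE mypdb FROM PREPLUGIN;
```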

16. Switchover Refreshable Clone PDB between CDBs (Migration) (18c+)

This allows for a planned or unplanned switchover of a refreshable clone PDB between different CDBs, facilitating PDB migration with minimal downtime.

  • Planned Switchover:

    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH MODE EVERY 2 MINUTES FROM remote_pdb_name@dblink SWITCHOVER;
    

    This command prepares the PDB for a switchover, automatically refreshing it.

  • Unplanned Switchover (after a planned switchover setup):

    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH; -- Perform a final refresh
    ALTER PLUGGABLE DATABASE your_pdb_name REFRESH MODE NONE; -- Disable refresh mode
    ALTER PLUGGABLE DATABASE your_pdb_name OPEN READ WRITE; -- Open the PDB in read-write mode
    

17. Transient No-Standby PDBs (Clone) (18c+)

This feature allows creating a hot clone of a PDB without it being replicated to a standby database, useful for temporary testing or development environments.

  • Hot clone to transient PDB:

    CREATE PLUGGABLE DATABASE transient_pdb FROM source_pdb STANDBYS=NONE;
    
  • Cold clone of this transient PDB with standby: You can then create a cold clone of this transient PDB that does include standby replication.

  • Drop transient PDB:

    DROP PLUGGABLE DATABASE transient_pdb INCLUDING DATAFILES;
    

18. AWR for Pluggable Database (12.2 onwards)

Starting from Oracle 12.2, AWR (Automatic Workload Repository) data can be collected at the PDB level, providing granular performance insights for individual PDBs.

  • Enable AWR Auto Flush for PDBs:

    ALTER SYSTEM SET awr_pdb_autoflush_enabled=TRUE;
    
  • View PDB Snapshots:

    SELECT * FROM awr_pdb_snapshot;
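A PDB-level snapshot can also be taken manually, and the standard AWR report script works at PDB scope (a sketch):

```sql
-- While connected to the PDB:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

-- Then generate a report scoped to the PDB:
-- @$ORACLE_HOME/rdbms/admin/awrrpt.sql
```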