Sunday, October 11, 2020

Application Continuity for the Oracle Database

Draining and Rebalancing Sessions for Planned Maintenance

  • Draining of sessions is in wide use with Oracle connection pools and mid-tiers configured for Fast Application Notification (FAN).
  • Starting with Oracle Database 18c, the database itself drains sessions when PDBs and instances are stopped or relocated.
  • Failover solutions such as Application Continuity are the fallback when work will not drain in the time allocated.

Transparent Application Failover (TAF)

  • TAF creates a new session and, in SELECT mode, replays queries on demand to return them to the point reached before the failure occurred
  • 12.2.0.1 -> FAILOVER_RESTORE -> LEVEL1 (default NONE) -> automatically restores the common initial session state before replaying a request
  • 18c onwards -> FAILOVER_TYPE=AUTO sets FAILOVER_RESTORE=AUTO, which is the same as LEVEL1
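For reference, a minimal client-side TAF entry in tnsnames.ora might look like this (host, service name, and retry values are illustrative):

MYDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myscan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = myapp_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 30)(DELAY = 5))
    )
  )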

Application Continuity (AC)

  • Application Continuity hides outages starting with Oracle Database 12.1 for thin Java-based applications, and with Oracle Database 12.2.0.1 for OCI- and ODP.NET-based applications

  • Application Continuity rebuilds the session by recovering it from a known point, which includes session state and transactional state

Transparent Application Continuity (TAC) -> 18c onwards

  • Transparently tracks and records the session and transactional state so the database session can be recovered following recoverable outages
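As a sketch, AC and TAC are enabled on the database service with srvctl; the database and service names below are illustrative (FAILOVER_TYPE=TRANSACTION gives AC, FAILOVER_TYPE=AUTO gives TAC):

# Application Continuity (12c onwards)
srvctl modify service -db mydb -service myapp_svc -failovertype TRANSACTION -commit_outcome TRUE -replay_init_time 600 -failover_restore LEVEL1

# Transparent Application Continuity (18c onwards)
srvctl modify service -db mydb -service myapp_svc -failovertype AUTO -commit_outcome TRUE -replay_init_time 600 -failover_restore AUTO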

https://www.oracle.com/technetwork/database/options/clustering/applicationcontinuity/applicationcontinuityformaa-6348196.pdf

Monday, August 31, 2020

All about oracle-database-preinstall-19c (Oracle Linux 8, 7)

How to download?

  • OEL8 - https://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm
  • OEL7 - https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-2.el7.x86_64.rpm
  • OEL7 (old) - https://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64/getPackage/oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm

How to install?

yum -y localinstall oracle-database-preinstall-19c-1.0-1.el8.x86_64.rpm   # installs the rpm and resolves dependencies
rpm -qa oracle-database-preinstall-19c                                    # verify the installation

What will it do?

  • Create an oracle user
  • Create groups
  • Update kernel parameters


What will happen if the oracle user and groups already exist?

  • It will skip oracle user creation
  • It will append the additional groups to /etc/group
  • It will replace existing kernel parameter values with the recommended values
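A quick sketch to verify what the package changed (kernel settings are typically appended to /etc/sysctl.conf):

rpm -ql oracle-database-preinstall-19c   # files delivered by the package
id oracle                                # oracle user and its groups
tail -20 /etc/sysctl.conf                # kernel parameters set by the package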

Thursday, August 27, 2020

Send email with multiple attachments using PowerShell

# This script takes an input value and emails the attachments whose names match it
# Load the Microsoft.VisualBasic assembly (for the InputBox dialog)
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic')
$title = 'Email My Scripts'
$msg = 'Enter your INPUT :'
$INPUTVALUEvers = [Microsoft.VisualBasic.Interaction]::InputBox($msg, $title)
$INPUTVALUE = $INPUTVALUEvers -replace '\\','_'

###########Define Variables########
$fromaddress = "xyz@gmail.com"
$toaddress = "xyz@gmail.com"
$Subject = "My report $INPUTVALUEvers"
$body = "My reports"
$smtpserver = "my.com.au"

write-host $INPUTVALUE

[array]$attachments = Get-ChildItem -Path "\\My_path\My\*" -Include "MyReport_$INPUTVALUE.html","Mylogon_$INPUTVALUE.txt"

# Splat the parameters (a new variable name so we don't clobber $msg above)
$MailArgs = @{
    To          = $toaddress
    From        = $fromaddress
    Body        = $body
    Subject     = $Subject
    SmtpServer  = $smtpserver
    BodyAsHtml  = $True
    Attachments = $attachments.FullName
}
Send-MailMessage @MailArgs

Monday, August 24, 2020

Cloud backup - oracle - Dell - DDVE

Dell Data Domain Virtual Edition (DDVE): DB workload

Dell Cloud Snapshot Manager (CSM): App workload

Data Domain Management Center (DDMC): Manage multiple DDVE

PowerProtect Data Manager (PPDM): data protection, backup scheduling, deduplication, operational agility, self-service, and IT governance.


  • Set the Data Domain option app-optimized-compression to oracle1
  • Do not specify FILESPERSET in the RMAN backup, to avoid a performance bottleneck
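For context, a rough RMAN sketch of a DD Boost SBT channel; the library path and ENV values are illustrative placeholders, not a definitive configuration:

run {
  allocate channel c1 device type sbt_tape
    parms 'SBT_LIBRARY=/opt/ddboost/lib/libddobk.so, ENV=(STORAGE_UNIT=su1, BACKUP_HOST=ddve.example.com)';
  backup database;   # note: no FILESPERSET clause, per the guidance above
}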

Tuesday, July 28, 2020

All about Logical Volume Management (LVM) (PV, VG, LV)

Logical Volume Management (LVM) creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. With LVM in place, you are no longer constrained by physical disk sizes: the hardware storage is hidden from the software, so volumes can be resized and moved without stopping applications or unmounting file systems.



Physical Volume (PV): a whole disk or a partition of a disk

Volume Group (VG): a storage pool made up of one or more PVs

Logical Volume (LV): a portion of a VG. An LV can belong to only one VG, and it is on an LV that we create a file system.


pvs (pvdisplay, pvscan)
===
  PV         VG                 Fmt  Attr PSize    PFree
  /dev/sda2  osvg               lvm2 a--   <59.00g      0
  /dev/sdb   oracle_application lvm2 a--  <330.00g <25.00g


vgs (vgdisplay, vgscan)
===
  VG                 #PV #LV #SN Attr   VSize    VFree
  oracle_application   1   4   0 wz--n- <330.00g <25.00g
  osvg                 1   5   0 wz--n-  <59.00g      0


lvs (lvdisplay, lvscan)
====
  LV          VG                 Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  app         oracle_application -wi-ao----   5.00g
  gridinfra   oracle_application -wi-ao---- 100.00g
  oracle      oracle_application -wi-ao---- 100.00g
  orasoftware oracle_application -wi-ao---- 100.00g
  homelv      osvg               -wi-ao----   4.00g
  rootlv      osvg               -wi-ao----  19.00g
  swaplv      osvg               -wi-ao---- <16.00g
  tmplv       osvg               -wi-ao----  12.00g
  varlv       osvg               -wi-ao----   8.00g




pvcreate /dev/sdb (pvcreate /dev/sdb /dev/sdc)

vgcreate oracle_application /dev/sdb (vgcreate oracle_application /dev/sdb /dev/sdc)

lvcreate -L 5G -n app oracle_application
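To round off the example, a minimal sketch of creating and later growing a file system on the new LV (the XFS file system and the /u01/app mount point are assumptions):

mkfs.xfs /dev/oracle_application/app                            # file system on the new LV
mkdir -p /u01/app && mount /dev/oracle_application/app /u01/app

# later: grow the LV and resize the file system online in one step
lvextend -r -L +5G /dev/oracle_application/app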


 

Thursday, July 16, 2020

How to find the latest patch in Oracle

Critical Patch Update (CPU) Program Jul 2020 Patch Availability Document (PAD) (Doc ID 2664876.1)

Wednesday, December 18, 2019

Things to know when you do a 19c upgrade


-> autoupgrade.jar is the recommended way to upgrade
-> Check when dictionary and fixed-object stats were last gathered:

column OPERATION format a40
set linesize 200
select to_char(max(END_TIME),'DD-MON-YY hh24:mi') LATEST, OPERATION
from DBA_OPTSTAT_OPERATIONS
where OPERATION in ('gather_dictionary_stats','gather_fixed_objects_stats')
group by OPERATION;


-> Stats on clustered indexes (in case you are not using autoupgrade.jar):

exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_index_stats('SYS','I_OBJ#');
exec dbms_stats.gather_index_stats('SYS','I_FILE#_BLOCK#');
exec dbms_stats.gather_index_stats('SYS','I_TS#');
exec dbms_stats.gather_index_stats('SYS','I_USER#');
exec dbms_stats.gather_index_stats('SYS','I_TOID_VERSION#');
exec dbms_stats.gather_index_stats('SYS','I_MLOG#');
exec dbms_stats.gather_index_stats('SYS','I_RG#');

-> Things to do right after the upgrade:
exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(14);
_cursor_obsolete_threshold=1024
deferred_segment_creation=false
_sql_plan_directive_mgmt_control=0
Set optimizer_adaptive_statistics=FALSE explicitly in your SPFILE
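As a sketch, the settings above can be applied like this (underscore parameters must be double-quoted; the scope is a judgment call):

alter system set "_cursor_obsolete_threshold"=1024 scope=spfile;
alter system set deferred_segment_creation=false scope=both;
alter system set "_sql_plan_directive_mgmt_control"=0 scope=both;
alter system set optimizer_adaptive_statistics=false scope=spfile;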

-> COMPATIBLE (features) and optimizer_features_enable (use latest) - keep both at the latest values

-> Collect execution plans before the upgrade (cursor cache and AWR) [how to?] [SQL Tuning Sets]

-> Compare AWR snapshots (AWRDDRPT.sql). You can export AWR data using the awrextr.sql script in rdbms/admin

-> Capture STS -> load STS (SQL Performance Analyzer)
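A minimal sketch of capturing an STS from the cursor cache before the upgrade (the STS name and the capture window are assumptions):

exec DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_PRE_UPGRADE');
-- poll the cursor cache for 1 hour, sampling every 5 minutes
exec DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLS(sqlset_name => 'STS_PRE_UPGRADE', time_limit => 3600, repeat_interval => 300);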

-> SPM (plan retention defaults to 53 weeks - dbms_spm.configure('plan_retention_weeks',5); space budget - dbms_spm.configure('space_budget_percent',5))
   Baseline capture -> optimizer_capture_sql_plan_baselines=TRUE (start recording)
   Selection -> optimizer_use_sql_plan_baselines=TRUE, optimizer_capture_sql_plan_baselines=FALSE
   Evolution -> dbms_spm.report_auto_evolve_task, DBMS_SPM.CREATE_EVOLVE_TASK

-> SQL Tuning Advisor

-> export/import STS to new DB  

-> DBMS_SPM.LOAD_PLANS_FROM_SQLSET
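For example, loading the captured plans as baselines on the upgraded database (STS name as in the sketch above):

set serveroutput on
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
         sqlset_name  => 'STS_PRE_UPGRADE',
         sqlset_owner => 'SYS');
  DBMS_OUTPUT.PUT_LINE(n || ' plans loaded');
END;
/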

-> Capture workload, replay workload (compare)

-> SPA is part of Real Application Testing (licensed); SPM is a free feature

-> Automatic SPM - 19c on Exadata

Tuesday, December 3, 2019

How to upgrade DB from 12.1 to 19.3

1. Install oracle software for the database on Oracle Home 19.3.0
2. Remove obsolete init parameters and other deprecated settings
3. Stop the listener
4. Gather stats for dictionary and fixed objects
EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
5. Empty recycle bin - purge dba_recyclebin;
6. Check whether we are using the right time zone file; update to the latest time zone file if needed.
7. Run Pre-Upgrade Information Tool
(12.1 Home)/jdk/bin/java -jar (19.3 Home)/rdbms/admin/preupgrade.jar FILE TEXT DIR /home/oracle/upgrade
execute preupgrade_fixups.sql
8. Stop the 2nd RAC instance, disable the cluster, and stop the 1st instance
9. Set the environment variables for the 19c home
10. Start DB in upgrade mode - startup upgrade
11. invoke DB upgrade - (19.3 Home)/bin/dbupgrade

Wednesday, October 9, 2019

Wednesday, September 25, 2019

Oracle 19C new features

Real-time Statistics In 19c (Only Exadata)

This feature is only available on Exadata and Exadata Cloud Service.

Oracle Database 19c introduces real-time statistics, which extend online statistics gathering to conventional DML statements. Statistics can now be collected 'on the fly' during conventional DML operations.

Oracle introduced new parameters:

"_optimizer_gather_stats_on_conventional_dml" and "_optimizer_use_stats_on_conventional_dml", which are true by default
"_optimizer_stats_on_conventional_dml_sample_rate", which defaults to 100%

Because "_optimizer_gather_stats_on_conventional_dml" defaults to true, real-time stats kick in automatically. Set these parameters to FALSE to disable the feature.


USER_TABLES -> NUM_ROWS (no changes)
USER_TAB_STATISTICS -> notes column -> STATS_ON_CONVENTIONAL_DML
USER_TAB_COL_STATISTICS -> notes column -> STATS_ON_CONVENTIONAL_DML

Direct Path INSERT ... SELECT -> No effect
Delete -> No effect

Gathering statistics for a table will wipe out the real-time statistics
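A quick way to see real-time statistics in action (the table name T1 is an assumption):

select table_name, num_rows, notes
from   user_tab_statistics
where  table_name = 'T1';
-- rows maintained by conventional DML show NOTES = 'STATS_ON_CONVENTIONAL_DML'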

High-Frequency Automatic Optimizer Statistics


EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');

EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');

Min value allowed is 60 and max is 900 (seconds).


SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS') AS BEGIN_TIME,
       TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
       TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
FROM   DBA_AUTO_STAT_EXECUTIONS
ORDER BY OPID;



Validate spfile parameters for Primary and Standby


DGMGRL> VALIDATE DATABASE {database-name} SPFILE



Schema Only Accounts

It's a common practice to restrict direct access to a schema owner, preventing people from accessing it with shared credentials. Instead, schema changes are made via proxy connections, allowing you to audit which users performed which tasks (proxy authentication has been available since 9i).

The NO AUTHENTICATION clause creates the account without a password, so proxy users can still connect through it even though the schema owner itself cannot log in directly.
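A minimal sketch (user names are illustrative):

CREATE USER app_owner NO AUTHENTICATION;            -- schema-only account, no direct login
GRANT CREATE SESSION, CREATE TABLE TO app_owner;
ALTER USER app_owner GRANT CONNECT THROUGH scott;   -- proxy access for a named user

-- connect through the proxy: sqlplus scott[app_owner]/tiger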


Automatic replication of restore points from Primary to standby

The restore point is replicated through the redo stream (applied by the MRP process), so the primary should be in open mode


DML Operations on Active Data Guard Standby Databases

Enable DML redirect on the primary and the standby; run this on both:

alter system set adg_redirect_dml=true scope=both;


We need to connect using username/password; " / as sysdba" will not work

Inline External Table - inline_ext_tab (zero DDL)

There is no need to explicitly create an external table.

In the example below, MY.txt is an external file with three fields.


SELECT *
FROM EXTERNAL (
  (
    object_id   NUMBER,
    owner       VARCHAR2(128),
    object_name VARCHAR2(128)
  )
  TYPE oracle_loader
  DEFAULT DIRECTORY MY_DIR
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    BADFILE MY_DIR
    LOGFILE MY_DIR:'inline_ext_tab_as_%a_%p.log'
    DISCARDFILE MY_DIR
    FIELDS CSV WITH EMBEDDED TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL (
      object_id,
      owner,
      object_name
    )
  )
  LOCATION ('MY.txt')
  REJECT LIMIT UNLIMITED
) inline_ext_tab
ORDER BY 1;

Wednesday, July 31, 2019

Udev rules - SYMLINK - Device Persistence - Oracle ASM - Linux

Note: This is just an example for one of the disks


1) Present Raw disks

2) Get the SCSI ID of the disk

/lib/udev/scsi_id -g -u -d /dev/sda

3600224800cbc991b76c2a957f833fc66

3) Update /etc/udev/rules.d/99-asm.rules

KERNEL=="sd*",SUBSYSTEM=="block",ENV{DEVTYPE}=="disk",ENV{ID_SERIAL}=="3600224800cbc991b76c2a957f833fc66",SYMLINK+="asmdatadisk1",OWNER="grid",GROUP="asmadmin",MODE="0660"

### Below might not work for cloud DBs

#ACTION=="add|change", ENV{ID_SCSI_SERIAL}=="3600224800cbc991b76c2a957f833fc66", SYMLINK+="asmdatadisk1",  OWNER="grid", GROUP="asmadmin", MODE="0660"

4) Reload udev rules without reboot

udevadm control --reload-rules && udevadm trigger
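To confirm the rule worked (device names as per the rule above):

ls -l /dev/asmdatadisk1   # symlink pointing at the matching device (e.g. sdb)
ls -l /dev/sdb            # owner grid, group asmadmin, mode 0660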

Tuesday, June 25, 2019

Flashback Data Archive (Oracle Total Recall)

With Oracle 11g, FDA had to be managed at the table level. Starting with Oracle 12c, it is possible to create logical groups of tables for an application: enabling or disabling FDA is performed at the application level, which avoids listing every application table to enable or disable FDA one by one.


11G

===

STEP 1: Create a tablespace for FDA

CREATE tablespace FDA datafile '+DATA_DG' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30G;

STEP 2: Create flashback archive

CREATE FLASHBACK ARCHIVE DEFAULT <fba_name> TABLESPACE FDA QUOTA 10G RETENTION 1 YEAR;

STEP 3: Enable FDA on respective table

ALTER TABLE <table_name> FLASHBACK ARCHIVE <fba_name>;

STEP 4: Check FDA is enabled on respective table

SELECT owner_name,table_name,flashback_archive_name,archive_table_name,status FROM dba_flashback_archive_tables ORDER BY owner_name, table_name;


12C

===

STEP 1: Create a tablespace for FDA

CREATE tablespace FDA datafile '+DATA_DG' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 30G;

STEP 2: Create flashback archive

CREATE FLASHBACK ARCHIVE FDA_NAME TABLESPACE FDA QUOTA 10G RETENTION 1 YEAR;
grant FLASHBACK ARCHIVE on FDA_NAME to XYZ;
grant FLASHBACK ARCHIVE ADMINISTRATOR to XYZ;


Step 3 : Enable FDA on respective table

ALTER TABLE TEST FLASHBACK ARCHIVE FDA_NAME;

Step 4: To make sure the context information is stored with the transaction data, use the DBMS_FLASHBACK_ARCHIVE.SET_CONTEXT_LEVEL procedure:

begin
  dbms_flashback_archive.set_context_level(level => 'ALL');
end;
/



Create a group and add a table

============================

exec dbms_flashback_archive.register_application(application_name => 'FDA_APP', flashback_archive_name => 'FDA_NAME');
exec dbms_flashback_archive.add_table_to_application(application_name => 'FDA_APP', table_name => 'TEST', schema_name => 'XYZ');



Enable/Disable group

============================
exec dbms_flashback_archive.disable_application(application_name => 'FDA_APP');
exec dbms_flashback_archive.enable_application(application_name => 'FDA_APP');

Friday, May 31, 2019

CDB/PDB - 12.2, 18c, 19c new features

Create CDB
==========
CREATE DATABASE ... ENABLE PLUGGABLE DATABASE;
It will create CDB$ROOT and PDB$SEED
SELECT NAME, CDB, CON_ID FROM V$DATABASE;

Create Application root
=======================
connect to the CDB root
CREATE PLUGGABLE DATABASE .. AS APPLICATION CONTAINER;

Create Application seed
=======================
make sure that the current container is the application root
CREATE PLUGGABLE DATABASE AS SEED

Create a PDB from Seeds
=========================

- Alter session set container=cdb$root;
- Create pluggable database mypdbnew ADMIN user MY_DBA identified by #### storage (...) default tablespace mypdbnew_tbs datafile '..' size 100M   file_name_convert =('...','...');
- Alter pluggable database mypdbnew open;
- select con_id,username,default_tablespace,common from cdb_users;

Move PDB/Plugging In a PDB
==========================
- Alter pluggable database mypdb close immediate;
- alter pluggable database mypdb unplug into '/backup_location/';

It will create the mypdb.xml file
Move the files (system.dbf, sysaux.dbf, mypdb.dbf, mypdb.xml) to the new location
- Check compatibility (e.g. DBMS_PDB.CHECK_PLUG_COMPATIBILITY)
- create pluggable database mypdb using '/../mypdb.xml' SOURCE_FILE_NAME_CONVERT=('..','..') NOCOPY;
- Alter pluggable database mypdb open; (if RAC -> on both nodes)

Cold clone from another PDB
========================
 - Alter pluggable database mypdb open read only force;  [Hot clone - we don't require this step]
 - Create pluggable database mypdb_test from mypdb file_name_convert =('...','...');
 - Alter pluggable database mypdb_test open;

Cold clone from a non-CDB
========================
- Run DBMS_PDB.DESCRIBE to generate the XML
- Create pluggable database 
- run noncdb_to_pdb.sql

Drop pluggable database
========================
- Alter pluggable database mypdb close;
- drop pluggable database mypdb including datafiles;

Hot Clone a Remote PDB or Non-CDB (12.2) 
===========================
-- Source must use local undo mode

- CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user USING 'remotedb';
- CREATE PLUGGABLE DATABASE newpdb FROM remotedb@db_link;
- Alter pluggable database newpdb open;
- run noncdb_to_pdb.sql (If it's from non-cdb)

Relocate PDB (12.2)
=====================
- CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user USING 'remotedb';
- CREATE PLUGGABLE DATABASE remotedb FROM remotedb@db_link relocate;
- Alter pluggable database remotedb open;

PDB archive files (12.2)
===============

alter pluggable database mypdb unplug into '/backup_location/mypdb.pdb';
mypdb.pdb is an archive and it contains .xml file and datafiles.
Create pluggable database mypdb_test using '/backup_location/mypdb.pdb';


PDB refresh (12.2)
===============

- CREATE DATABASE LINK db_link CONNECT TO remote_user IDENTIFIED BY remote_user USING 'remotedb';
- CREATE PLUGGABLE DATABASE newpdb FROM remotedb@db_link refresh mode manual;
- Alter pluggable database newpdb open read only;
- Alter pluggable database newpdb close immediate;
- Alter pluggable database newpdb refresh;
- Alter pluggable database newpdb open read only;

Auto-refresh -> Alter pluggable database newpdb refresh mode every 120 minutes; (refresh happens only while the PDB is closed)


PROXY PDB (12.2)
===============

Benefits:
- Existing client connections are unchanged
- A single entry point for cloud DBs
- Share an application root container between multiple containers

-- CDB 1 instance 

- CREATE DATABASE LINK db_clone_link CONNECT TO c##remote_clone_user IDENTIFIED BY remote_clone_user USING 'CDB2'; 
- CREATE PLUGGABLE DATABASE PDB2_PROXY as PROXY FROM PDB2@db_clone_link
- Select pdb_name,is_proxy_pdb,status from dba_pdbs;

--No longer DB link & link user required

- alter session set container=PDB2_proxy;

-- it will show CDB2 instance
-- create table XYZ (DDL & DML)

-- CDB 3 instance 

- alter session set container=PDB2;

-- it will show the XYZ table


Snapshot carousel (PDB archives) (18C)
================================

It is a repository for periodic point-in-time copies of a PDB.

- Create pluggable database pdb_snap ADMIN user MY_DBA identified by #### snapshot mode every 24 hours;
- Alter pluggable database pdb_snap open;

-- MAX_PDB_SNAPSHOTS can be set between 0 and 8 (0 deletes the existing snapshots)

- Alter pluggable database snapshot xyz_snap; (manual snap creation)

-- new PDB from the archives

- Create pluggable database pdb_from_snap  from pdb_snap using snapshot xyz_snap;


Transportable Backups (18C)
==================

Supports the use of backups performed on a PDB before it is unplugged and plugged into a new container. This facilitates agile relocation of PDBs between CDBs, for example for load balancing or for migration between on-premises and cloud, without requiring backups to be taken immediately before and after each PDB relocation.
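As a sketch, after plugging the PDB into the new CDB, the pre-plugin backups can be used roughly like this (the PDB name is illustrative):

RMAN> restore pluggable database mypdb from preplugin;
RMAN> recover pluggable database mypdb from preplugin;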

Switchover Refreshable clone PDB between CDB's (migration) (18C)
========================================  

   Planned:
          alter pluggable database XXXX refresh mode every 2 minutes from remotedb@dblink switchover;
   Unplanned:
          alter pluggable database XXXX refresh;
          alter pluggable database XXXX refresh mode none;
          alter pluggable database XXXX open read write;

Transient no-standby PDBs (clone) (18C)
=========================

          Hot clone to a transient PDB using the STANDBYS=NONE clause
          Cold clone of this transient PDB with standbys
          Drop the transient PDB

12.2 onwards AWR for Pluggable Database
==================================
alter system set awr_pdb_autoflush_enabled=true;
select * from awr_pdb_snapshot;

Wednesday, May 29, 2019

Avamar RMAN backup generates a large amount of trace files

1. Avamar team needs to set 'TRACE 0' from GUI.

2. DBA needs to set the following event : (need DB restarts to take effect)

Alter system set event='trace[krb.*] disk disable, memory disable' scope=spfile sid='*';

or, as a temporary fix on the live instance without a DB restart:

alter system set events 'trace[krb.*] disk disable, memory disable';

Sunday, March 24, 2019

TDE (Transparent Data Encryption)

Steps to implement TDE on an Oracle 12c standalone DB.

1. Make Wallet directory

mkdir -p /test/WALLET

2. Modify sqlnet.ora

in "/network/admin"

ENCRYPTION_WALLET_LOCATION =
  (SOURCE =(METHOD = FILE)(METHOD_DATA =
    (DIRECTORY = /test/WALLET)))

copy this sqlnet.ora to "$ORACLE_HOME/network/admin" (the DB home) as well

Optional: Create a credential store using mkstore

$ORACLE_HOME/oracle_common/bin/mkstore -wrl <wallet_location> -createCredential <db_connect_string> <username>

3. Create Keystore (12c specific, syntax diff for previous versions)

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/test/WALLET/' IDENTIFIED BY ######;

Check if following file is created

$ ls -ltr /test/WALLET/
total 4
-rw------- 1 oracle asmadmin 2555 Mar 28 09:34 ewallet.p12


4. open Keystore

ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY ######;

Make sure you have copied sqlnet.ora from grid home to oracle db home

5. Now activate the key

SET LINESIZE 100
SQL> SELECT con_id, key_id FROM v$encryption_keys;

no rows selected

SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY ###### WITH BACKUP;

keystore altered.

SQL> SELECT con_id, key_id FROM v$encryption_keys;

    CON_ID KEY_ID
---------- ------------------------------------------------------------------------------
         0 TTTTTTTTTTTTTTTTGfeeeeeeeeeeeeeeee

SQL>


6. Enable Autologin

column WRL_PARAMETER format a30
column WRL_TYPE format a10
column STATUS format a10


SQL> SELECT * FROM v$encryption_wallet;

WRL_TYPE   WRL_PARAMETER                  STATUS     WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
---------- ------------------------------ ---------- -------------------- --------- --------- ----------
FILE       /test/WALLET/   OPEN       PASSWORD             SINGLE    NO                 0

SQL>


We can see that the wallet type is "PASSWORD". That means every time we restart the database, we need to open the wallet with the password.

To avoid this, we can enable autologin:

ADMINISTER KEY MANAGEMENT CREATE AUTO_LOGIN KEYSTORE FROM KEYSTORE '/test/WALLET/' IDENTIFIED BY ######;

Then restart database and check again. This time WALLET_TYPE should be AUTOLOGIN.

You should see 2 files in wallet location as below

-rw------- 1 oracle asmadmin 3995 Mar 28 09:41 ewallet.p12
-rw------- 1 oracle asmadmin 4040 Mar 29 12:16 cwallet.sso

If we check now, WALLET_TYPE will be AUTOLOGIN


WRL_TYPE   WRL_PARAMETER                  STATUS     WALLET_TYPE          WALLET_OR FULLY_BAC     CON_ID
---------- ------------------------------ ---------- -------------------- --------- --------- ----------
FILE       /test/WALLET/   OPEN       AUTOLOGIN            SINGLE    NO                 0




7. Create encrypted tablespace

CREATE TABLESPACE TDE_TEST DATAFILE
  '+DATA_DG' SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 8192M
LOGGING
ONLINE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO
ENCRYPTION USING 'AES256'
DEFAULT STORAGE(ENCRYPT)
FLASHBACK ON;

Now you can move tables/indexes into this tablespace as usual.
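A minimal sketch (schema and object names are illustrative):

ALTER TABLE app.customers MOVE TABLESPACE TDE_TEST;
ALTER INDEX app.customers_pk REBUILD TABLESPACE TDE_TEST;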

Thursday, February 28, 2019

All you want to know about CDBs, PDBs and multitenant databases


Why should we use multitenant databases?

  • Rapid provisioning (Via clones)
  • Online PDB relocate (using DB link)
  • Sharing background process
  • Sharing SGA
  • Minimize Capex and Opex 
  • Single Backup for CDB
  • Automatic standby (a single standby covers all PDBs)
  • Perform rapid upgrades
  • Reference data in a different CDB
  • Isolate grants within PDBs
  • Refreshable clone - read-only clone that can periodically synchronize with its source PDB
  • Snapshot copy PDB - this PDB cannot be unplugged from the CDB root or application root


  • CDB -> container database -> DBA point of view 
  • PDB -> pluggable database -> Application point of view 
  • Many PDBs can be plugged into a single Multitenant CDB.

  • The system container includes the root CDB and all PDBs in the CDB
  • An application container consists of exactly one application root, and the PDBs plugged into this root.
  • If a PDB belongs to an application container, then it is an application PDB. An application seed is an optional application PDB.

  • A CDB administrator manages the CDB itself.
  • An application container administrator manages the application container
  • An application PDB administrator manages the PDB's within the application container
  • A PDB administrator manages a PDB outside an application container

How do hot clones work?
  • Apply redo to catch up with the source, then apply undo to roll back uncommitted transactions.

Seed -> system,sysaux,temp,undo


CDB$ROOT (Oracle system metadata only - exactly one) -> PDB$SEED (template - cannot be dropped), PDBs

- SYS is a common user in the CDB. Every PDB is owned by SYS.
- By default, a user connected to one PDB must use database links to access objects in a different PDB
- To open the PDB, you must start the respective service.
- To stop the PDB: ALTER PLUGGABLE DATABASE PDB_NAME CLOSE IMMEDIATE;

  

COL NAME FORMAT A15
show con_name
SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
select con_id,username,default_tablespace,common from cdb_users;
Select PDB_NAME from DBA_PDBS;


PDB/CDB new features (12.2, 18c, 19c) 





Wednesday, February 27, 2019

Oracle architecture Diagram [ Multitenant and Single tenant]


Oracle 12c:

Single tenant Architecture Diagram :
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/pdf/Database%20Architecture.pdf



Multitenant Architecture Diagram :
https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/12c/r1/poster/OUTPUT_poster/pdf/Multitenant%20Architecture.pdf

Friday, January 4, 2019

Recreate a LOB table or big table in PARALLEL


Source table : APP_USER.APP_TABLE1
Target table : APP_USER.APP_TABLE2

Step 1: Create a sequence Myuser.T_SQ and a logging table Myuser.job_time(t_id, t_name, t_type, t_time) [just to log timings]

Step 2: Create a procedure

CREATE OR REPLACE PROCEDURE Myuser.my_proc as
v_seq number;
Begin
execute immediate 'Alter session set parallel_force_local=TRUE';
execute immediate 'Alter session enable parallel dml';
select Myuser.T_SQ.nextval into v_seq from dual;
insert into Myuser.job_time (t_id,t_name,t_type,t_time) values(v_seq,'APP_TABLE1','START',sysdate);
commit;
INSERT /*+ APPEND PARALLEL(A,60) */ into APP_USER.APP_TABLE2 A
select /*+ PARALLEL(B,60) */ * from APP_USER.APP_TABLE1 B ;
commit;
insert into Myuser.job_time (t_id,t_name,t_type,t_time) values(v_seq,'APP_TABLE1','END',sysdate);
commit;
End;
/

Step 3: Execute the procedure
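For example, from SQL*Plus:

exec Myuser.my_proc;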




Tuesday, January 1, 2019

Good motivational books to read 2019

The Monk Who Sold His Ferrari - Author: Robin Sharma

No Excuses: The Power of Self-Discipline - Author: Brian Tracy

Wednesday, November 28, 2018

Query execution taking a long time in Oracle (parse bug)

Issue Description:
-------------------------
In our environment, a query was taking anywhere from 3s to 50s.
After further research, we found that parsing was consuming most of the time.
Due to bad application coding, the query had to be hard parsed most of the time.

Resolution:
---------------------
The issue was related to a parsing bug; below is the workaround.

alter system set "_fix_control"='13836796:OFF' scope=both sid='*';
alter system set "_optimizer_cost_based_transformation"=off scope=both sid='*';

Note: You should avoid setting hidden parameters without consulting Oracle Support

Sunday, September 30, 2018

Parallel DDL and DML (create table as select)

For DDL :

create table APP_SCHEMA2.APP_TABLE2  parallel 15 as select  /*+ PARALLEL(A,15) */ * from  APP_SCHEMA1.APP_TABLE1 A ;

For DML :

Alter session enable parallel dml;

INSERT /*+ APPEND PARALLEL(A,60) */ into APP_USER2.APP_TABLE2 A
select /*+ PARALLEL(B,60) */ * from APP_USER1.APP_TABLE1 B ;

Tuesday, September 11, 2018

Best way to make expdp/impdp faster

 EXPDP
============
=> Use a higher PARALLEL value, as per the allocated CPUs
=> Exclude stats 
IMPDP
===========
=> Create a SQL file for triggers and indexes, and change PARALLEL 1 to a higher degree (as per your CPU)
=> Import metadata only (exclude index,stats)
=> Generate script to disable PK,FK constraints ( Do not generate script for check constraint)
=> Disable PK,FK constraints ( Do not disable check constraint)
=> import data_only ( parallel depends on CPU and dump files)
=> Create indexes and triggers in parallel (this will also generate the PK indexes)
=> enable PK and FK constraints 
=> gather stats (or import stats)
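A rough sketch of the command lines (directory object, dump file names, and parallel degree are assumptions):

expdp system DIRECTORY=DP_DIR DUMPFILE=full%U.dmp LOGFILE=full_exp.log FULL=Y PARALLEL=8 EXCLUDE=STATISTICS

impdp system DIRECTORY=DP_DIR DUMPFILE=full%U.dmp CONTENT=METADATA_ONLY EXCLUDE=INDEX,STATISTICS
impdp system DIRECTORY=DP_DIR DUMPFILE=full%U.dmp CONTENT=DATA_ONLY PARALLEL=8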



Sunday, September 9, 2018

Microsoft SQL Server version number


SQL Server 2019 -> 15.0.X.X
SQL Server 2017 -> 14.0.X.X
SQL Server 2016 -> 13.0.X.X
SQL Server 2014 -> 12.0.X.X
SQL Server 2012 -> 11.0.X.X
SQL Server 2008 R2 -> 10.50.X.X
SQL Server 2008 -> 10.0.X.X
SQL Server 2005 -> 9.0.X.X


Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG 
Select @@version 
SELECT
SERVERPROPERTY('ProductLevel') AS ProductLevel,
SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel,
SERVERPROPERTY('ProductBuildType') AS ProductBuildType,
SERVERPROPERTY('ProductUpdateReference') AS ProductUpdateReference,
SERVERPROPERTY('ProductVersion') AS ProductVersion,
SERVERPROPERTY('ProductMajorVersion') AS ProductMajorVersion,
SERVERPROPERTY('ProductMinorVersion') AS ProductMinorVersion,
SERVERPROPERTY('ProductBuild') AS ProductBuild
GO

More details :