Overview
Create a pluggable database with dbca
Invoke dbca :
Manage Pluggable Database
Create a Pluggable Database
Create a new Pluggable Database
Summary:
Container Database: gract1
Pluggable Database: PDB1
Pluggable database source: Default
Datafile location: Use Oracle Managed Files
Configure Database Vault: No
Configure Label Security: No
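If you prefer scripting over the GUI, the same PDB can be created with dbca in silent mode. A sketch only - flag names taken from the 12c dbca command-line help, values from the summary above; verify against dbca -help on your release:

```shell
dbca -silent -createPluggableDatabase \
     -sourceDB gract1 \
     -pdbName PDB1 \
     -createPDBFrom DEFAULT \
     -pdbAdminUserName pdb_admin
```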
Alert.log :
Fri Oct 04 16:37:29 2013
CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY * ROLES=(CONNECT) file_name_convert=NONE
Fri Oct 04 16:40:25 2013
****************************************************************
Pluggable Database PDB1 with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
This instance was first to open pluggable database PDB1 (container=3)
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#16 to file$(old file#5)
Adding new file#17 to file$(old file#7)
Successfully created internal service pdb1 at open
ALTER SYSTEM: Flushing buffer cache inst=1 container=3 local
Fri Oct 04 16:40:35 2013
****************************************************************
Post plug operations are now complete.
Pluggable database PDB1 with pdb id - 3 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE PDB1 ADMIN USER pdb_admin IDENTIFIED BY * ROLES=(CONNECT) file_name_convert=NONE
alter pluggable database PDB1 open instances=all
Fri Oct 04 16:40:35 2013
This instance was first to open pluggable database PDB1 (container=3)
Pluggable database PDB1 dictionary check beginning
Pluggable Database PDB1 Dictionary check complete
Fri Oct 04 16:40:48 2013
Opening pdb PDB1 (3) with no Resource Manager plan active
XDB installed.
XDB initialized.
Fri Oct 04 16:41:01 2013
Pluggable database PDB1 opened read write
Completed: alter pluggable database PDB1 open instances=all
CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE SIZE 5M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE SIZE 5M AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO
ALTER DATABASE DEFAULT TABLESPACE "USERS"
Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
Fri Oct 04 16:41:23 2013
Shared IO Pool defaulting to 64MB. Trying to get it from Buffer Cache for process 6453.
Check the current instance and storage status of a PDB
Instances running for our CDB and its PDBs
SQL> select INST_ID, INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, DATABASE_STATUS, con_id from gv$instance;
INST_ID INSTANCE_NUMBER INSTANCE_NAME HOST_NAME DATABASE_STATUS CON_ID
---------- --------------- ---------------- -------------------- ----------------- ----------
1 1 cdb1 gract1.example.com ACTIVE 0
3 3 cdb3 gract3.example.com ACTIVE 0
2 2 cdb2 gract2.example.com ACTIVE 0
--> Our CDB is a 3 node RAC cluster
PDBs don't run their own instances - they use the CDB instances
How many PDBs are hosted by our CDB ?
SQL> select * from v$pdbs;
CON_ID DBID CON_UID GUID NAME OPEN_MODE RES OPEN_TIME CREATE_SCN TOTAL_SIZE
---------- ---------- ---------- -------------------------------- ---------- ---------- --- ------------------------- ---------- ----------
2 4097414398 4097414398 FFE2B5D067357B55E0436F01A8C0F0F0 PDB$SEED READ ONLY NO 05-AUG-14 03.32.33.866 PM 1720758 283115520
3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO 05-AUG-14 03.39.23.762 PM 1755988 288358400
--> We have 2 PDBs - PDB1 is a user-created PDB
What is the status of our PDBs ?
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED from gv$pdbs where NAME='PDB1';
INST_ID CON_ID DBID CON_UID GUID NAME OPEN_MODE RES
---------- ---------- ---------- ---------- -------------------------------- ------ ---------- ---
3 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
2 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
--> PDB1 is running on all 3 RAC nodes ( cdb1,cdb2,cdb3 )
Storage
SQL> select file_name, con_id from cdb_data_files;
FILE_NAME CON_ID
---------------------------------------------------------------------------------- ----------
+DATA/CDB/DATAFILE/system.273.854810097 1
+DATA/CDB/DATAFILE/sysaux.283.854809845 1
+DATA/CDB/DATAFILE/undotbs1.279.854810307 1
+DATA/CDB/DATAFILE/users.295.854810303 1
+DATA/CDB/DATAFILE/undotbs2.294.854810989 1
+DATA/CDB/DATAFILE/undotbs3.286.854810995 1
+DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/system.275.854810401 2
+DATA/CDB/DD7C48AA5A4404A2E04325AAE80A403C/DATAFILE/sysaux.281.854810401 2
+DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/system.272.854811641 3
+DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/sysaux.278.854811821 3
+DATA/CDB/FFE30B05B94B1D25E0436F01A8C05EFE/DATAFILE/users.270.854812071 3
--> PDB1 ( con_id = 3 ) has a system, sysaux and users tablespace
PDB$SEED ( con_id = 2 ) has a system and sysaux tablespace only
root container ( con_id = 1 ) has a system, sysaux, undotbs1, undotbs2, undotbs3 and a users tablespace
Note: Each RAC node has its own undo tablespace
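The same breakdown can be cross-checked from the tablespace side; a minimal sketch using the CDB_TABLESPACES view:

```sql
-- list tablespaces per container (run from CDB$ROOT)
select con_id, tablespace_name
from   cdb_tablespaces
order  by con_id, tablespace_name;
```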
Start and stop a pluggable database in a RAC cluster
Start up and shut down a PDB
SQL> connect sys/sys@gract-scan.grid12c.example.com:/cdb as sysdba
Connected.
Close the PDB on a single instance
SQL> alter pluggable database PDB1 close instances=('cdb1');
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED from gv$pdbs where NAME='PDB1';
INST_ID CON_ID DBID CON_UID GUID NAME OPEN_MODE RES
---------- ---------- ---------- ---------- --------------------- ---------- ---------- ---
3 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 MOUNTED
2 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
Close the PDB on two instances
SQL> alter pluggable database PDB1 close instances=('cdb2','cdb3');
Pluggable database altered.
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED from gv$pdbs where NAME='PDB1';
INST_ID CON_ID DBID CON_UID GUID NAME OPEN_MODE RES
---------- ---------- ---------- ---------- ----------------------- ---------- ---------- ---
1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 MOUNTED
3 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 MOUNTED
2 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 MOUNTED
Open the PDB on two instances
SQL> alter pluggable database PDB1 open instances=('cdb1','cdb3');
Pluggable database altered.
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED from gv$pdbs where NAME='PDB1';
INST_ID CON_ID DBID CON_UID GUID NAME OPEN_MODE RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
2 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 MOUNTED
1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
3 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
Open the PDB on the remaining instance
SQL> alter pluggable database PDB1 open instances=('cdb2');
Pluggable database altered.
SQL> select INST_ID, CON_ID, DBID, CON_UID, GUID, NAME, OPEN_MODE, RESTRICTED from gv$pdbs where NAME='PDB1';
INST_ID CON_ID DBID CON_UID GUID NAME OPEN_MODE RES
---------- ---------- ---------- ---------- -------------------------------- ---------- ---------- ---
1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
3 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
2 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE PDB1 READ WRITE NO
Open and close the PDB on all instances
SQL> alter pluggable database PDB1 open instances=all;
SQL> alter pluggable database PDB1 close instances=all;
Close a PDB with the immediate option
SQL> alter pluggable database PDB1 close immediate instances=('cdb3');
Verify that pdb1 is registered with the listeners:
$ lsnrctl status
..
Service "pdb1" has 1 instance(s).
Instance "gract1_1", status READY, has 1 handler(s) for this service...
The command completed successfully
--> The local listener only knows the PDB service running on the local instance
$ lsnrctl status LISTENER_SCAN1
Service "pdb1" has 3 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Instance "cdb2", status READY, has 1 handler(s) for this service...
Instance "cdb3", status READY, has 1 handler(s) for this service...
--> SCAN listener knows all PDB services currently running on instances cdb1,cdb2 and cdb3
Connect to a PDB instance
Connect to the PDB service using EZConnect
Find the SCAN address and connect to a service
[root@gract1 Desktop]# srvctl config scan
SCAN name: gract-scan.grid12c.example.com, Network: 1
Subnet IPv4: 192.168.1.0/255.255.255.0/eth1
...
[root@gract1 Desktop]# ping gract-scan.grid12c.example.com
PING gract-scan.grid12c.example.com (192.168.1.186) 56(84) bytes of data.
64 bytes from 192.168.1.186: icmp_seq=1 ttl=64 time=0.025 ms..
..
Check the status of the service we want to connect to:
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc ONLINE ONLINE gract1 STABLE
ora.cdb.hr.svc ONLINE ONLINE gract2 STABLE
Testing load balancing by connecting to the SCAN address and using service hr
SQL> connect system/sys@gract-scan.grid12c.example.com:/hr
SQL> select 'Connect Info - Host: ' || sys_context('USERENV', 'SERVER_HOST') || ' PDB: ' || sys_context('USERENV', 'CON_NAME') || ' CDB_instance: '
|| sys_context('USERENV', 'INSTANCE_NAME') || ' Service: ' || sys_context('USERENV', 'SERVICE_NAME') PDB_CONNECT_INFO from sys.dual;
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract1 PDB: PDB1 CDB_instance: cdb1 Service: hr
SQL> connect system/sys@gract-scan.grid12c.example.com:/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract2 PDB: PDB1 CDB_instance: cdb2 Service: hr
SQL> connect system/sys@gract-scan.grid12c.example.com:/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract1 PDB: PDB1 CDB_instance: cdb1 Service: hr
--> Load balancing between nodes ( gract1,gract2 ) running service hr works fine
Testing dedicated server connections to node gract1,gract2,gract3
SQL> connect system/sys@gract1:1521/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract1 PDB: PDB1 CDB_instance: cdb1 Service: hr
SQL> connect system/sys@gract2:1521/hr
PDB_CONNECT_INFO
--------------------------------------------------------------------------------
Connect Info - Host: gract2 PDB: PDB1 CDB_instance: cdb2 Service: hr
SQL> connect system/sys@gract3:1521/hr
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
---> Connecting to the hr service on cdb3/pdb1 fails as the hr service is not running on gract3
Connect to PDB instance via tnsnames.ora
pdb1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = gract-scan.grid12c.example.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb1)
)
)
$ sqlplus pdb_admin/sys@pdb1
SQL> show con_name;
CON_NAME
------------------------------
PDB1
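The tnsnames.ora entry can be skipped entirely by spelling out the full EZConnect string ( //host:port/service_name, with the values used above ):

```shell
sqlplus pdb_admin/sys@//gract-scan.grid12c.example.com:1521/pdb1
```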
Connect via alter session set container
SQL> connect / as sysdba.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB_PLUG_NOCOPY2 MOUNTED
SQL> alter session set container=PDB_PLUG_NOCOPY2;
Session altered.
SQL> sho con_name;
CON_NAME
------------------------------
PDB_PLUG_NOCOPY2
What happens to PDB services when closing a PDB or when the CDB crashes ?
Working with services
Add a service to your PDB
[oracle@gract1 ~]$ srvctl add service -db cdb -service hr -pdb pdb1 -preferred cdb1,cdb2 -available cdb3
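The resulting service definition and its current placement can be reviewed with srvctl; a sketch ( output format varies by version ):

```shell
# review the service definition just added
srvctl config service -db cdb -service hr
# check which instances currently run the service
srvctl status service -db cdb -service hr
```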
Check service status
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc OFFLINE OFFLINE gract1 STABLE
ora.cdb.hr.svc OFFLINE OFFLINE gract1 STABLE
Start service
[grid@gract1 ~]$ srvctl start service -db cdb -service hr
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc ONLINE ONLINE gract1 STABLE
ora.cdb.hr.svc ONLINE ONLINE gract2 STABLE
Relocate Service
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc ONLINE ONLINE gract1 STABLE
ora.cdb.hr.svc ONLINE ONLINE gract2 STABLE
[grid@gract1 ~]$ srvctl relocate service -db cdb -service hr -oldinst cdb1 -newinst cdb3
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc ONLINE ONLINE gract3 STABLE
ora.cdb.hr.svc ONLINE ONLINE gract2 STABLE
Bring back this service again to original node
[grid@gract1 ~]$ srvctl relocate service -db cdb -service hr -oldinst cdb3 -newinst cdb1
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc ONLINE ONLINE gract1 STABLE
ora.cdb.hr.svc ONLINE ONLINE gract2 STABLE
---> PDB-based service relocation works fine - no difference from the way services work on non-CDBs
Stop Service
[grid@gract1 ~]$ srvctl stop service -db cdb -service hr
[grid@gract1 ~]$ crs | egrep 'hr.svc|NAME|----'
NAME TARGET STATE SERVER STATE_DETAILS
------------------------- ---------- ---------- ------------ ------------------
ora.cdb.hr.svc OFFLINE OFFLINE gract1 STABLE
ora.cdb.hr.svc OFFLINE OFFLINE gract1 STABLE
Service registration with local and scan listener
[grid@gract1 ~]$ lsnrctl status listener
Service "hr" has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
[grid@gract1 ~]$ lsnrctl status LISTENER_SCAN1
Service "hr" has 2 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Instance "cdb2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 3 instance(s).
Instance "cdb1", status READY, has 1 handler(s) for this service...
Instance "cdb2", status READY, has 1 handler(s) for this service...
Instance "cdb3", status READY, has 1 handler(s) for this service...
--> Local listener knows local services running on instance cdb1
SCAN listener knows all services running on instances cdb1,cdb2 and cdb3
Testing service failover after PDB shutdown ( shutdown immediate ) and CDB crash ( shutdown abort )
Status:
[grid@gract1 ~]$ srvctl start service -db cdb -s hr
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global from gv$active_services where name like 'pdb%' or name like '%hr%';
INST_ID NAME SESSION_STATE_CONSISTENCY GLO
---------- --------- ------------------------------ ---
3 pdb1 NO
2 hr DYNAMIC NO
2 pdb1 NO
1 hr DYNAMIC NO
1 pdb1 NO
Stop the PDB
SQL> alter pluggable database PDB1 close immediate instances=('cdb2');
Pluggable database altered.
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global from gv$active_services where name like 'pdb%' or name like '%hr%';
INST_ID NAME SESSION_STATE_CONSISTENCY GLO
---------------- -------------------------- ---
1 hr DYNAMIC NO
1 pdb1 NO
3 pdb1 NO
2 pdb1 NO
--> Only the hr service is removed from node cdb2 - the pdb1 service is still active on cdb2
Verifying the PDB service
SQL> connect system/sys@gract2:/pdb1
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Process ID: 0
Session ID: 0 Serial number: 0
Warning: You are no longer connected to ORACLE.
SQL> connect system/sys@gract2:/hr
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect
descriptor
--> Not sure whether we hit a bug here - the hr service shows the expected behaviour
Restart CDB and open PDB again
SQL> alter pluggable database PDB1 open instances=('cdb2');
INST_ID CON_ID DBID CON_UID NAME OPEN_MODE RES
---------- ---------- ---------- ---------- ------------ ---------- ---
2 3 3362522988 3362522988 PDB1 READ WRITE NO
1 3 3362522988 3362522988 PDB1 READ WRITE NO
3 3 3362522988 3362522988 PDB1 MOUNTED
ora.cdb.hr.svc ONLINE ONLINE gract1 STABLE
ora.cdb.hr.svc OFFLINE OFFLINE gract2 STABLE
--> The service remains OFFLINE even after the PDB was restarted on INST_ID 2
this behavior is documented - PDB services need to be restarted after the PDB is reopened
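A commonly used 12.1 workaround ( not part of this setup - just a sketch ) is an AFTER STARTUP trigger in CDB$ROOT that opens all PDBs whenever the CDB instance starts; any stopped PDB services still have to be started via srvctl afterwards:

```sql
-- run as SYS in CDB$ROOT: open every PDB on CDB instance startup
create or replace trigger open_all_pdbs
  after startup on database
begin
  execute immediate 'alter pluggable database all open';
end;
/
```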
Testing failover after a CDB crash ( shutdown abort )
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global from gv$active_services where name like 'pdb%' or name like '%hr%';
INST_ID NAME SESSION_STATE_CONSISTENCY GLO
---------- ------------ ------------------------------ ---
3 pdb1 NO
2 hr DYNAMIC NO
2 pdb1 NO
1 hr DYNAMIC NO
1 pdb1 NO
--> Run a shutdown abort for INST_ID 2 ( == cdb2 ) - services hr and pdb1 should be terminated asap
[oracle@gract2 ~]$ sqlplus / as sysdba
SQL> shutdown abort
SQL> select INST_ID, NAME ,SESSION_STATE_CONSISTENCY, global from gv$active_services where name like 'pdb%' or name like '%hr%';
INST_ID NAME SESSION_STATE_CONSISTENCY GLO
---------- ------------------ ------------------------------ ---
3 hr DYNAMIC NO
3 pdb1 NO
1 hr DYNAMIC NO
1 pdb1 NO
--> Successful failover of service hr from cdb2 to cdb3
Summary:
- PDB based services fail over if the hosting instance crashes or cdb instance has been stopped with shutdown abort
- Closing a PDB ( even with the immediate option ) doesn't fail over the PDB service. In that case you need to relocate the PDB services manually ( not sure whether this is a bug or not )
Unplug a PDB and keep its ASM datafiles
SQL> alter pluggable database pdb1 close immediate;
Pluggable database altered.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/PDB/pdb1_unplug.xml';
alter pluggable database pdb1 unplug into '/home/oracle/PDB/pdb1_unplug.xml'
*
ERROR at line 1:
ORA-65025: Pluggable database PDB1 is not closed on all instances.
--> Close the pluggable database on all instances
SQL> alter pluggable database pdb1 close instances=all;
Pluggable database altered.
SQL> alter pluggable database pdb1 unplug into '/home/oracle/PDB/pdb1_unplug.xml';
Pluggable database altered.
SQL> drop pluggable database pdb1 keep datafiles;
Pluggable database dropped.
Verify that the PDB datafiles are still there but the PDB itself is dropped from the data dictionary
$ asmcmd ls +DATA/GRACT1/E7EC3DF06C5D1840E0436F01A8C0692E/DATAFILE/
SYSAUX.285.827944651
SYSTEM.284.827944651
USERS.287.827944875
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
Plug in a PDB
SQL> create pluggable database pdb_plug_nocopy2 using '/home/oracle/PDB/pdb1_unplug.xml'
COPY TEMPFILE REUSE;
Pluggable database created.
SQL> show pdbs;
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB_PLUG_NOCOPY2 MOUNTED
SQL> alter pluggable database PDB_PLUG_NOCOPY2 open;
Pluggable database altered.
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ---------------------- ---------- ----------
2 PDB$SEED READ ONLY NO
3 PDB_PLUG_NOCOPY2 READ WRITE NO
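After plugging in, the datafiles of the new container can be verified from the root, using the same CDB_DATA_FILES query as in the storage section:

```sql
-- con_id 3 is the newly plugged-in PDB in this setup
select file_name, con_id
from   cdb_data_files
where  con_id = 3;
```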
Using PDBs with Services and Serverpools
Current status: 1 serverpool, 3 nodes
Server pool name: 12c_pool
Active servers count: 3
Active server names: gract1,gract2,gract3
NAME=gract1 STATE=ONLINE
NAME=gract2 STATE=ONLINE
NAME=gract3 STATE=ONLINE
Create a service for our PDB and verify the service locations
$ srvctl add service -db gract1 -service pdb_s1 -pdb pdb_plug_nocopy2 -d 12c_pool
SQL> select INST_ID, NAME ,pdb from gv$services where name like 'pdb%';
INST_ID NAME PDB
---------- ------------------------------ ------------------------------
1 pdb_plug_nocopy2 PDB_PLUG_NOCOPY2
2 pdb_plug_nocopy2 PDB_PLUG_NOCOPY2
3 pdb_plug_nocopy2 PDB_PLUG_NOCOPY2
--> PDB PDB_PLUG_NOCOPY2 is currently serviced by all of our CDB instances ( gract1_1, gract1_2, gract1_3 )
Create a new server pool and verify serverpool status
$ srvctl add serverpool -serverpool NEW_12c_pool -min 1 -max 2 -i 5 -f
$ srvctl status serverpool -detail
..
Server pool name: 12c_pool
Active servers count: 2
Active server names: gract2,gract3
NAME=gract2 STATE=ONLINE
NAME=gract3 STATE=ONLINE
Server pool name: NEW_12c_pool
Active servers count: 1
Active server names: gract1
NAME=gract1 STATE=ONLINE
--> gract1 is moved from 12c_pool to the newly created serverpool NEW_12c_pool
SQL> select INST_ID, NAME ,pdb from gv$services where name like 'pdb%';
INST_ID NAME PDB
---------- ------------------------------ ------------------------------
2 pdb_plug_nocopy2 PDB_PLUG_NOCOPY2
3 pdb_plug_nocopy2 PDB_PLUG_NOCOPY2
--> PDB PDB_PLUG_NOCOPY2 is now only serviced by gract2 and gract3, whereas gract1 can do work for other RAC databases/CDBs
Error ORA-65139 after create pluggable database
Steps to reproduce:
- close the PDBs on all instances
- alter pluggable database pdb1 unplug into ....
- drop pluggable database pdb1 keep datafiles;
- create pluggable database with NOCOPY
- open that PDB ( here the datafile header will be changed )
- close and drop the newly plugged in PDB again ( using keep datafiles )
- try to create the pluggable database with NOCOPY
--> ORA-65139 during create pluggable database
SQL> create pluggable database pdb_plug_nocopy2 using '/home/oracle/PDB/pdb1_unplug.xml'
NOCOPY TEMPFILE REUSE;
create pluggable database pdb_plug_nocopy2 using '/home/oracle/PDB/pdb1_unplug.xml'
ERROR at line 1:
ORA-65139: Mismatch between XML metadata file and data file
+DATA/GRACT1/E7EC3DF06C5D1840E0436F01A8C0692E/DATAFILE/system.284.827944651 for
value of afn (16 in the plug XML file, 19 in the data file)
---> Not sure whether this is supported, but in case of ORA-65139 the PDB can be plugged in again by editing pdb1_unplug.xml.
---> Sample: change afn to 19 in /home/oracle/PDB/pdb1_unplug.xml.
Do the same for other XML properties like fcpsb and createscnbas
Create Common and Local users
Create a COMMON user
SQL> connect system/sys@gract-scan.grid12c.example.com:/cdb
SQL> create user c##cdb_admin identified by x container=all;
SQL> GRANT CREATE SESSION TO c##cdb_admin container=all;
Connect to CDB
SQL> connect c##cdb_admin/x@gract-scan.grid12c.example.com:/cdb
Connected.
Connect to PDB
SQL> connect c##cdb_admin/x@gract-scan.grid12c.example.com:/pdb1
Connected.
Create a LOCAL user
SQL> connect system/sys@gract-scan.grid12c.example.com:/cdb
SQL> create user local_admin identified by x container=current;
create user local_admin identified by x container=current
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT
--> Need to connect to PDB
SQL> connect system/sys@gract-scan.grid12c.example.com:/pdb1
SQL> create user local_admin identified by x container=current;
User created.
SQL> GRANT CREATE SESSION TO local_admin container=current;
Grant succeeded.
SQL> connect local_admin/x@gract-scan.grid12c.example.com:/pdb1
Connected.
SQL> connect local_admin/x@gract-scan.grid12c.example.com:/cdb
ERROR:
ORA-01017: invalid username/password; logon denied
--> If a user is not defined in the CDB/PDB you connect to, you will get ORA-01017
Verify user by querying CDB_USERS view
SQL> select USERNAME,COMMON,CON_ID from cdb_users where USERNAME like '%CDB_ADMIN%' or USERNAME like '%LOCAL_ADMIN%';
USERNAME COM CON_ID
---------------- --- ----------
C##CDB_ADMIN YES 1
C##CDB_ADMIN YES 3
LOCAL_ADMIN NO 3
--> User LOCAL_ADMIN is only defined in container 3, which is our PDB
User C##CDB_ADMIN can connect to both the CDB and PDB1
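Privileges for common users are usually bundled in common roles, which follow the same C## naming convention. A sketch ( the role name c##app_role is hypothetical ):

```sql
-- create a common role visible in all containers and grant it to the common user
create role c##app_role container=all;
grant create session to c##app_role container=all;
grant c##app_role to c##cdb_admin container=all;
```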
PDB FAQ
How do I know if my database is Multitenant or not ?
SQL> select NAME, DECODE(CDB, 'YES', 'Multitenant Option enabled', 'Regular 12c Database: ') "Multitenant Option ?" ,
OPEN_MODE, CON_ID from V$DATABASE;
NAME Multitenant Option ? OPEN_MODE CON_ID
--------- -------------------------- -------------------- ----------
CDB Multitenant Option enabled READ WRITE 0
How do I find the CDB and all PDBs ?
SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
NAME CON_ID DBID CON_UID GUID
------------------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT 1 1971413870 1 DD7C48AA5A4504A2E04325AAE80A403C
PDB$SEED 2 4097414398 4097414398 FFE2B5D067357B55E0436F01A8C0F0F0
PDB1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE
What Pluggable databases do we have in this container database ?
SQL> select CON_ID, NAME, OPEN_MODE from V$PDBS;
CON_ID NAME OPEN_MODE
---------- ------------------------------ ----------
3 PDB1 READ WRITE
How do I connect to a Pluggable Database, say PDB1, and back to the root container ?
SQL> alter session set container = pdb1;
SQL> show con_name
CON_NAME
------------------------------
PDB1
SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
Can I monitor SGA usage on a PDB-by-PDB basis ?
There is a single SGA shared by all pluggable databases. However, you can determine SGA consumption per container, i.e. root and PDBs.
SQL> alter session set container=CDB$ROOT;
SQL> SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;
NAME CON_ID DBID CON_UID GUID
------------------------------ ---------- ---------- ---------- --------------------------------
CDB$ROOT 1 1971413870 1 DD7C48AA5A4504A2E04325AAE80A403C
PDB$SEED 2 4097414398 4097414398 FFE2B5D067357B55E0436F01A8C0F0F0
PDB1 3 3362522988 3362522988 FFE30B05B94B1D25E0436F01A8C05EFE
SQL> select POOL, NAME, BYTES from V$SGASTAT where CON_ID = 3;
POOL NAME BYTES
------------ -------------------------- ----------
shared pool ktli log buffer pools 616
shared pool KQR X PO 807856
...
large pool SWRF Metric Eidbuf 368640
large pool SWRF Metric CHBs 1802240
SQL> select CON_ID, POOL, sum(bytes) from v$sgastat group by CON_ID, POOL order by CON_ID, POOL;
CON_ID POOL SUM(BYTES)
---------- ------------ ----------
0 java pool 4194304
0 large pool 2179072
0 shared pool 23187808
0 122522608
1 large pool 1867776
1 shared pool 241739224
2 large pool 2170880
2 shared pool 5147720
3 large pool 2170880
3 shared pool 27720832
Can I monitor PGA usage on a PDB-by-PDB basis ?
SQL> select CON_ID, sum(PGA_USED_MEM), sum(PGA_ALLOC_MEM), sum(PGA_MAX_MEM) from v$process group by CON_ID order by CON_ID;
CON_ID SUM(PGA_USED_MEM) SUM(PGA_ALLOC_MEM) SUM(PGA_MAX_MEM)
---------- ----------------- ------------------ ----------------
0 231006520 297505736 399513864
1 9719884 16203316 20004404
How can I identify shared objects in the data dictionary ?
OBJECT LINK and METADATA LINK objects are shared objects
SQL> select count(sharing),sharing from cdb_objects group by sharing;
COUNT(SHARING) SHARING
-------------- -------------
19152 NONE
441 OBJECT LINK
252646 METADATA LINK
References