Oracle 11g Architecture - Explained in Detail!
COMPONENTS OF ORACLE DATABASE ARCHITECTURE:
USER PROCESS : It is the process which places requests from the client side and is created when a user starts any application.
SERVER PROCESS : It is the process which will do work on behalf of user process on server side.
PRIVATE GLOBAL AREA (PGA):
1. It is the memory area allocated to a server process to perform execution of SQL statements and to store session information.
2. The size of memory allocated is defined using PGA_AGGREGATE_TARGET.
3. Before 9i, PGA was configured using
a. WORK_AREA_SIZE
b. BITMAP_WORK_AREA
c. SORT_AREA_SIZE
d. HASH_AREA_SIZE etc.
4. Sorting takes place in PGA if the data is small in size. This is called an in-memory sort.
5. If the data size is larger than the sort area size of PGA, Oracle will use both PGA and TEMP tablespace, which needs a number of I/Os, and database performance will automatically degrade.
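The PGA settings described above can be inspected from SQL*Plus; a minimal sketch using the standard v$pgastat view (the statistic names shown are the documented ones):

```sql
-- Current PGA target
SHOW PARAMETER pga_aggregate_target;

-- How much PGA is allocated and how well in-memory work areas are doing
SELECT name, value
FROM v$pgastat
WHERE name IN ('total PGA allocated', 'cache hit percentage');
```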
ORACLE INSTANCE: It is a way through which users will access / modify data in the database. It is a combination of memory structures and background processes.
SHARED GLOBAL AREA (SGA) : It is the memory area which contains several memory caches helpful in reading and writing data.
SHARED POOL:
1. Shared pool contains the following components
a. Library cache – it contains shared SQL & PL/SQL statements.
b. Data dictionary cache – it contains dictionary information in the form of rows, hence it is also called the row cache.
2. Size of shared pool is defined using SHARED_POOL_SIZE.
DATABASE BUFFER CACHE:
1. It is the memory area where a copy of the data is placed in the LRU list.
2. The status of a block in DBC will be any of the following
a. UNUSED – block which has never been used
b. FREE – block which was used earlier but is currently free
c. PINNED – block currently in use
d. DIRTY – block which has been modified
3. DBC contains an LRU list and a WRITE list, which help in separating modified blocks from other blocks.
4. Size of DBC is defined using DB_CACHE_SIZE or DB_BLOCK_BUFFERS.
LOG BUFFER CACHE : It is the memory area where a copy of redo entries are maintained and size is defined by LOG_BUFFER.
Note: LBC should be allotted the smallest size of any memory component in SGA.
LARGE POOL:
1. Large pool is used efficiently at the time of RMAN backup.
2. Large pool can dedicate some of its memory to shared pool and take it back whenever shared pool is observing less free space.
3. Size is defined using LARGE_POOL_SIZE.
JAVA POOL : It is memory area used to run java executables (like JDBC driver) and size is defined using JAVA_POOL_SIZE.
STREAM POOL:
1. It is the memory area used when replicating a database using Oracle Streams.
2. This parameter was introduced in 10g and can be defined using STREAM_POOL_SIZE.
3. If stream pool is not defined and streams are used, then 10% of memory from shared pool will be used. This may affect database performance.
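The sizes of the SGA caches described above can be checked at runtime; a minimal sketch using the v$sgainfo view (available from 10g):

```sql
-- List SGA components with their current sizes in MB
SELECT name, bytes/1024/1024 AS size_mb, resizeable
FROM v$sgainfo
ORDER BY bytes DESC;
```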
SMONn : It is the background process responsible for the following actions
1. Instance recovery – this is done in the following phases
a. Roll forward – compares the SCN between redo log files and datafile headers and makes sure committed data is written to the data files.
b. Opens the database for user access.
c. Rolls back uncommitted transactions with the help of undo tablespace.
2. It will coalesce the tablespaces which are defined with automatic segment space management.
3. It will release the temporary segments occupied by transactions when they are completed.
The below diagrams will explain SMONn instance recovery in detail:
1. We know that LGWR will write redo entries into redo log files. But if more and more redo entries are generated (for huge transactions), the redo log file size increases and even terabytes of storage is not sufficient.
2. To overcome this, Oracle designed its architecture so that LGWR will write into 2 or more redo log files in a cyclic order (shown in the below diagram).
LGWR moving from one redo log file to another is called a LOG SWITCH. At the time of log switch, the following actions will take place
· Checkpoint event will occur – this tells that committed data should be made permanent in the data files. (Eg: it is just like the automatic saving of an email when composing in Gmail)
· CKPT process will update the latest SCN to the datafile headers and control files by taking the info from redo log files.
· DBWRn will write the corresponding dirty blocks from the write list to the data files.
· ARCHn process will generate archives (copies of online redo log files) only if the database is in archivelog mode.
Note: Checkpoint event does not occur only at log switch. It can also occur at a repeated interval, decided by the parameter LOG_CHECKPOINT_INTERVAL (till 8i) or FAST_START_MTTR_TARGET (from 9i).
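A log switch and a checkpoint can also be triggered manually; a hedged sketch using standard ALTER SYSTEM commands:

```sql
-- Force LGWR to move to the next redo log group (triggers the actions above)
ALTER SYSTEM SWITCH LOGFILE;

-- Force a checkpoint without waiting for a log switch
ALTER SYSTEM CHECKPOINT;

-- Check the current MTTR target (9i+), in seconds
SHOW PARAMETER fast_start_mttr_target;
```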
Background Processes:
PMON : It is responsible for the following actions
1. releases the locks and resources held by abruptly terminated sessions
Note: whenever any user performs DML transactions on a table, Oracle will apply a lock. This is to maintain read consistency.
2. authenticates the user
3. registers the listener information with the instance
4. restarts dead dispatchers at the time of instance recovery in case of shared server architecture
DBWRn : It is responsible for writing dirty buffers from the write list to datafiles and it will do this in the following situations
1. after LGWR writes
2. when the write list reaches its threshold value
3. at every checkpoint
4. when a tablespace is taken offline or placed in read-only mode
5. when the database is shut down cleanly
LGWR : It is responsible for writing redo entries from log buffer cache to redolog files and it will perform this in the following situations
1. before DBWRn writes
2. whenever a commit occurs
3. when log buffer cache is 1/3rd full
4. when 1 MB of redo is generated
5. every 3 sec
CKPT : It will update the latest SCN to the control files and datafile headers by taking information from the redolog files. This will happen at every log switch.
ARCHn : It generates offline redolog files (archives) in the specified location. This will be done only if the database is in archivelog mode.
DATAFILES : actual files where user data will be stored
REDOLOG FILES : files contains redo entries which are helpful in database recovery. To avoid space constraints oracle will create two or more redolog files and LGWR will write into them in a cyclic order
CONTROLFILES : These files store crucial information about the database, such as
1. database name and creation timestamp
2. latest SCN
3. location and sizes of redolog files and datafiles
4. parameters that define the size of the controlfile
ARCHIVED REDOLOG FILES : These files will be created by the ARCHn process if archivelog mode is enabled. The size of the archives will be equal to or less than that of the redolog files.
ASMM (AUTOMATIC SHARED MEMORY MANAGEMENT):
1. In 9i, SGA was made dynamic i.e. the sizes of SGA components can be changed without shutting down the database (not possible in 8i).
2. Many times DBAs faced problems in calculating correct memory sizes, which led to performance problems at instance level, and DBAs were more involved in handling instance tuning issues. To avoid this, Oracle 10g introduced ASMM.
3. The following memory components are automatically sized when using ASMM
a. shared pool
b. database buffer cache
c. large pool
d. java pool
e. stream pool
Note: LOG_BUFFER will not be automatically sized in any version. It's a static parameter.
4. Using ASMM, we can define the total memory for SGA and Oracle will decide how much to distribute to each cache. This is possible by setting the SGA_TARGET parameter (new in 10g).
Note: to enable ASMM, we should define STATISTICS_LEVEL = TYPICAL (default) or ALL.
5. Maximum size for SGA is defined by SGA_MAX_SIZE. Depending on the transaction load, SGA size will vary between SGA_TARGET and SGA_MAX_SIZE.
6. It has been observed that individual parameters are also defined in some 10g databases, which means those values will act as minimum values, SGA_TARGET as the medium, and SGA_MAX_SIZE as the maximum.
7. Oracle 10g introduced a new background process MMAN in order to manage the memory for SGA components.
Note: Memory allocation to both SGA and PGA has been automated in 11g using the MEMORY_TARGET and MEMORY_MAX_TARGET parameters.
Note: SGA size is not at all dependent on database size and will be calculated based on the transactions hitting the database.
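The ASMM / AMM parameters above can be set as follows; a minimal sketch with illustrative sizes, not recommendations:

```sql
-- 10g ASMM: give Oracle a total SGA budget to distribute among the caches
ALTER SYSTEM SET sga_target = 800M SCOPE=BOTH;

-- 11g AMM: manage SGA and PGA together (memory_max_target needs a restart)
ALTER SYSTEM SET memory_max_target = 2G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target = 1536M SCOPE=SPFILE;
```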
LOGICAL STRUCTURES OF DATABASE:
1. The following are the logical structures of the database and are helpful in easy manageability of the database
a. TABLESPACE – an alias name for a group of datafiles (or) a group of segments (or) a space where tables reside
b. SEGMENT – group of extents (or) an object that occupies space
c. EXTENT – group of Oracle data blocks (or) memory unit allocated to an object
d. ORACLE DATA BLOCK – basic unit of data storage (or) group of operating system blocks
2. The following tablespaces are mandatory in a 10g database
a. SYSTEM – stores the base tables of the data dictionary
b. SYSAUX – auxiliary tablespace to SYSTEM which also stores base tables required for reporting purposes
c. TEMP – used for performing sort operations
d. UNDO – used to store before images helpful for rollback of transactions or instance recovery
Note: Oracle 9i should have all the above tablespaces except SYSAUX. SYSAUX was introduced in 10g to avoid burden on the SYSTEM tablespace.
DML STATEMENT PROCESSING:
1. Server process performs parsing in library cache by taking the statement information. Optimizer will generate the best execution plan based on time & cost.
2. By following the execution plan, the statement gets executed in PGA.
3. After execution, server process will search for the data in the LRU list. If it exists, it will copy an undo block to the LRU list. If the data is not found, then it will copy both the data block and an undo block to the LRU list.
4. From there those blocks will be copied to PGA, where modifications are done, by which redo entries are generated in PGA and copied to log buffer cache by the server process.
Note: A single atomic change happening to the database is called a redo entry or redo record or change vector. E.g.: if 100 rows are modified, then we will have 200 redo entries (one for the data block and one for the undo block of each row).
5. Modification is done by copying the previous image from the data block to the undo block, and the new value is inserted into the data block, thus making both blocks DIRTY.
6. The dirty blocks are moved to the write list, from where DBWRn writes them to the corresponding datafiles. But before DBWRn writes, LGWR writes the content of log buffer cache to the redolog files.
Note: LGWR writing before DBWRn is called the WRITE-AHEAD protocol.
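The DML flow above can be observed with a simple statement; a hedged sketch (table and values illustrative):

```sql
-- Each updated row changes a data block and an undo block,
-- generating redo entries for both
UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;

-- COMMIT forces LGWR to flush the log buffer to the redolog files
COMMIT;
```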
DDL STATEMENT PROCESSING:
1. DDL statement processing is the same as DML processing, as internally all DDLs are DML statements on the base tables.
2. For every DDL statement, base tables get modified with update/delete/insert statements. For this reason, undo is generated in the case of DDL also.
PHASES OF SQL EXECUTION:
1. Any SQL statement will undergo the following phases to get executed
a. PARSING : This phase performs the following actions
i. Syntax checking of the SQL statement
ii. Semantic checking of the statement i.e. checking for the privileges using base tables
iii. Dividing the statement into literals
b. EXECUTION : This phase performs the following actions
i. Converting the statement into ASCII format
ii. Compiling the statement
iii. Running or executing the statement
c. FETCH : Data will be retrieved in this phase
Note: For a PL/SQL program, BINDING will happen after the PARSING phase (so it will have 4 phases to go through)
SELECT STATEMENT PROCESSING:
1. Server process will receive the statement sent by the user process on the server side and hand it over to the library cache of shared pool
2. The 1st phase of SQL execution i.e. PARSING will be done in library cache
3. Then, OPTIMIZER (brain of the Oracle SQL engine) will generate many execution plans, but chooses the best one based on time & cost (time – response time, cost – CPU resource utilization)
4. Server process will send the parsed statement with its execution plan to PGA and the 2nd phase i.e. EXECUTION will be done there
5. After execution, server process will start searching for the data from the LRU end of the LRU list and this search will continue till it finds data or reaches the MRU end. If it finds data, it will be given to the user. If it doesn't find any data, it means the data is not in database buffer cache
6. In such cases, server process will copy data from datafiles to the MRU end of the LRU list of database buffer cache
7. From the MRU end, again blocks will be copied to PGA for filtering the required rows and then they will be given to the user (displayed on the user's console)
Note: server process will not start searching from the MRU end because there may be a chance of missing the data by the time it reaches the LRU end in searching.
Note: for statements issued for the second time, parsing and fetch phases are skipped, subject to the availability of data and the parsed statement in the instance.
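The plan chosen by the optimizer in step 3 can be displayed; a minimal sketch assuming the standard DBMS_XPLAN package (table name illustrative):

```sql
-- Ask the optimizer for its plan without executing the query
EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;

-- Display the plan from the plan table
SELECT * FROM TABLE(dbms_xplan.display);
```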
UNDO TABLESPACE:
1. Only one undo tablespace will be in action at a given time
2. Undo tablespace features are enabled by setting the following parameters
a. UNDO_MANAGEMENT
b. UNDO_TABLESPACE
c. UNDO_RETENTION
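The undo parameters above can be checked as follows; a minimal sketch, with v$undostat showing recent undo consumption in 10-minute intervals:

```sql
-- Current undo configuration
SHOW PARAMETER undo;

-- Recent undo usage: blocks consumed and longest query per interval
SELECT begin_time, undoblks, maxquerylen
FROM v$undostat
WHERE ROWNUM <= 5;
```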
Imp points to remember:
1. The undo blocks occupied by a transaction will become expired once the transaction commits.
2. The data will be selected from undo tablespace if any DML operation is being performed on the table on which the select query is also fired. This is to maintain read consistency.
Tx1 → updating table A and committed
Tx2 → updating table B
Tx3 → selecting data from table A
In the above situation, Tx1 issued an update statement on table A and committed. Because of this, dirty blocks are generated in DBC and undo blocks are used from undo tablespace. Also, the dirty blocks of A are not yet written to datafiles.
Tx2 is updating table B and, because of the non-availability of undo blocks, Tx2 overwrote the expired undo blocks of Tx1.
Tx3 is selecting the data from A. This operation will first look for data in undo tablespace, but as the blocks of A are already occupied by B (Tx2), it will not retrieve any data. Then it will check for the latest data in datafiles, but as the dirty blocks are not yet written to datafiles, the transaction will be unable to get data. In this situation it will throw the ORA-1555 (snapshot too old) error.
Solutions to avoid ORA-1555:
1. Re-issuing the SELECT statement will be a solution when we are getting ORA-1555 very rarely
2. It may occur due to an undersized undo tablespace. So increasing the undo tablespace size is one solution
3. Increasing the UNDO_RETENTION value is also a solution
4. Avoiding frequent commits
5. Using the "retention guarantee" clause on the undo tablespace. This is available only from 10g
Note: Don't ever allow the UNDO & TEMP tablespaces to be in AUTOEXTEND ON.
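Solutions 2, 3 and 5 above translate to commands like the following; a hedged sketch (tablespace name and retention value illustrative):

```sql
-- Raise the undo retention window to 30 minutes
ALTER SYSTEM SET undo_retention = 1800 SCOPE=BOTH;

-- Guarantee that unexpired undo is never overwritten (10g+)
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
```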
COMMANDS:
# To create UNDO tablespace
SQL> create undo tablespace undotbs2
datafile '/u02/prod/undotbs2_01.dbf' size 30m;
# To change undo tablespace
SQL> alter system set undo_tablespace='UNDOTBS2' scope=memory/spfile/both;
# To create temporary tablespace
SQL> create temporary tablespace mytemp
tempfile '/u02/prod/mytemp01.dbf' size 30m;
# To add a tempfile
SQL> alter tablespace mytemp add tempfile '/u02/prod/mytemp02.dbf' size 30m;
# To resize a tempfile
SQL> alter database tempfile '/u02/prod/mytemp01.dbf' resize 50m;
# To create temporary tablespace group
SQL> create temporary tablespace mytemp
tempfile '/u02/prod/mytemp01.dbf' size 30m
tablespace group grp1;
# To view tablespace group information
SQL> select * from dba_tablespace_groups;
# To view temp tablespace information
SQL> select file_name,sum(bytes) from dba_temp_files where tablespace_name='MYTEMP' group by file_name;
# To move temp tablespace between groups
SQL> alter tablespace mytemp tablespace group grp2;
MULTIPLEXING REDO LOG FILES:
1. Redo log files are mainly used for recovering a database and also to ensure data commit.
2. If a redo log file is lost, it will lead to data loss. To avoid this, we can maintain multiplexed copies of redo log files in different locations. These copies are together called a redo log group and the individual files are called redo log members.
3. Oracle recommends maintaining a minimum of 2 redo log groups with a minimum of 2 members in each group.
4. LGWR will write into members of the same group in parallel only if ASYNC I/O is enabled at OS level.
5. Redo log files will have 3 states – CURRENT, ACTIVE and INACTIVE. These states always change in cyclic order.
6. We cannot have different sizes for members in the same group, whereas we can have different sizes for different groups, but it is not recommended to implement this.
7. Default size of a redo log member is 100mb in 9i and 50mb in 10g.
8. In 8i, the LOG_CHECKPOINT_INTERVAL parameter setting specifies the time at which a checkpoint should occur, whereas from 9i the same can be achieved using FAST_START_MTTR_TARGET.
# To check redolog file members
SQL> select member from v$logfile;
# To check redolog group info, status and size
SQL> select group#,members,status,sum(bytes/1024/1024) from v$log group by group#,members,status;
# To add a redolog file group
SQL> alter database add logfile group 4 ('/u02/prod/redo04a.log','/u02/prod/redo04b.log') size 50m;
# To add a redolog member
SQL> alter database add logfile member '/u02/prod/redo01b.log' to group 1;
# To drop a redolog group
SQL> alter database drop logfile group 4;
# To drop a redolog member
SQL> alter database drop logfile member '/u02/prod/redo01b.log';
Note: Even after we drop a logfile group or member, the file will still exist at OS level.
Note: We cannot drop a member or a group which is in CURRENT status.
# Reusing a member
SQL> alter database add logfile member '/u02/prod/redo04a.log' reuse to group 4;
# Steps to rename (or) relocate a redolog member
SQL> shutdown immediate;
SQL> ! cp /u02/prod/redo01.log /u02/prod/redo01a.log (If relocating, use the source and destination paths)
SQL> startup mount
SQL> alter database rename file '/u02/prod/redo01.log' to '/u02/prod/redo01a.log';
The above command will make the server process update the controlfile with the new file name
SQL> alter database open;
Note: We cannot resize a redolog member; instead we need to create a new group with the required size and drop the old group.
# Handling a corrupted redolog file
SQL> alter database clear logfile '/u02/prod/redo01a.log';
Or
SQL> alter database clear unarchived logfile '/u02/prod/redo01a.log';
MULTIPLEXING OF CONTROL FILES:
1. Control file contains crucial database information and loss of this file will lead to loss of important data about the database. So it is recommended to have multiplexed copies of the file in different locations.
2. If the control file is lost in 9i, the database may go for a force shutdown, whereas the database will continue to run if it is the 10g version.
COMMANDS:
# Steps to multiplex control file using spfile
SQL> show parameter spfile
SQL> show parameter control_files
SQL> alter system set control_files='/u02/prod/control01.ctl','/u02/prod/control02.ctl','/u02/prod/control03.ctl','/u02/prod/control04.ctl' scope=spfile;
Here we have added a 4th control file. This addition can also be done in a different location when implementing OFA
SQL> shutdown immediate
SQL> ! cp /u02/prod/control01.ctl /u02/prod/control04.ctl
SQL> startup
# Steps to multiplex controlfile using pfile
SQL> shutdown immediate
[oracle@pc1 ~] $ cd $ORACLE_HOME/dbs
[oracle@pc1 ~] $ vi initprod.ora
Edit the control_files parameter, add the new path to it and save the file
[oracle@pc1 ~] cp /u02/prod/control01.ctl /u02/prod/control04.ctl
[oracle@pc1 ~] sqlplus “/ as sysdba”
SQL> startup
Note: we can create a maximum of 8 copies of controlfiles.
SHARED SERVER ARCHITECTURE:
1. Multiple user requests will be received by the dispatcher and placed in the request queue.
2. Shared server processes will take information from the request queue and it will be processed inside the database.
3. The results will be placed in the response queue, from where the dispatcher will send them to the corresponding users.
4. Instead of PGA, statements will get executed in the UGA (user global area) in shared server architecture.
5. Shared server architecture can be enabled by specifying the following parameters.
a. DISPATCHERS
b. MAX_DISPATCHERS
c. SHARED_SERVERS
d. MAX_SHARED_SERVERS
e. CIRCUITS and MAX_CIRCUITS (optional)
6. This architecture should be enabled only if ORA-04030 or ORA-04031 errors are observed frequently in the alert log file.
7. To make shared server architecture effective, SERVER=SHARED should be mentioned in the client TNSNAMES.ORA file.
8. A single dispatcher can handle 20 user requests whereas a single shared server process can handle 16 requests concurrently.
Note: SMONn can have 16 slave processes and DBWRn can have 20 slave processes working concurrently
Note: startup and shutdown are not possible if sysdba connects through a shared server connection
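A minimal sketch of enabling and monitoring shared server, with illustrative values:

```sql
-- Start 5 shared server processes and 2 TCP dispatchers
ALTER SYSTEM SET shared_servers = 5 SCOPE=BOTH;
ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)' SCOPE=BOTH;

-- Monitor dispatchers and shared servers
SELECT name, status FROM v$dispatcher;
SELECT name, status FROM v$shared_server;
```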
SELECT STATEMENT PROCESSING (in 11g):
1. Server process will receive the statement sent by the user process on the server side and hand it over to the library cache of shared pool
2. The 1st phase of SQL execution i.e. PARSING will be done in library cache.
3. Then, OPTIMIZER (brain of the Oracle SQL engine) will generate many execution plans, but chooses the best one based on time & cost (time – response time, cost – CPU resource utilization)
4. Server process will send the parsed statement with its execution plan to PGA and the 2nd phase i.e. EXECUTION will be done there.
5. After execution, server process will start searching for the data from the LRU end of the LRU list and this search will continue till it finds data or reaches the MRU end. If it finds data, it will be given to the user. If it doesn't find any data, it means the data is not in database buffer cache.
6. In such cases, server process will copy data from datafiles to the MRU end of the LRU list of database buffer cache.
7. From the MRU end, the rows pertaining to the requested table will be filtered and placed in the SERVER RESULT CACHE along with the execution plan id and then they will be given to the user (displayed on the user's console)
Note: for statements issued for the second time, server process will get the parsed tree and plan id from library cache, go straight to the server result cache and compare the plan id. If the plan id matches, the corresponding rows will be given to the user. So, in this case, it skips all 3 phases of SQL execution, by which response time is much faster than in a 10g database.
SERVER RESULT CACHE:
1. It is a new component introduced in 11g.
2. Usage of result cache is dependent on the parameters RESULT_CACHE_MODE and RESULT_CACHE_MAX_SIZE
3. The possible values for RESULT_CACHE_MODE are MANUAL or FORCE. When set to MANUAL, the SQL query should have the hint /*+ RESULT_CACHE */. When using FORCE, all queries will use result cache.
4. Even after setting it to FORCE, we can still prevent any query from using result cache with the hint /*+ NO_RESULT_CACHE */
5. Oracle recommends enabling result cache only if the database is being hit with a lot of statements which are frequently repeated. So it should be enabled in an OLTP environment.
6. If we specify the MEMORY_TARGET parameter, Oracle will allocate 0.25% of shared pool size as result cache. If we specify SGA_TARGET (which is of 10g), result cache will be 0.5% of shared pool. If we use individual parameters (like in 9i), result cache will be 1% of shared pool size.
7. When any DML/DDL statements modify table data or structure, data in result cache will become invalid and needs to be processed again.
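The hints and parameters above can be exercised as follows; a hedged sketch (table name illustrative):

```sql
-- Cache this query's result set explicitly (with RESULT_CACHE_MODE = MANUAL)
SELECT /*+ RESULT_CACHE */ deptno, COUNT(*) FROM emp GROUP BY deptno;

-- Check whether the result cache is enabled
SELECT dbms_result_cache.status FROM dual;
```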
USER MANAGEMENT:
· User creation should be done after clearly understanding the requirement from the application team.
· Whenever we create a user, we should assign a default permanent tablespace (which allows creating tables) and a default temporary tablespace (which allows sorting). We can change them at any moment of time.
· After creating a user, don't grant the connect and resource roles (in 9i). In 10g, we can grant the connect role as it contains only the create session privilege.
· The resource role internally contains the unlimited tablespace privilege and, because of this, it will override the quota that is granted initially. So it should not be granted in real time unless it is required.
· Privileges for a user are of 2 types:
a. System level privs – eg: create table, create view etc.
b. Object level privs – eg: select on A, update on B etc.
· A role is a set of privileges which will reduce the risk of issuing many commands.
· To find out the roles and privileges assigned to a user, use the following views
a. DBA_SYS_PRIVS
b. DBA_TAB_PRIVS
c. DBA_ROLE_PRIVS
d. ROLE_SYS_PRIVS
e. ROLE_TAB_PRIVS
f. ROLE_ROLE_PRIVS
· System level privileges can be granted with admin option so that the grantee can grant the same privilege to other users. If we revoke the privilege from the first grantee, the others will still have that privilege.
· Object level privileges can be granted with grant option so that the grantee can grant the same privilege to other users. If we revoke the privilege from the first grantee, it will be revoked from all the users.
COMMANDS:
# To create a user
SQL> create user user1 identified by user1
default tablespace mytbs
temporary tablespace temp;
# To grant permissions to user
SQL> grant create session, create table to user1;
# To grant permissions to user with admin option
SQL> grant create table to scott with admin option;
# To grant permissions to user with grant option
SQL> grant update on scott.salary to user1 with grant option;
# To revoke any permissions from user
SQL> revoke create table from scott;
# To change password of user
SQL> alter user user1 identified by oracle;
# To allocate quota on tablespace
SQL> alter user user1 quota 10m on mydata;
Note: Allocating quota doesn't mean reserving the space. If 2 or more users are sharing a tablespace, quota will be filled up on a first come first serve basis.
# To change default tablespace or temporary tablespace
SQL> alter user user1 default tablespace test;
SQL> alter user user1 temporary tablespace mytemp;
Note: The objects created in the old tablespace remain unchanged even after changing a default tablespace for a user.
# To check default permanent & temporary tablespace for a user
SQL> select default_tablespace,temporary_tablespace from dba_users where username='SCOTT';
# To lock or unlock a user
SQL> alter user scott account lock;
SQL> alter user scott account unlock;
# To check database default permanent tablespace and temporary tablespace
SQL> select property_name,property_value from database_properties where property_name like 'DEFAULT%';
# To change default permanent tablespace
SQL> alter database default tablespace mydata;
# To change default temporary tablespace
SQL> alter database default temporary tablespace mytemp;
# To check system privileges for a user
SQL> select privilege from dba_sys_privs where grantee='SCOTT';
# To check object level privileges
SQL> select owner,table_name,privilege from dba_tab_privs where grantee='SCOTT';
# To check roles assigned to a user
SQL> select granted_role from dba_role_privs where grantee='SCOTT';
# To check permissions assigned to a role
SQL> select privilege from role_sys_privs where role='MYROLE';
SQL> select owner,table_name,privilege from role_tab_privs where role='MYROLE';
SQL> select granted_role from role_role_privs where role='MYROLE';
# To drop a user
SQL> drop user user1;
Or
SQL> drop user user1 cascade;
PROFILE MANAGEMENT:
1. Profile management is divided into
a. Password management
b. Resource management
Profiles are assigned to users to control access to resources and also to provide enhanced security while logging into the database.
The following are the parameters for password policy management:
a. FAILED_LOGIN_ATTEMPTS – specifies how many times a user can fail to login to the database.
b. PASSWORD_LOCK_TIME – a user who exceeds failed_login_attempts will be locked. This parameter specifies for how much time the account will stay locked; it will get unlocked automatically after that time. The DBA can also unlock it manually.
c. PASSWORD_LIFE_TIME – specifies after how many days a user needs to change the password.
d. PASSWORD_GRACE_TIME – specifies the grace period for the user to change the password. If the user still fails to change the password even after the grace time, the account will be locked and only the DBA can manually unlock it.
e. PASSWORD_REUSE_TIME – specifies after how many days a user can reuse the same password.
f. PASSWORD_REUSE_MAX – specifies how many times previous passwords can be used again.
g. PASSWORD_VERIFY_FUNCTION – defines rules for setting a new password, like the password should be 8 characters long, should contain alphanumeric values etc.
The following parameters are used for resource management:
a. SESSIONS_PER_USER – specifies how many concurrent sessions can be opened.
b. IDLE_TIME – specifies how much time a user can stay idle on the database (without doing any work). The session will be killed if it crosses the idle_time value, but the status in V$SESSION will be marked as SNIPED. Sniped sessions still hold resources, which is a burden to the OS.
c. CONNECT_TIME – specifies how much time a user can stay in the database.
COMMANDS:
# To create a profile
SQL> create profile my_profile limit
failed_login_attempts 3
password_lock_time 1/24/60
sessions_per_user 1
idle_time 5;
# To assign a profile to user
SQL> alter user scott profile my_profile;
# To alter a profile value
SQL> alter profile my_profile limit sessions_per_user 3;
# To create default password verify function
SQL> @$ORACLE_HOME/rdbms/admin/utlpwdmg.sql
Note: sessions terminated because of idle time are marked as SNIPED in v$session and the DBA needs to manually kill the related OS process to clear the session.
# To kill a session
SQL> select sid,serial# from v$session where username='SCOTT';
SQL> alter system kill session 'sid,serial#' immediate;
Note: Resource management parameters are effective only if RESOURCE_LIMIT is set to TRUE.
# To check and change resource_limit value
SQL> show parameter resource_limit
SQL> alter system set resource_limit=TRUE scope=both;
Note: from 11g onwards passwords for all users are case-sensitive.
MATERIALIZED VIEWS:
1. It is an object used to pull a remote database's data frequently at a specified time, which is called refreshing the data using materialized views
2. Snapshot is the object which was used to do the same till 8i, but the disadvantage is the time constraint in pulling a huge no. of rows
3. MV uses an MV log to store information on already transferred rows. The MVLOG will store the rowids of table rows, which helps in further refreshes
4. MV should be created in the database where we store the data and the MVLOG will be created automatically in the remote database
5. MVLOG is a table, not a file, and its name will always start with MLOG$
6. MV refresh can happen in the following three modes
a. Complete – pulling the entire data
b. Fast – pulling non-transferred rows
c. Force – it will do a fast refresh and in case of any failure, it will go for a complete refresh
7. When MV refresh is happening very slowly, check the size of the table and compare that with the MVLOG size
8. If the MVLOG size is more than the table size, then drop and recreate only the MVLOG
Note: A complete refresh is required after the recreation of the MVLOG
Note: we can use "refresh fast on commit" in order to transfer the data to the remote database without waiting
# To check the table size
SQL> select sum(bytes/1024/1024) from dba_segments where segment_name='EMP';
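A minimal sketch of the materialized view setup described above, with illustrative object names, db link and refresh interval:

```sql
-- Fast-refreshing MV pulling data over a database link
CREATE MATERIALIZED VIEW emp_mv
REFRESH FAST
START WITH SYSDATE NEXT SYSDATE + 1/24   -- refresh every hour
AS SELECT * FROM emp@remote_db;

-- MV log on the master table (stores rowids of changed rows for fast refresh)
CREATE MATERIALIZED VIEW LOG ON emp WITH ROWID;
```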
Creating a new Listener manually:
1. Copy the existing entry to the end of the listener.ora file
SID_LIST_LISTENER =(SID_LIST =(SID_DESC =(SID_NAME = PLSExtProc)(ORACLE_HOME = /u02)(PROGRAM = extproc)))
LISTENER =(DESCRIPTION_LIST =(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = server1.abcd.com)(PORT = 1521))(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))))
2. Remove the line mentioning "PROTOCOL = IPC" in the listener description in listener.ora
3. Change the listener name, host and port no. to appropriate values
4. Change the listener name in the first line of SID_LIST
5. Change SID_NAME and ORACLE_HOME and remove the EXTPROC line; when finished it should look like below