Expert Oracle Database Architecture, Third Edition (2014)

Setting Up Your Environment

In this section, I will cover how to set up an environment capable of executing the examples in this book. Specifically:

·     How to set up the EODA account used for many of the examples in this book

·     How to set up the SCOTT/TIGER demonstration schema properly

·     The environment you need to have up and running

·     Configuring AUTOTRACE, a SQL*Plus facility

·     Installing StatsPack

·     Installing and running runstats, and other custom utilities used throughout the book

·     The coding conventions I use in this book

All of the non-Oracle supplied scripts are available for download from the web site. If you download the scripts, there will be a chNN folder that contains the scripts for each chapter (where NN is the number of the chapter). The ch00 folder contains the scripts listed here in the Setting Up Your Environment section.
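The folder naming is easy to generate mechanically. This is purely illustrative (the folder layout itself comes from the download; only the zero-padding formatting is shown here):

```python
# Build the chNN folder names used by the downloadable scripts:
# NN is the zero-padded chapter number; ch00 holds the setup scripts.
def chapter_folder(n: int) -> str:
    return f"ch{n:02d}"

print(chapter_folder(0))   # ch00 (the Setting Up Your Environment scripts)
print(chapter_folder(7))   # ch07
```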

Setting Up the EODA Schema

The EODA user is used for most of the examples in this book. This is simply a schema that has been granted the DBA role and granted execute and select on certain objects owned by SYS:

connect / as sysdba
define username=eoda
define usernamepwd=foo
create user &&username identified by &&usernamepwd;
grant dba to &&username;
grant execute on dbms_stats to &&username;
grant select on V_$STATNAME to &&username;
grant select on V_$MYSTAT   to &&username;
grant select on V_$LATCH    to &&username;
grant select on V_$TIMER    to &&username;
conn &&username/&&usernamepwd

You can set up whatever user you want to run the examples in this book. I picked the username EODA simply because it’s an acronym for the title of the book.

Setting Up the SCOTT/TIGER Schema

The SCOTT/TIGER schema will often already exist in your database. It is generally included during a typical installation, but it is not a mandatory component of the database. You may install the SCOTT example schema into any database account; there is nothing magic about using the SCOTT account. You could install the EMP/DEPT tables directly into your own database account if you wish.

Many of my examples in this book draw on the tables in the SCOTT schema. If you would like to be able to work along with them, you will need these tables. If you are working on a shared database, it would be advisable to install your own copy of these tables in some account other than SCOTT to avoid side effects caused by other users mucking about with the same data.

Executing the Script

To create the SCOTT demonstration tables, simply do the following:

·     cd $ORACLE_HOME/sqlplus/demo

·     run demobld.sql when connected as any user

Note  In Oracle 10g and above, you must install the demonstration subdirectories from the installation media. I have reproduced the necessary components of demobld.sql as well.

The demobld.sql script will create and populate five tables. When it is complete, it exits SQL*Plus automatically, so don’t be surprised when SQL*Plus disappears after running the script—it’s supposed to do that.

The standard demo tables do not have any referential integrity defined on them. Some of my examples rely on them having referential integrity. After you run demobld.sql, it is recommended you also execute the following:

alter table emp add constraint emp_pk primary key(empno);
alter table dept add constraint dept_pk primary key(deptno);
alter table emp add constraint emp_fk_dept foreign key(deptno) references dept;
alter table emp add constraint emp_fk_emp foreign key(mgr) references emp;

This finishes off the installation of the demonstration schema. If you would like to drop this schema at any time to clean up, you can simply execute $ORACLE_HOME/sqlplus/demo/demodrop.sql. This will drop the five tables and exit SQL*Plus.

Tip  You can also find the SQL to create and drop the SCOTT user in the $ORACLE_HOME/rdbms/admin/utlsampl.sql script.

Creating the Schema Without the Script

In the event you do not have access to demobld.sql, the following is sufficient to run the examples in this book:

CREATE TABLE DEPT
(DEPTNO NUMBER(2),
 DNAME VARCHAR2(14),
 LOC VARCHAR2(13) );

INSERT INTO DEPT VALUES (10, 'ACCOUNTING', 'NEW YORK');
INSERT INTO DEPT VALUES (20, 'RESEARCH',   'DALLAS');
INSERT INTO DEPT VALUES (30, 'SALES',      'CHICAGO');
INSERT INTO DEPT VALUES (40, 'OPERATIONS', 'BOSTON');

CREATE TABLE EMP
(EMPNO NUMBER(4) NOT NULL,
 ENAME VARCHAR2(10),
 JOB VARCHAR2(9),
 MGR NUMBER(4),
 HIREDATE DATE,
 SAL NUMBER(7, 2),
 COMM NUMBER(7, 2),
 DEPTNO NUMBER(2) );

INSERT INTO EMP VALUES (7369, 'SMITH',  'CLERK',     7902,
TO_DATE('17-DEC-1980', 'DD-MON-YYYY'),  800, NULL, 20);
INSERT INTO EMP VALUES (7499, 'ALLEN',  'SALESMAN',  7698,
TO_DATE('20-FEB-1981', 'DD-MON-YYYY'), 1600,  300, 30);
INSERT INTO EMP VALUES (7521, 'WARD',   'SALESMAN',  7698,
TO_DATE('22-FEB-1981', 'DD-MON-YYYY'), 1250,  500, 30);
INSERT INTO EMP VALUES (7566, 'JONES',  'MANAGER',   7839,
TO_DATE('2-APR-1981', 'DD-MON-YYYY'),  2975, NULL, 20);
INSERT INTO EMP VALUES (7654, 'MARTIN', 'SALESMAN',  7698,
TO_DATE('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
INSERT INTO EMP VALUES (7698, 'BLAKE',  'MANAGER',   7839,
TO_DATE('1-MAY-1981', 'DD-MON-YYYY'),  2850, NULL, 30);
INSERT INTO EMP VALUES (7782, 'CLARK',  'MANAGER',   7839,
TO_DATE('9-JUN-1981', 'DD-MON-YYYY'),  2450, NULL, 10);
INSERT INTO EMP VALUES (7788, 'SCOTT',  'ANALYST',   7566,
TO_DATE('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);
INSERT INTO EMP VALUES (7839, 'KING',   'PRESIDENT', NULL,
TO_DATE('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);
INSERT INTO EMP VALUES (7844, 'TURNER', 'SALESMAN',  7698,
TO_DATE('8-SEP-1981', 'DD-MON-YYYY'),  1500,    0, 30);
INSERT INTO EMP VALUES (7876, 'ADAMS',  'CLERK',     7788,
TO_DATE('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);
INSERT INTO EMP VALUES (7900, 'JAMES',  'CLERK',     7698,
TO_DATE('3-DEC-1981', 'DD-MON-YYYY'),   950, NULL, 30);
INSERT INTO EMP VALUES (7902, 'FORD',   'ANALYST',   7566,
TO_DATE('3-DEC-1981', 'DD-MON-YYYY'),  3000, NULL, 20);
INSERT INTO EMP VALUES (7934, 'MILLER', 'CLERK',     7782,
TO_DATE('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 10);

COMMIT;

If you create the schema by executing the preceding commands, do remember to go back to the previous subsection and execute the commands to create the constraints.

Setting Your Environment

Most of the examples in this book are designed to run 100 percent in the SQL*Plus environment. Other than SQL*Plus, there is nothing else to set up and configure. I can make a suggestion, however, on using SQL*Plus. Almost all of the examples in this book use DBMS_OUTPUT in some fashion. For DBMS_OUTPUT to work, the following SQL*Plus command must be issued:

SQL> set serveroutput on

If you are like me, typing this in each and every time would quickly get tiresome. Fortunately, SQL*Plus allows us to set up a login.sql file, a script that is executed each and every time we start SQL*Plus. Further, it allows us to set an environment variable, SQLPATH, so that it can find this login.sql script, no matter what directory it is in.

The following is the login.sql script I use for all examples in this book:

define _editor=vi
set serveroutput on size unlimited
set trimspool on
set long 5000
set linesize 100
set pagesize 9999
column plan_plus_exp format a80
set sqlprompt '&_user.@&_connect_identifier.> '

An annotated version of this file is as follows:

·     define _editor=vi: Sets the default editor SQL*Plus uses. You may set it to your favorite text editor (not a word processor), such as Notepad or emacs.

·     set serveroutput on size unlimited: Enable DBMS_OUTPUT to be on by default (hence we don’t have to type set serveroutput on every time). Also set the default buffer size to be as large as possible.

·     set trimspool on: When spooling text, lines will be blank-trimmed and not fixed width. If this is set off (the default), spooled lines will be as wide as your linesize setting.

·     set long 5000: Sets the default number of bytes displayed when selecting LONG and CLOB columns.

·     set linesize 100: Set the width of the lines displayed by SQL*Plus to be 100 characters.

·     set pagesize 9999: Set the pagesize, which controls how frequently SQL*Plus prints out headings, to a big number (we get one set of headings per page).

·     column plan_plus_exp format a80: This sets the default width of the explain plan output we receive with AUTOTRACE. a80 is generally wide enough to hold the full plan.

The last bit in the login.sql sets up my SQL*Plus prompt for me:

set sqlprompt '&_user.@&_connect_identifier.> '

That makes my prompt look like the following, so that I know who I am as well as where I am:

EODA@ORA12CR1>

Setting Up AUTOTRACE in SQL*Plus

AUTOTRACE is a facility within SQL*Plus to show us the explain plan of the queries we’ve executed, and the resources they used. This book makes extensive use of this facility. There is more than one way to get AUTOTRACE configured.

Initial Setup

AUTOTRACE relies on a table named PLAN_TABLE being available. Starting with Oracle 10g, the SYS schema contains a global temporary table named PLAN_TABLE$. All required privileges on this table have been granted to PUBLIC, and there is a public synonym named PLAN_TABLE that points to SYS.PLAN_TABLE$. This means any user can access this table.

Note  If you’re using a very old version of Oracle, you can manually create the PLAN_TABLE by executing the $ORACLE_HOME/rdbms/admin/utlxplan.sql script.

You must also create and grant the PLUSTRACE role:

·     cd $ORACLE_HOME/sqlplus/admin

·     log into SQL*Plus as SYS or as a user granted the SYSDBA privilege

·     run @plustrce

·     run GRANT PLUSTRACE TO PUBLIC


You can replace PUBLIC in the GRANT command with some user if you want.

Controlling the Report

You can automatically get a report on the execution path used by the SQL optimizer and the statement execution statistics. The report is generated after successful SQL DML (that is, SELECT, DELETE, UPDATE, MERGE, and INSERT) statements. It is useful for monitoring and tuning the performance of these statements.

You can control the report by setting the AUTOTRACE system variable.

·     SET AUTOTRACE OFF: No AUTOTRACE report is generated. This is the default.

·     SET AUTOTRACE ON EXPLAIN: The AUTOTRACE report shows only the optimizer execution path.

·     SET AUTOTRACE ON STATISTICS: The AUTOTRACE report shows only the SQL statement execution statistics.

·     SET AUTOTRACE ON: The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.

·     SET AUTOTRACE TRACEONLY: Like SET AUTOTRACE ON, but suppresses the printing of the user’s query output, if any.

·     SET AUTOTRACE TRACEONLY EXPLAIN: Like SET AUTOTRACE ON, but suppresses the printing of the user’s query output (if any), and also suppresses the execution statistics.

Setting Up StatsPack

StatsPack is designed to be installed when connected as SYS (CONNECT / AS SYSDBA) or as a user granted the SYSDBA privilege. In many installations, installing StatsPack will be a task that you must ask the DBA or administrators to perform.

Installing StatsPack is trivial. You simply run @spcreate.sql. This script will be found in $ORACLE_HOME/rdbms/admin and should be executed when connected as SYS via SQL*Plus.

You’ll need to know the following three pieces of information before running the spcreate.sql script:

·     The password you would like to use for the PERFSTAT schema that will be created

·     The default tablespace you would like to use for PERFSTAT

·     The temporary tablespace you would like to use for PERFSTAT

Running the script will look something like this:

$ sqlplus / as sysdba
SQL*Plus: Release Production on Fri May 23 15:45:05 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SYS@ORA12CR1> @spcreate
Choose the PERFSTAT user's password
Not specifying a password will result in the installation FAILING
Enter value for perfstat_password:
… <output omitted for brevity> …

The script will prompt you for the needed information as it executes. In the event you make a typo or inadvertently cancel the installation, you should use spdrop.sql found in $ORACLE_HOME/rdbms/admin to remove the user and installed views prior to attempting another install of StatsPack. The StatsPack installation will create a file called spcpkg.lis. You should review this file for any possible errors that might have occurred. The user, views, and PL/SQL code should install cleanly, however, as long as you supplied valid tablespace names (and didn’t already have a user PERFSTAT).

Tip  StatsPack is documented in the following text file: $ORACLE_HOME/rdbms/admin/spdoc.txt.

Custom Scripts

In this section, I will describe the requirements (if any) needed by the various scripts used throughout this book. We will also investigate the code behind the scripts.


Runstats

Runstats is a tool I developed to compare two different methods of doing the same thing and show which one is superior. You supply the two different methods and Runstats does the rest. Runstats simply measures three key things:

·     Wall clock or elapsed time: This is useful to know, but not the most important piece of information.

·     System statistics: This shows, side by side, how many times each approach did something (such as a parse call, for example) and the difference between the two.

·     Latching: This is the key output of this report.

As we’ll see in this book, latches are a type of lightweight lock. Locks are serialization devices. Serialization devices inhibit concurrency. Applications that inhibit concurrency are less scalable, can support fewer users, and require more resources. Our goal is always to build applications that have the potential to scale—ones that can service one user as well as 1,000 or 10,000. The less latching we incur in our approaches, the better off we will be. I might choose an approach that takes longer to run on the wall clock but that uses 10 percent of the latches. I know that the approach that uses fewer latches will scale substantially better than the approach that uses more latches.

Runstats is best used in isolation; that is, on a single-user database. We will be measuring statistics and latching (locking) activity that result from our approaches. We do not want other sessions to contribute to the system’s load or latching while this is going on. A small test database is perfect for these sorts of tests. I frequently use my desktop PC or laptop, for example.

Note  I believe all developers should have a test bed database they control to try ideas on, without needing to ask a DBA to do something all of the time. Developers definitely should have a database on their desktop, given that the licensing for the personal developer version is simply “use it to develop and test with, do not deploy, and you can just have it.” This way, there is nothing to lose! Also, I’ve taken some informal polls at conferences and seminars. Virtually every DBA out there started as a developer! The experience and training developers could get by having their own database—being able to see how it really works—pays dividends in the long run.

In order to use Runstats, you need to set up access to several V$ views, create a table to hold the statistics, and create the Runstats package. You will need access to four V$ tables (those magic, dynamic performance tables): V$STATNAME, V$MYSTAT, V$TIMER and V$LATCH. Here is a view I use:

create or replace view stats
as select 'STAT...' || name, b.value
      from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
    union all
    select 'LATCH.' || name,  gets
      from v$latch
    union all
    select 'STAT...Elapsed Time', hsecs from v$timer;

Note  The actual object names you need to be granted access to will be V_$STATNAME, V_$MYSTAT, and so on; that is, the object name to use in the grant will start with V_$, not V$. The V$ name is a synonym that points to the underlying view with a name that starts with V_$. So, V$STATNAME is a synonym that points to the view V_$STATNAME. You need to be granted access to the view.

You can either have SELECT on V$STATNAME, V$MYSTAT, V$TIMER, and V$LATCH granted directly to you (so you can create the view yourself) or you can have someone that does have SELECT on those objects create the view for you and grant SELECT privileges on the view to you.

Once you have that set up, all you need is a small table to collect the statistics:

create global temporary table run_stats
( runid varchar2(15),
  name varchar2(80),
  value int )
on commit preserve rows;

Last, you need to create the package that is Runstats. It contains three simple API calls:

·     RS_START (Runstats Start) to be called at the beginning of a Runstats test

·     RS_MIDDLE to be called in the middle, as you might have guessed

·     RS_STOP to finish off and print the report

The specification is as follows:

EODA@ORA12CR1> create or replace package runstats_pkg
  2  as
  3      procedure rs_start;
  4      procedure rs_middle;
  5      procedure rs_stop( p_difference_threshold in number default 0 );
  6  end;
  7  /
Package created.

The parameter, p_difference_threshold, is used to control the amount of data printed at the end. Runstats collects statistics and latching information for each run, and then prints a report of how much of a resource each test (each approach) used and the difference between them. You can use this input parameter to see only the statistics and latches that had a difference greater than this number. By default, this is zero, and you see all of the outputs.
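The bookkeeping rs_stop performs can be sketched with a toy model. The numbers below are made up purely for illustration; the real package reads its snapshots from the STATS view:

```python
# Toy model of the rs_stop arithmetic: three snapshots of cumulative
# counters (captured by rs_start, rs_middle, rs_stop) yield per-run costs,
# and the threshold suppresses uninteresting rows.
before = {"LATCH.shared pool": 100, "STAT...redo size": 5_000}
after1 = {"LATCH.shared pool": 150, "STAT...redo size": 9_000}
after2 = {"LATCH.shared pool": 400, "STAT...redo size": 9_100}

threshold = 60  # plays the role of p_difference_threshold
for name in before:
    run1 = after1[name] - before[name]   # resources used by the first approach
    run2 = after2[name] - after1[name]   # resources used by the second approach
    diff = run2 - run1
    if abs(diff) > threshold:            # report only rows exceeding the threshold
        print(name, run1, run2, diff)
```

With a threshold of zero you would see every row, exactly as the package behaves by default.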

Next, we’ll look at the package body procedure by procedure. The package begins with some global variables. These will be used to record the elapsed times for our runs:

EODA@ORA12CR1> create or replace package body runstats_pkg
  2  as
  4  g_start number;
  5  g_run1 number;
  6  g_run2 number;

Next is the RS_START routine. This will simply clear out our statistics holding table and then populate it with the “before” statistics and latches. It will then capture the current timer value, a clock of sorts that we can use to compute elapsed times in hundredths of seconds:

  8  procedure rs_start
  9  is
 10  begin
 11    delete from run_stats;
 13    insert into run_stats
 14    select 'before', stats.* from stats;
 16    g_start := dbms_utility.get_cpu_time;
 17  end;

Next is the RS_MIDDLE routine. This procedure simply records the elapsed time for the first run of our test in G_RUN1. Then it inserts the current set of statistics and latches. If we were to subtract these values from the ones we saved previously in RS_START, we could discover how many latches the first method used, how many cursors (a statistic) it used, and so on.

Last, it records the start time for our next run:

19  procedure rs_middle
 20  is
 21  begin
 22    g_run1 := (dbms_utility.get_cpu_time-g_start);
 24    insert into run_stats
 25    select 'after 1', stats.* from stats;
 27    g_start := dbms_utility.get_cpu_time;
 28  end;

The next and final routine in this package is the RS_STOP routine. Its job is to print out the aggregate CPU times for each run and then print out the difference between the statistic/latching values for each of the two runs (only printing out those that exceed the threshold):

 30  procedure rs_stop(p_difference_threshold in number default 0)
 31  is
 32  begin
 33    g_run2 := (dbms_utility.get_cpu_time-g_start);
 35    dbms_output.put_line( 'Run1 ran in ' || g_run1 || ' cpu hsecs' );
 36    dbms_output.put_line( 'Run2 ran in ' || g_run2 || ' cpu hsecs' );
 38    if ( g_run2 <> 0 )
 39    then
 40      dbms_output.put_line
 41      ( 'run 1 ran in ' || round(g_run1/g_run2*100,2) ||
 42      '% of the time' );
 43    end if;
 44    dbms_output.put_line( chr(9) );
 46    insert into run_stats
 47    select 'after 2', stats.* from stats;
 49    dbms_output.put_line
 50    ( rpad( 'Name', 30 ) || lpad( 'Run1', 16 ) ||
 51    lpad( 'Run2', 16 ) || lpad( 'Diff', 16 ) );
 53    for x in
 54    ( select rpad(, 30 ) ||
 55      to_char( b.value-a.value, '999,999,999,999' ) ||
 56      to_char( c.value-b.value, '999,999,999,999' ) ||
 57      to_char( ( (c.value-b.value)-(b.value-a.value)),
 58      '999,999,999,999' ) data
 59      from run_stats a, run_stats b, run_stats c
 60      where =
 61      and =
 62      and a.runid = 'before'
 63      and b.runid = 'after 1'
 64      and c.runid = 'after 2'
 66      and abs( (c.value-b.value) - (b.value-a.value) )
 67      > p_difference_threshold
 68      order by abs( (c.value-b.value)-(b.value-a.value))
 69    ) loop
 70    dbms_output.put_line( );
 71    end loop;
 73    dbms_output.put_line( chr(9) );
 74    dbms_output.put_line
 75    ( 'Run1 latches total versus runs -- difference and pct' );
 76    dbms_output.put_line
 77    ( lpad( 'Run1', 14 ) || lpad( 'Run2', 19 ) ||
 78      lpad( 'Diff', 18 ) || lpad( 'Pct', 11 ) );
 80    for x in
 81    ( select to_char( run1, '9,999,999,999,999' ) ||
 82      to_char( run2, '9,999,999,999,999' ) ||
 83      to_char( diff, '9,999,999,999,999' ) ||
 84      to_char( round( run1/decode( run2, 0, to_number(0), run2) *100,2 ), '99,999.99' ) || '%' data
 85      from ( select sum(b.value-a.value) run1, sum(c.value-b.value) run2,
 86      sum( (c.value-b.value)-(b.value-a.value)) diff
 87      from run_stats a, run_stats b, run_stats c
 88      where =
 89      and =
 90      and a.runid = 'before'
 91      and b.runid = 'after 1'
 92      and c.runid = 'after 2'
 93      and like 'LATCH%'
 94      )
 95    ) loop
 96    dbms_output.put_line( );
 97    end loop;
 98  end;
100  end;
101  /
Package body created.

Now you are ready to use Runstats. By way of example, we’ll demonstrate how to use Runstats to see which is more efficient, a single bulk INSERT versus row-by-row processing. We’ll start by setting up two tables into which we’ll insert 1,000,000 rows (the BIG_TABLE table creation script is provided later in this section):

EODA@ORA12CR1> create table t1
  2  as
  3  select * from big_table
  4  where 1=0;
Table created.
EODA@ORA12CR1> create table t2
  2  as
  3  select * from big_table
  4  where 1=0;
Table created.

And now we are ready to perform the first method of inserting the records, using a single SQL statement. We start by calling RUNSTATS_PKG.RS_START:

EODA@ORA12CR1> exec runstats_pkg.rs_start;
PL/SQL procedure successfully completed.
EODA@ORA12CR1> insert into t1
  2  select *
  3    from big_table
  4   where rownum <= 1000000;
1000000 rows created.
EODA@ORA12CR1> commit;
Commit complete.

Now we are ready to perform the second method, row-by-row insertion of data:

EODA@ORA12CR1> exec runstats_pkg.rs_middle;
PL/SQL procedure successfully completed.
EODA@ORA12CR1> begin
  2          for x in ( select *
  3                       from big_table
  4                      where rownum <= 1000000 )
  5          loop
  6                  insert into t2 values X;
  7          end loop;
  8          commit;
  9  end;
 10  /
PL/SQL procedure successfully completed.

And finally, we’ll generate the report:

EODA@ORA12CR1> exec runstats_pkg.rs_stop(1000000)
Run1 ran in 119 cpu hsecs
Run2 ran in 3376 cpu hsecs
run 1 ran in 3.52% of the time
Name                                      Run1            Run2            Diff
STAT...execute count                        29       1,000,032       1,000,003
STAT...opened cursors cumulati              29       1,000,035       1,000,006
LATCH.shared pool                          582       1,001,466       1,000,884
STAT...session logical reads           148,818       1,158,009       1,009,191
STAT...recursive calls                     183       1,010,218       1,010,035
STAT...db block changes                 95,964       2,074,283       1,978,319
LATCH.cache buffers chains             443,882       5,462,356       5,018,474
STAT...undo change vector size       3,620,400      67,938,496      64,318,096
STAT...KTFB alloc space (block     109,051,904     176,160,768      67,108,864
STAT...redo size                   105,698,540     384,717,388     279,018,848
STAT...logical read bytes from   1,114,251,264   9,300,803,584   8,186,552,320
Run1 latches total versus runs -- difference and pct
Run1               Run2              Diff        Pct
555,593         6,795,317         6,239,724      8.18%
PL/SQL procedure successfully completed.

This confirms you have the RUNSTATS_PKG package installed and shows why, whenever possible, you should use a single SQL statement rather than a bunch of procedural code when developing applications!
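As a quick sanity check, the summary lines of the report can be recomputed from the values it printed; this mirrors the round(g_run1/g_run2*100,2) expression and the latch-total query in rs_stop:

```python
# Recompute the report's summary arithmetic from the values it printed
run1_cpu, run2_cpu = 119, 3376                      # cpu hsecs for each run
pct_time = round(run1_cpu / run2_cpu * 100, 2)
print(pct_time)                                     # 3.52 -> "run 1 ran in 3.52% of the time"

run1_latches, run2_latches = 555_593, 6_795_317     # latch totals line
print(run2_latches - run1_latches)                  # 6239724 -> the Diff column
print(round(run1_latches / run2_latches * 100, 2))  # 8.18 -> the Pct column
```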


Mystat

The mystat.sql script and its companion, mystat2.sql, are used to show the increase in some Oracle “statistic” before and after some operation. The mystat.sql script captures the beginning value of some statistic:

set echo off
set verify off
column value new_val V
define S="&1"
set autotrace off
select, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower( = lower('&S')
/
set echo on

And mystat2.sql reports the difference (&V is populated by running the first script, mystat.sql—it uses the SQL*Plus NEW_VAL feature for that. It contains the last VALUE selected from the preceding query):

set echo off
set verify off
select, b.value V, to_char(b.value-&V,'999,999,999,999') diff
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower( = lower('&S')
/
set echo on

For example, to see how much redo is generated by an UPDATE statement, we can do the following:

EODA@ORA12CR1> @mystat "redo size"
EODA@ORA12CR1> set echo off
NAME                                VALUE
------------------------------ ----------
redo size                       491167892
EODA@ORA12CR1> update big_table set owner = lower(owner)
  2  where rownum <= 1000;

1000 rows updated.

EODA@ORA12CR1> @mystat2
EODA@ORA12CR1> set echo off

NAME                                    V DIFF
------------------------------ ---------- ----------------
redo size                       491265640           97,748

This shows our UPDATE of 1,000 rows generated 97,748 bytes of redo.
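The DIFF column in mystat2.sql is nothing more than the captured before value (held in &V by the NEW_VAL feature) subtracted from the after value:

```python
# Verify the mystat2.sql arithmetic using the two "redo size" snapshots above
before, after = 491_167_892, 491_265_640
diff = after - before
print(f"{diff:,}")   # 97,748 -> matches the DIFF column of the report
```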


Show_Space

The SHOW_SPACE routine prints detailed space utilization information for database segments. Here is the interface to it:

EODA@ORA12CR1> desc show_space
PROCEDURE show_space
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 P_SEGNAME                      VARCHAR2                IN
 P_OWNER                        VARCHAR2                IN     DEFAULT
 P_TYPE                         VARCHAR2                IN     DEFAULT
 P_PARTITION                    VARCHAR2                IN     DEFAULT

The arguments are as follows:

·     P_SEGNAME: Name of the segment—the table or index name, for example.

·     P_OWNER: Defaults to the current user, but you can use this routine to look at some other schema.

·     P_TYPE: Defaults to TABLE and represents the type of object you are looking at. For example, select distinct segment_type from dba_segments lists valid segment types.

·     P_PARTITION: Name of the partition when you show the space for a partitioned object. SHOW_SPACE shows space for only a partition at a time.

The output of this routine looks as follows, when the segment resides in an Automatic Segment Space Management (ASSM) tablespace:

EODA@ORA12CR1> exec show_space('BIG_TABLE');
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................          14,469
Total Blocks............................          15,360
Total Bytes.............................     125,829,120
Total MBytes............................             120
Unused Blocks...........................             728
Unused Bytes............................       5,963,776
Last Used Ext FileId....................               4
Last Used Ext BlockId...................          43,145
Last Used Block.........................             296

PL/SQL procedure successfully completed.

The items reported are as follows:

·     Unformatted Blocks: The number of blocks that are allocated to the table below the high-water mark, but have not been used. Add unformatted and unused blocks together to get a total count of blocks allocated to the table but never used to hold data in an ASSM object.

·     FS1 Blocks-FS4 Blocks: Formatted blocks with data. The ranges of numbers after their name represent the emptiness of each block. For example, (0-25) is the count of blocks that are between 0 and 25 percent empty.

·     Full Blocks: The number of blocks that are so full that they are no longer candidates for future inserts.

·     Total Blocks, Total Bytes, Total Mbytes: The total amount of space allocated to the segment measured in database blocks, bytes, and megabytes.

·     Unused Blocks, Unused Bytes: Represents a portion of the amount of space never used. These are blocks allocated to the segment, but are currently above the high-water mark of the segment.

·     Last Used Ext FileId: The file ID of the file that contains the last extent that contains data.

·     Last Used Ext BlockId: The block ID of the beginning of the last extent; the block ID within the last-used file.

·     Last Used Block: The block ID offset of the last block used in the last extent.
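The byte figures in the report are simply block counts multiplied by the database block size. Assuming a common 8KB block size (an assumption; check DB_BLOCK_SIZE on your database), the sample ASSM report above is internally consistent:

```python
block_size = 8192                          # assumed 8KB database block size
total_blocks, unused_blocks = 15_360, 728  # from the sample report
print(total_blocks * block_size)                   # 125829120 -> Total Bytes
print(total_blocks * block_size // (1024 * 1024))  # 120 -> Total MBytes
print(unused_blocks * block_size)                  # 5963776 -> Unused Bytes
```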

When you use SHOW_SPACE to look at objects in Manual Segment Space Managed tablespaces, the output resembles this:

EODA@ORA12CR1> exec show_space( 'BIG_TABLE' )
Free Blocks.............................               1
Total Blocks............................         147,456
Total Bytes.............................   1,207,959,552
Total MBytes............................           1,152
Unused Blocks...........................           1,616
Unused Bytes............................      13,238,272
Last Used Ext FileId....................               7
Last Used Ext BlockId...................         139,273
Last Used Block.........................           6,576

PL/SQL procedure successfully completed.

The only difference is the Free Blocks item at the beginning of the report. This is a count of the blocks in the first freelist group of the segment. My script reports only on this freelist group. You would need to modify the script to accommodate multiple freelist groups.

The commented code follows. This utility is a simple layer on top of the DBMS_SPACE API in the database.

create or replace procedure show_space
( p_segname in varchar2,
  p_owner   in varchar2 default user,
  p_type    in varchar2 default 'TABLE',
  p_partition in varchar2 default NULL )
-- this procedure uses authid current user so it can query DBA_*
-- views using privileges from a ROLE and so it can be installed
-- once per database, instead of once per user that wants to use it
authid current_user
as
    l_free_blks                 number;
    l_total_blocks              number;
    l_total_bytes               number;
    l_unused_blocks             number;
    l_unused_bytes              number;
    l_LastUsedExtFileId         number;
    l_LastUsedExtBlockId        number;
    l_LAST_USED_BLOCK           number;
    l_segment_space_mgmt        varchar2(255);
    l_unformatted_blocks number;
    l_unformatted_bytes number;
    l_fs1_blocks number; l_fs1_bytes number;
    l_fs2_blocks number; l_fs2_bytes number;
    l_fs3_blocks number; l_fs3_bytes number;
    l_fs4_blocks number; l_fs4_bytes number;
    l_full_blocks number; l_full_bytes number;
    -- inline procedure to print out numbers nicely formatted
    -- with a simple label
    procedure p( p_label in varchar2, p_num in number )
    is
    begin
        dbms_output.put_line( rpad(p_label,40,'.') ||
                              to_char(p_num,'999,999,999,999') );
    end;
begin
   -- this query is executed dynamically in order to allow this procedure
   -- to be created by a user who has access to DBA_SEGMENTS/TABLESPACES
   -- via a role as is customary.
   -- NOTE: at runtime, the invoker MUST have access to these two
   -- views!
   -- this query determines if the object is an ASSM object or not
   begin
      execute immediate
          'select ts.segment_space_management
             from dba_segments seg, dba_tablespaces ts
            where seg.segment_name      = :p_segname
              and (:p_partition is null or
                  seg.partition_name = :p_partition)
              and seg.owner = :p_owner
              and seg.tablespace_name = ts.tablespace_name'
             into l_segment_space_mgmt
            using p_segname, p_partition, p_partition, p_owner;
   exception
       when too_many_rows then
          dbms_output.put_line
          ( 'This must be a partitioned table, use p_partition => ');
          return;
   end;
   -- if the object is in an ASSM tablespace, we must use this API
   -- call to get space information; otherwise we use the FREE_BLOCKS
   -- API for user-managed segments
   if l_segment_space_mgmt = 'AUTO'
   then
      dbms_space.space_usage
      ( p_owner, p_segname, p_type, l_unformatted_blocks,
        l_unformatted_bytes, l_fs1_blocks, l_fs1_bytes,
        l_fs2_blocks, l_fs2_bytes, l_fs3_blocks, l_fs3_bytes,
        l_fs4_blocks, l_fs4_bytes, l_full_blocks, l_full_bytes, p_partition);
      p( 'Unformatted Blocks ', l_unformatted_blocks );
      p( 'FS1 Blocks (0-25)  ', l_fs1_blocks );
      p( 'FS2 Blocks (25-50) ', l_fs2_blocks );
      p( 'FS3 Blocks (50-75) ', l_fs3_blocks );
      p( 'FS4 Blocks (75-100)', l_fs4_blocks );
      p( 'Full Blocks        ', l_full_blocks );
   else
      dbms_space.free_blocks(
        segment_owner     => p_owner,
        segment_name      => p_segname,
        segment_type      => p_type,
        freelist_group_id => 0,
        free_blks         => l_free_blks);
      p( 'Free Blocks', l_free_blks );
   end if;
   -- and then the unused space API call to get the rest of the
   -- information
   dbms_space.unused_space
   ( segment_owner     => p_owner,
     segment_name      => p_segname,
     segment_type      => p_type,
     partition_name    => p_partition,
     total_blocks      => l_total_blocks,
     total_bytes       => l_total_bytes,
     unused_blocks     => l_unused_blocks,
     unused_bytes      => l_unused_bytes,
     LAST_USED_EXTENT_FILE_ID  => l_LastUsedExtFileId,
     LAST_USED_EXTENT_BLOCK_ID => l_LastUsedExtBlockId,
     LAST_USED_BLOCK           => l_LAST_USED_BLOCK );
     p( 'Total Blocks', l_total_blocks );
     p( 'Total Bytes', l_total_bytes );
     p( 'Total MBytes', trunc(l_total_bytes/1024/1024) );
     p( 'Unused Blocks', l_unused_blocks );
     p( 'Unused Bytes', l_unused_bytes );
     p( 'Last Used Ext FileId', l_LastUsedExtFileId );
     p( 'Last Used Ext BlockId', l_LastUsedExtBlockId );
     p( 'Last Used Block', l_LAST_USED_BLOCK );
end;
/
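Once the procedure is installed, invoking it from SQL*Plus is straightforward. The following session is only a sketch; the segment names and the figures reported will vary with your database, and PART_1 is a hypothetical partition name:

```sql
set serveroutput on

-- report space usage for a table owned by the current user
exec show_space( 'BIG_TABLE' )

-- for a partitioned table, name the partition and segment type
exec show_space( p_segname   => 'BIG_TABLE', -
                 p_partition => 'PART_1', -
                 p_type      => 'TABLE PARTITION' )
```

For a segment in an ASSM tablespace you'll see the FS1 through FS4 and Full Blocks lines; for a manual segment space managed tablespace you'll see the Free Blocks line instead.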


Big_Table

For examples throughout this book, I use a table called BIG_TABLE. Depending on which system I use, this table has between one record and four million records and varies in size from 200MB to 800MB. In all cases, the table structure is the same.

To create BIG_TABLE, I wrote a script that does the following:

·     Creates an empty table whose structure is based on the ALL_OBJECTS dictionary view, which is then used to populate BIG_TABLE.

·     Makes this table NOLOGGING. This is optional. I did it for performance. Using NOLOGGING mode for a test table is safe; you won’t use it in a production system, so features like Oracle Data Guard will not be enabled.

·     Populates the table by seeding it with the contents of ALL_OBJECTS and then iteratively inserting into itself, approximately doubling its size on each iteration.

·     Creates a primary key constraint on the table.

·     Gathers statistics.

To build the BIG_TABLE table, you can run the following script at the SQL*Plus prompt and pass in the number of rows you want in the table. The script will stop when it hits that number of rows.

create table big_table
as
select rownum id, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
       OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
       GENERATED, SECONDARY
  from all_objects
 where 1=0
/
alter table big_table nologging;

declare
  l_cnt  number;
  l_rows number := &numrows;
begin
  insert /*+ append */
  into big_table
  select rownum id, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
         OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
         GENERATED, SECONDARY
    from all_objects
   where rownum <= &numrows;
  l_cnt := sql%rowcount;
  commit;
  while (l_cnt < l_rows)
  loop
    insert /*+ APPEND */ into big_table
    select rownum+l_cnt, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
           OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
           GENERATED, SECONDARY
      from big_table a
     where rownum <= l_rows-l_cnt;
    l_cnt := l_cnt + sql%rowcount;
    commit;
  end loop;
end;
/

alter table big_table add constraint
big_table_pk primary key(id);

exec dbms_stats.gather_table_stats( user, 'BIG_TABLE', estimate_percent=> 1);
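Running the script from SQL*Plus might look like this (the file name big_table.sql is an assumption; substitute whatever name you saved the script under). Because each iteration's insert is capped at l_rows-l_cnt rows, the final row count matches the value you supply:

```sql
SQL> @big_table.sql
Enter value for numrows: 1000000

PL/SQL procedure successfully completed.

SQL> select count(*) from big_table;

  COUNT(*)
----------
   1000000
```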

I estimate baseline statistics on the table using a small (1 percent) sample. The index associated with the primary key will have its statistics computed automatically when it is created.

Coding Conventions

The one coding convention I use in this book that I would like to point out is how I name variables in PL/SQL code. For example, consider a package body like this:

create or replace package body my_pkg
as
   g_variable varchar2(25);

   procedure p( p_variable in varchar2 )
   is
      l_variable varchar2(25);
   begin
      null;
   end;
end;

Here I have three variables: a global package variable, G_VARIABLE; a formal parameter to the procedure, P_VARIABLE; and a local variable, L_VARIABLE. I name my variables after the scope they are contained in. All globals begin with G_, parameters with P_, and local variables with L_. The main reason for this is to distinguish PL/SQL variables from columns in a database table. For example, a procedure such as the following would always print out every row in the EMP table where ENAME is not null:

create procedure p( ENAME in varchar2 )
as
begin
   for x in ( select * from emp where ename = ENAME ) loop
      dbms_output.put_line( x.empno );
   end loop;
end;

SQL sees ename = ENAME, and compares the ENAME column to itself (of course). We could use ename = P.ENAME; that is, qualify the reference to the PL/SQL variable with the procedure name, but this is too easy to forget, leading to errors.
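Rewritten with the naming convention, the ambiguity disappears. This is a sketch of the same procedure with a P_ prefixed parameter; it prints only the rows that actually match:

```sql
create or replace procedure p( p_ename in varchar2 )
as
begin
   -- ename is unambiguously the column; p_ename is the parameter
   for x in ( select * from emp where ename = p_ename ) loop
      dbms_output.put_line( x.empno );
   end loop;
end;
```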

I just always name my variables after the scope. That way, I can easily distinguish parameters from local variables and global variables, in addition to removing any ambiguity with respect to column names and variable names.