Tuesday, August 30, 2011

Lock/Unlock OEM Grid Control Accounts From Excel Macro


As a complementary post to my previous blog post on Reporting All Oracle User Accounts Through OEM Grid Control, I have decided to post the code for an Excel VBA macro. It uses three buttons to call the specific OEM Grid Control URLs that let you lock, unlock or edit an Oracle user account through the OEM Grid Control user interface, but from a link on an Excel spreadsheet.

This sounds confusing, so maybe I should explain the reason behind it.
I needed to know about all Oracle user accounts throughout the Oracle landscape so that I could compare them to a dump of Active Directory accounts.
The AD accounts get locked when people leave, but the leavers process didn't always include the DBA on the CC list.
So, armed with the Excel spreadsheet of AD users, I needed to marry the list to the Oracle accounts and then go and lock the Oracle accounts where the AD account had been locked/expired.

Sound OK so far?
Good.  So once I've got the Excel sheet with the ~200 Oracle accounts I need to lock, here's how it could be done using OEM Grid Control (you can give the spreadsheet to your 1st line support people, if they have OEM access).

Three buttons on the spreadsheet: Lock, UnLock and Edit.
Just paste the code into the sheet's VBA section, change the text "<your_oem_server>", then add the three buttons and hook them up to the subroutines (CommandButton1 = Lock, CommandButton2 = UnLock, CommandButton3 = Edit).
I should mention that this was an Oracle Enterprise Manager Grid Control 10g implementation.

Enjoy.

Private Sub CommandButton1_Click()
' Call to open IE with the URL to lock the user based on the data on the current sheet on the current selected row.

Dim str_URLpart1 As String
Dim str_URLpart2 As String
Dim str_URLpart3 As String
Dim str_URLpart4 As String
Dim str_URLpart5 As String
Dim str_complete_URL As String
Dim str_sys As String
Dim str_user As String
Dim IE As Object

str_URLpart1 = "http://<your_oem_server>/em/console/database/security/user?oname="
str_URLpart2 = "&event=lockUser&cancelURL=%2Fem%2Fconsole%2Fdatabase%2FdatabaseObjectsSearch%3Fevent%3Dredisplay%26target%3D"
str_URLpart3 = "%26type%3Doracle_database%26objectType%3DUSER%26otype%3DUSER&backURL=%2Fem%2Fconsole%2Fdatabase%2FdatabaseObjectsSearch%3Fevent%3Dredisplay%26target%3D"
str_URLpart4 = "%26type%3Doracle_database%26objectType%3DUSER%26otype%3DUSER&otype=USER&target="
str_URLpart5 = "&type=oracle_database"
str_user = Range("D" & ActiveCell.Row).Value
str_sys = Range("A" & ActiveCell.Row).Value
str_complete_URL = str_URLpart1 & str_user & str_URLpart2 & str_sys & str_URLpart3 & str_sys & str_URLpart4 & str_sys & str_URLpart5

' Create InternetExplorer Object
Set IE = CreateObject("InternetExplorer.Application")

' Keep IE visible so you can see the OEM page as it loads.
IE.Visible = True

' Navigate to the OEM Grid Control URL built above.
IE.Navigate str_complete_URL

' Statusbar
Application.StatusBar = "URL is loading. Please wait..."

' Wait while IE loading...
Do While IE.Busy
Application.Wait DateAdd("s", 1, Now)
Loop

Application.StatusBar = ""

End Sub


Private Sub CommandButton2_Click()

' Call to open IE with the URL to unlock the user based on the data on the current sheet on the current selected row.

Dim str_URLpart1 As String
Dim str_URLpart2 As String
Dim str_URLpart3 As String
Dim str_URLpart4 As String
Dim str_URLpart5 As String
Dim str_complete_URL As String
Dim str_sys As String
Dim str_user As String
Dim IE As Object

str_URLpart1 = "http://<your_oem_server>/em/console/database/security/user?oname="
str_URLpart2 = "&event=unlockUser&cancelURL=%2Fem%2Fconsole%2Fdatabase%2FdatabaseObjectsSearch%3Fevent%3Dredisplay%26target%3D"
str_URLpart3 = "%26type%3Doracle_database%26objectType%3DUSER%26otype%3DUSER&backURL=%2Fem%2Fconsole%2Fdatabase%2FdatabaseObjectsSearch%3Fevent%3Dredisplay%26target%3D"
str_URLpart4 = "%26type%3Doracle_database%26objectType%3DUSER%26otype%3DUSER&otype=USER&target="
str_URLpart5 = "&type=oracle_database"
str_user = Range("D" & ActiveCell.Row).Value
str_sys = Range("A" & ActiveCell.Row).Value
str_complete_URL = str_URLpart1 & str_user & str_URLpart2 & str_sys & str_URLpart3 & str_sys & str_URLpart4 & str_sys & str_URLpart5

' Create InternetExplorer Object
Set IE = CreateObject("InternetExplorer.Application")

' Keep IE visible so you can see the OEM page as it loads.
IE.Visible = True

' Navigate to the OEM Grid Control URL built above.
IE.Navigate str_complete_URL

' Statusbar
Application.StatusBar = "URL is loading. Please wait..."

' Wait while IE loading...
Do While IE.Busy
Application.Wait DateAdd("s", 1, Now)
Loop

Application.StatusBar = ""

End Sub


Private Sub CommandButton3_Click()

' Call to open IE with the URL to edit the user based on the data on the current sheet on the current selected row.

Dim str_URLpart1 As String
Dim str_URLpart2 As String
Dim str_URLpart3 As String
Dim str_URLpart4 As String
Dim str_URLpart5 As String
Dim str_complete_URL As String
Dim str_sys As String
Dim str_user As String
Dim IE As Object

str_URLpart1 = "http://<your_oem_server>/em/console/database/security/user?oname="
str_URLpart2 = "&event=edit&cancelURL=%2Fem%2Fconsole%2Fdatabase%2FdatabaseObjectsSearch%3Fevent%3Dredisplay%26target%3D"
str_URLpart3 = "%26type%3Doracle_database%26objectType%3DUSER%26otype%3DUSER&backURL=%2Fem%2Fconsole%2Fdatabase%2FdatabaseObjectsSearch%3Fevent%3Dredisplay%26target%3D"
str_URLpart4 = "%26type%3Doracle_database%26objectType%3DUSER%26otype%3DUSER&otype=USER&target="
str_URLpart5 = "&type=oracle_database"
str_user = Range("D" & ActiveCell.Row).Value
str_sys = Range("A" & ActiveCell.Row).Value
str_complete_URL = str_URLpart1 & str_user & str_URLpart2 & str_sys & str_URLpart3 & str_sys & str_URLpart4 & str_sys & str_URLpart5

' Create InternetExplorer Object
Set IE = CreateObject("InternetExplorer.Application")

' Keep IE visible so you can see the OEM page as it loads.
IE.Visible = True

' Navigate to the OEM Grid Control URL built above.
IE.Navigate str_complete_URL

' Statusbar
Application.StatusBar = "URL is loading. Please wait..."

' Wait while IE loading...
Do While IE.Busy
Application.Wait DateAdd("s", 1, Now)
Loop

Application.StatusBar = ""

End Sub

Thursday, August 25, 2011

Disaster Recovery - Where To Start

Perhaps IT implementors think that Disaster Recovery is a dark art, or maybe it's just another item to de-scope from the implementation project due to time restrictions or budget cuts.
But I find that it's coming up more and more as part of the yearly financial system audits (the ones we love to hate!).
Not only is it an action on the main financial systems, but the auditors are training their sights on the ancillary systems too.

So where do I normally start?
Usually it's not a case of where I would like to start, it's where I *have* to start due to missing system documentation and configuration information.
Being a DBA, I'm responsible for ensuring the continuity of business with respect to the database system and the preservation of the data.
So no matter what previous work has been completed to provide a DR capability, or an HA capability (the two are so often confused), it's useless without any documentation or analysis to show what the tolerances are: how much data you can afford to lose (the Recovery Point Objective, RPO) and how long it will take to get a system back (the Recovery Time Objective, RTO).

First things first, ensure that the business recognises that DR is not just an IT task.
You should point them at the host of web pages devoted to creating the perfect Business Continuity Plan (BCP).
These state that a BCP should include what *the business* intends to do in a disaster.  This more often than not involves a slight re-work of the standard business processes.  Whether this means going to a paper-based temporary system or just using a cut-down process, the business needs a plan.
It's not all about Recovery Point Objective and Recovery Time Objective!
A nice "swim lane" diagram will help a lot when picturing the business process.
You should start at the beginning of your business process (normally someone wants to buy something), then work through to the end (your company gets paid).  Add all the steps in between and overlay the systems used.  Imagine those systems were not available: how could you work around them?

Secondly, you need to evaluate the risks among the systems you support.  Identify points of failure.
To be able to do this effectively, you will need to know your systems inside out.
You should be aware of *all* dependencies (e.g. interfaces) especially with the more complex business systems like ERP, BPM, BI, middleware etc.
Start by creating a baseline of your existing configuration:
- What applications do you have?
- What versions are they?
- How are they linked (interfaces)?
- Who uses them (business areas)?
- What type of business data do they store (do they store data at all)?

Third, you should make yourself aware of the hardware capabilities of important things like your storage sub-system (SAN, NAS etc).  Can you leverage these devices to make yourself better protected/prepared?
An overall architecture diagram speaks a thousand words.

Once you understand what you have got, only then can you start to formulate a plan for the worst.
Decide what "the worst" actually is!  I've worked for companies that are convinced an aeroplane would decimate the data centre at any moment.  All the prep-work in the world couldn't prevent it from happening, but maybe you can save the data (if not yourself ;-) ).

Monday, August 22, 2011

Virtualisation Conundrum

Whilst researching ideas for using my VMware ESXi test rig, I came across this blog site: http://weinshenker.net/blog/2011/07/26/oracle-redhat-vmware/
I've added it to my Fav's as a *must read* once in a while.
It's packed with some very interesting articles concerning Linux, Virtualisation and more importantly, Oracle.

It may have changed my mind about using Oracle Enterprise Linux.  I'm considering shifting up the Linux tree to RedHat, or experimenting with Fedora.

See the Wikipedia tree diagram http://upload.wikimedia.org/wikipedia/commons/8/8c/Gldt.svg on the right of the Wikipedia Linux Distro page.

Saturday, August 13, 2011

Clone a VMware ESXi Virtual Machine from the Command Line

My version of VMware vSphere is obviously not the full "knock your socks off" version and prevents me from right clicking and cloning an existing VM.
Not to be downhearted, there is another way...

Pre-requisites:
- VMware vSphere client.
- Access to the VMware ESXi server console & keyboard or an SSH client (like PUTTY for Windows).
- The VMware ESXi root user's password.

Process:
- Create the new VM using vSphere client.
- Give it a very small hard disk.
- Remove the hard disk from the VM once created.

- Enable local tech support mode from VMware *console* screen, press ALT+F1 and log in as "unsupported" with the root password.
or
- Enable remote tech support mode from VMware *console* screen, then use an SSH client to log into the server as root.

- Once you have a command line, locate the source virtual machine's datastore directory under /vmfs/volumes/<datastorename>:
# cd /vmfs/volumes/ds1/source_vm

- Change to the new target VM datastore directory:
# cd /vmfs/volumes/ds1/target_vm

- Now use vmkfstools to clone the disk (note the use of thin provisioning; you don't have to use it):
# vmkfstools -i "/vmfs/volumes/ds1/source_vm/source_file.vmdk" "/vmfs/volumes/ds1/target_vm/target_file.vmdk" -d thin
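- Optionally, confirm the clone completed by listing the target VM directory (a quick sanity check, using the example paths above):
# ls -lh /vmfs/volumes/ds1/target_vm/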

- Back in the vSphere client, add the new disk to the target virtual machine and boot it.

- Bob is your uncle.

Friday, August 12, 2011

Use EMDIAG REPVFY to fix OEM issues

EMDIAG is a diagnostics utility specifically for the Oracle Enterprise Manager system.
It is useful for diagnosing issues with the OEM repository, as the OMS does not need to be running in order to use it.

The Oracle document 763072.1 (Oracle Enterprise Manager Grid Control Release Notes for Linux and Windows 10g Release 5) suggests upgrading the EMDIAG kit before performing a general OEM upgrade (it's in the OEM pre-upgrade tasks list).

Follow the steps in document 421499.1 to install the EMDIAG kit into the C:\ directory (you should see the old one already installed there).

Follow the process in the document for upgrading the EMDIAG kit and check the Windows environment variable for the EMDIAG_HOME value.

Once installed, use the “repvfy -version” command in the %EMDIAG_HOME%\bin directory to verify the currently installed OEM software.

To use EMDIAG, set your ORACLE_HOME from the command line:

set ORACLE_HOME=C:\OracleHomes\oms10g

Change to the EMDIAG_HOME bin directory and run repvfy:
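For example (a minimal sketch, assuming the default EMDIAG layout and the ORACLE_HOME set above):

cd %EMDIAG_HOME%\bin
repvfy verify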



Now fix the errors where possible (generic operations that EMDIAG can perform to fix common problems):

repvfy -fix

Some of the problems cannot be fixed automatically.
You can get more details by running verify with greater detail:

repvfy verify TARGETS -level 9 -detail

NOTE: You may want to redirect to a text file in Windows (> c:\TARGET_details.txt).



The detailed view may open a can of worms for you.
Good luck!

Monday, August 08, 2011

Report All Oracle User Accounts Through OEM Grid Control

You know the problem: You have Oracle Enterprise Manager Grid Control 10g installed and all your databases are configured in it.
It allows you to manage your entire landscape.  Great!

All of a sudden, a "leaver form" lands on your desk from HR with a long list of people who have left the company recently.
Do they have accounts in your Oracle databases?
If you have only one or two Oracle databases to check, then you can use the power of Grid Control to sort out the list in a few minutes.
What if you've got 50+ databases?
What if you've got 100+ databases?
Don't they say: "The little jobs are the most time consuming"?

You may maintain an external spreadsheet of Oracle accounts, or you may not.  Either way, this article will show you how to configure an Oracle Enterprise Manager Grid Control 10g report that will show you all user accounts across the landscape at the click of a button.  You could extend this technique to almost anything that can be done in an SQL or PL/SQL session.

Architecture overview:



We will be creating a new OEM Grid Control job called LIST_USER_ACCOUNT_DETAILS, and a new report called LIST_USER_ACCOUNT_DETAILS.
The job is executed on the monitored database targets and stores its output in the usual MGMT$ table in the OEM repository database.
The report simply pulls the job output back out and reassembles it.

In OEM Grid Control create a new job called "LIST_USER_ACCOUNT_DETAILS".
Add the database targets or target group.

Set the SQL script to be:

SET HEAD OFF;
SET PAGESIZE 9999;
SET LINESIZE 999;
SET TRIMOUT ON;
SET TRIMSPOOL ON;
SET FEEDBACK OFF;
SET COLSEP ' ';
COL MARKER FORMAT A2;
COL USERNAME FORMAT A30;
COL ACCOUNT_STATUS FORMAT A25;
COL LOCK_DATE FORMAT A12;
COL EXPIRY_DATE FORMAT A12;
COL HAS_REMOVED_ROLE FORMAT A30;
COL HAS_DBA_ROLE FORMAT A20;
WHENEVER SQLERROR EXIT FAILURE;
SELECT
'@|' marker,
username||'|' username,
account_status||'|' account_status,
'|'||TO_CHAR(lock_date,'DD-MM-RRRR')||'|' lock_date,
'|'||TO_CHAR(expiry_date,'DD-MM-RRRR')||'|' expiry_date,
'|'||DECODE((SELECT 'X' FROM dba_role_privs WHERE grantee = username AND granted_role ='REMOVED_ACCOUNTS'),'X','HAS ROLE "REMOVED_ACCOUNTS"','')||'|' has_removed_role,
'|'||DECODE((SELECT 'X' FROM dba_role_privs WHERE grantee = username AND granted_role ='DBA'),'X','HAS DBA ROLE!!','') has_dba_role
from dba_users;


Set the Parameters to be "-s" (silent):



Set the Credentials to use an account capable of querying DBA_USERS and DBA_ROLE_PRIVS (you could change this to use the ALL_ views and a lower privileged account).
Set the schedule for the job to however frequently you wish to query the account details.

Submit the job for execution.

Once the job runs, the job output will be visible in the OEM MGMT$JOB_STEP_HISTORY table.
The actual SQLPlus output is stored in the OUTPUT column which is a CLOB field.
We have had to insert a marker in the SQL so that the output contains a '@|' at the beginning of each line. This is used to determine the start of each record in the OUTPUT CLOB.
We just need to write a report to pull out the details.
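If you want to sanity-check the raw job output before building the report, a quick query against the repository along these lines should show the CLOB per target (a sketch, using the same table and column names as the report SQL below):

SELECT target_name, end_time, output
FROM mgmt$job_step_history
WHERE job_name LIKE 'LIST_USER_ACCOUNT_DETAILS.%'
AND step_name = 'Command';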

Create a new report called "LIST_USER_ACCOUNT_DETAILS".
On the "General" tab, select the "Use the specified target" option and enter the hostname of your OEM Grid Control repository server (DB server).
Select the "Run report using target privileges of the report owner (SYSMAN)" tick box.

On the "Elements" tab, add a "Table from SQL" item and then set the header, then paste in the statement below:

SELECT TARGET_NAME,JOB_TIME,ACCOUNT_DETAILS FROM (
SELECT '===============' TARGET_NAME,'================' JOB_TIME,'==================================================' ACCOUNT_DETAILS FROM DUAL
UNION
SELECT
TARGET_NAME,
TO_CHAR(END_TIME,'DD-MON-RRRR HH24:MI:SS') JOB_TIME,
-- non-regexp option -- TO_CHAR(REPLACE(SUBSTR(mh.OUTPUT,INSTR(mh.OUTPUT,'@| ',1,ct.x),INSTR(mh.OUTPUT,'@| ',1,ct.x+1) - INSTR(mh.OUTPUT,'@| ',1,ct.x)),'@| ','')) ACCOUNT_DETAILS
TO_CHAR(REGEXP_SUBSTR(mh.OUTPUT,'[^@]+', 1,ct.x)) ACCOUNT_DETAILS
FROM MGMT$JOB_STEP_HISTORY mh,
(SELECT rownum x FROM ALL_OBJECTS WHERE rownum<=5000) ct
WHERE TARGET_TYPE='oracle_database'
AND JOB_OWNER='SYSMAN'
AND JOB_NAME LIKE 'LIST_USER_ACCOUNT_DETAILS.%'
AND STEP_NAME='Command'
)
WHERE ACCOUNT_DETAILS IS NOT NULL
ORDER BY TARGET_NAME, JOB_TIME;

NOTE: In the report SQL, we have assumed a maximum number of 5000 user accounts across all database targets.
If you exceed this (if you have a lot of user accounts) then you will need to increase the number.
We have also assumed that SYSMAN is the owner of the job that created the output. You should change this if necessary.


You can choose to schedule the report if you wish (you should take into account the source job schedule).
It's probably best to secure the report so that only administrators can see it.

Test the report.


The output format consists of the following columns:
 TARGET NAME
 JOB TIME
 ACCOUNT DETAILS {
                 USERNAME
                 ACCOUNT STATUS
                 LOCK DATE
                 EXPIRY DATE
                 HAS REMOVED ROLE
                 HAS DBA ROLE
                }

The "HAS REMOVED ROLE" column relates to a "flag" role that I use to denote that an Oracle account has been de-activated purposely.
Instead of the account being deleted, it is retained for audit purposes and the REMOVED_ACCOUNTS role granted to it. You can choose to ignore or adapt this column.

Once you have the output, you can extract it to Excel for comparison with an Active Directory dump or any other means.

Friday, August 05, 2011

Optimal Oracle RMAN Backup Command Options


Just a very short post about an optimal Oracle RMAN backup command run block.
Here's what *I* think provides an optimal backup for a generic setup.

-- Check our existing files are available before we do some obsolete deleting.
crosscheck archivelog all;
crosscheck backup;


-- Delete any obsolete files before we start so that we have max space available.
-- I have "REDUNDANCY 1" set so that I retain the previous backup until the next backup is completed successfully (you need double disk/tape space available for this).
delete noprompt obsolete;

-- Do the DB and ARCH log backups but put each datafile into its own piece file.
-- I prefer this because, should I need only a specific datafile restored and have to go to tape, I only need to fetch the one piece that contains that datafile, saving tape restore time.
-- For Oracle 9i remove the 'as compressed backupset' option.
backup as compressed backupset database filesperset 1 plus archivelog filesperset 1;

-- Now only delete archive logs that are in two separate backups.  This is critical.
delete noprompt archivelog all backed up 2 times to disk;

-- Delete the previous night's backup from disk (obviously you need the space to be able to store two backups on disk).
delete noprompt obsolete;

-- Finally, always a good idea to keep a list of datafiles and temp files that existed at the time of the backup.
report schema;

Make sure that you always save the RMAN backup logs.
I've found that they can be very useful during a restore scenario.

As a tip, you can also run a daily job to list what backups are required to restore a database using the "RESTORE DATABASE PREVIEW SUMMARY;" command.
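For example, from an RMAN session connected to the target database (a minimal sketch):

restore database preview summary;

The summary lists the backup sets that would be needed for the restore, which is useful to keep alongside the backup logs.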

Tuesday, August 02, 2011

Basic Performance Tuning Guide - SAP NetWeaver 7.0 - Part III - Using SQL Trace

This is Part III; you may like to see Part I and Part II (or skip to Part IV).

Once you have traced the poor performing program (see Part I of this guide), and you have identified that your problem is in the database response time (see Part II of this guide), then you can start digging deeper into the potential SQL problem by running an SQL trace using transaction ST05.
In ST05, click the "Activate Trace with Filter" button:

SAP ST05 Performance Analysis, activate trace with filter

NOTE: We are only concerned with an SQL statement that has directly accessed the database (http://help.sap.com/saphelp_nw04/helpdata/en/d1/801f89454211d189710000e8322d00/frameset.htm).
Unless you select the additional tick box "Table Buffer Trace" on the main ST05 screen, to show table buffer accesses, all the trace details in ST05 will be for direct database accesses (bypassing the table buffer).
Since direct database accesses will take longer than the buffer accesses, it is generally accepted that the longer runtimes will be associated with direct database access.

Enter your user id and the name of the report/transaction.  This is so that we can try to limit the records in the trace.  If you want to trace another user, simply enter their user id:

SAP ST05 trace by transaction name

Click Execute to start the trace:

ST05 trace activated

Now in a separate session (preferably) run the transaction/program with the performance problem.
Once you're happy you have experienced the problem, you can go back into ST05 and click "Disable Trace" (by default it will remember the trace details and disable the running trace; you won't need to re-enter all the details):

ST05 trace deactivated

Still within ST05, click "Display Trace":

ST05 trace display

The system remembers the last trace recorded and should automatically populate the details, but you can always enter them for date/time, user id etc:

ST05 trace display for user

The SQL trace records are displayed.
The poor performing SQL has its time (duration) in microseconds (millionths of a second) highlighted in red if it exceeds 10,000 microseconds (10 milliseconds, or 0.01 seconds):

(1 second = 1000 milliseconds = 1000000 microseconds)

SAP ST05 SQL Trace output

At the top of the window we can see the transaction name (ZW39), the work process responsible for executing the dialog step in which our statement was contained, the type of work process (DIA or BTC), the client, user and transaction ID.
The OP column shows that this session has reused an existing cursor in the DB for this query.
This is because it has a "PREPARE" operation as the first OP before the cursor "OPEN" and not a "DECLARE" (see here http://help.sap.com/saphelp_nw04/helpdata/en/d1/801fa3454211d189710000e8322d00/content.htm).
If the session re-executed this query at another point in the trace, then it is likely (depending on the code and available database resources) that the query would have re-used the existing cursor and no "PREPARE" would have been visible at all, just OPEN and FETCH (http://help.sap.com/saphelp_nw04/helpdata/en/d8/a61d94e4b111d194cb00a0c94260a5/content.htm).
The yellow background of the lines alternates between a darker and a lighter shade when the table (obj name) changes in the list.

The RECS column shows the number of records retrieved in the "FETCH" operation.
The RC column shows the Oracle database return code 1403 (ORA-1403) which you get when you call "FETCH" in a loop and reach the end of the results (see SAP note 1207927).

The STATEMENT column shows the pre-parsed SQL including bind variables (:A1, :A2 etc).
The second line of the STATEMENT column shows the statement in the "OPEN" database call where all the bind variables have been normalised.
Double clicking on the STATEMENT column delves into the SQL statement a little more:

SAP ST05 SQL Trace output SQL statement with bind variables

Here you will see the SQL statement in full (above) and the bind variables, their data types (CH,3 = CHAR(3)) and respective values.
Back on the main trace screen, with the SQL statement selected (single click), you can click the "Explain" button:

SAP ST05 SQL Trace output SQL statement explain plan

Again, you will see the SQL statement with bind variables (remember you can get the values from the other screen) but more importantly, you will see the Oracle Explain Plan.
The Explain Plan shows how the Oracle optimizer *may* choose to execute the statement and the relative costs associated with performing the query (I'll explain more on this in a later chapter of this guide).

SAP ST05 SQL Trace output SQL statement explain plan

In the above example, you can see that Oracle has chosen to perform an Index Range Scan on index "IHPA~Z1" before going to the table IHPA to get the matching data rows.
Single click the "Access Predicates" on the index and you can see what will be applied to the index scan:

SAP ST05 SQL Trace output SQL statement explain plan access predicates

SAP ST05 SQL Trace output SQL statement explain plan access predicates

Again, you will need to reference the very first SQL view screen to get the bind variables values.
Clicking the name of the index will show the statistics values for the index:

SAP ST05 SQL Trace output SQL statement explain plan index analysis

SAP ST05 SQL Trace output SQL statement explain plan index analysis

You can see the two columns that make up the index (objnr and parvw).
The Analyze button can be clicked to update the index database statistics real-time or as a background job.
WARNING: Be aware that SAP collects statistics in Oracle using its own set of predefined requirements as per SAP notes 588668 and 1020260, and that you *must* disable the standard Oracle stats gathering jobs as per note 974781. Therefore, the stats may be out of date for a good reason.

The same detail can be seen when clicking the table name.

Back on the main screen, for those who have more knowledge of reading Oracle Explain Plans, you can choose the "EXPLAIN: Display as Text" button, which will display a more detailed Explain Plan that can be copied to a text editor also (I find this very useful):



Plan hash value: 1636536768

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |     2 |    82 |     2  (50)| 00:00:01 |
|   1 |  INLIST ITERATOR             |         |       |       |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID| IHPA    |     2 |    82 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | IHPA~Z1 |     2 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1
2 - SEL$1 / IHPA@SEL$1
3 - SEL$1 / IHPA@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MANDT"=:A0)
3 - access(("OBJNR"=:A1 OR "OBJNR"=:A2 OR "OBJNR"=:A3) AND "PARVW"=:A4)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "OBJNR"[VARCHAR2,66], "PARVW"[VARCHAR2,6], "PARNR"[VARCHAR2,36],
"ERDAT"[VARCHAR2,24]
2 - "OBJNR"[VARCHAR2,66], "PARVW"[VARCHAR2,6], "PARNR"[VARCHAR2,36],
"ERDAT"[VARCHAR2,24]
3 - "IHPA".ROWID[ROWID,10], "OBJNR"[VARCHAR2,66], "PARVW"[VARCHAR2,6]



Finally, it's possible to see the point in the ABAP program where the SQL statement was run.
Single click the statement column, then click the "Display Call Positions in ABAP Program" button or press F5:



The ABAP code will be displayed, and if you are using the new ABAP editor (I think you should), you will be positioned on the statement:

SAP ST05 SQL Trace output SQL statement call in ABAP

Looking at the ABAP statement, you can see how it has been converted from an OPEN SQL statement into an Oracle SQL statement.
The use of the "IN" keyword has been translated into multiple "OR" statements at the Oracle level.

Once at the ABAP statement level, it is sometimes necessary to know both Oracle SQL and ANSI SQL, since Oracle generally uses its own dialect and SAP uses ANSI SQL.
Therefore, you may see the original statement in a somewhat different form, especially when a table join is required on two or more tables.
Lastly, you will notice that the MANDT field is not present in the original ABAP, but it gets added to the predicate list by the ABAP OPEN SQL processor unless you use the "CLIENT SPECIFIED" keyword.
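As a rough illustration (a simplified sketch, not the exact statement from the trace above; the internal table and variable names are invented), an ABAP OPEN SQL statement such as:

SELECT objnr parvw parnr erdat
  FROM ihpa
  INTO TABLE lt_ihpa
  WHERE objnr IN lt_objnr
    AND parvw = lv_parvw.

could appear in the ST05 trace as something like:

SELECT "OBJNR", "PARVW", "PARNR", "ERDAT"
FROM "IHPA"
WHERE "MANDT" = :A0
  AND ("OBJNR" = :A1 OR "OBJNR" = :A2 OR "OBJNR" = :A3)
  AND "PARVW" = :A4

with the IN list expanded into ORs and the MANDT predicate added automatically.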

Armed with the SQL performance facts, the knowledge of how to perform an SQL trace, and the ability to link this back to the piece of ABAP code, you should be able to use your database-level investigative skills to work out why the statement is causing poor performance.

Make adjustments in the test system and see how the changes affect the performance.
I like to modify the Oracle SQL captured in the SQL trace, then enter it into ST05 (Execute SQL option) with the Explain button, to see if the explain plan is any better.

Some pointers:
- How much data is in the database tables being referenced?
- Is the Explain Plan showing that the query is making best use of available indexes?
- Can you replicate the problem in the test system?
- If this is a custom table, do you need to add additional indexes?
- Could the ABAP OPEN SQL statement be re-written to be more effective (use one query instead of two)?
- Are you SELECTing more fields than you actually need?

You may wish to read the master class in reading Explain Plans from Oracle.
I have blogged before about cardinality in Explain Plans here.

Find part IV of this blog post, here.

Monday, August 01, 2011

Back once again (with the renegade master...)

Some hefty time off spent on the beach in the south of Italy (ah, che bello) with the in-laws has revealed a slight sun-tan and a new sense of purpose.
- Part III of my SAP Basic Performance Tuning Guide is nearly ready for posting.
- A new job opportunity has come my way.
- The VMware test rig will be getting another hammering once I have got some post-holiday chores out of the way (still can't believe I got it working in 30 minutes!).

Bring on August!

SAP BI - ORA-04031 - Oracle 10.2.0.5 - Bug 9737897 - End Of.


If you use SAP BI 7.0 on an Oracle 10.2.0.5 64-bit database, you may experience this issue.
Our BI system performs the delta loads for FI data overnight.
These are large loads of records into the cubes.

The problem is Oracle error 04031:
----------------------------------------------------------------------------------
Mon Jul 11 00:46:26 GMT Daylight Time 2011
Errors in file j:\oracle\bwp\saptrace\usertrace\bwp_ora_3716.trc:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select name,online$,contents...","Typecheck","kgghteInit")

ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select name,online$,contents...","Typecheck","kgghteInit")

ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select name,online$,contents...","Typecheck","kgghteInit")

Mon Jul 11 00:46:26 GMT Daylight Time 2011
Errors in file j:\oracle\bwp\saptrace\background\bwp_smon_8756.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select file#, block#, ts# fr...","Typecheck","seg:kggfaAllocSeg")

Mon Jul 11 00:46:27 GMT Daylight Time 2011
Errors in file j:\oracle\bwp\saptrace\background\bwp_smon_8756.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-04031: unable to allocate 4120 bytes of shared memory ("shared pool","select file#, block#, ts# fr...","Typecheck","seg:kggfaAllocSeg")

----------------------------------------------------------------------------------


Once it hits this problem, it causes the rest of the chain to fail on the index rebuilds.
Sometimes the user queries also fail in the BEx Web front-end. The user sees a similar error to those shown above.

Using V$SGASTAT, I was able to take snapshots of the Shared Pool usage via a Windows scheduled task.
I traced the problem to the memory allocation of "obj stat memo" in the Shared Pool of the database.
It was constantly growing, and although the "free memory" reduced slightly, the reduction didn't seem to correlate to the increase in "obj stat memo" usage.
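The snapshot query was along these lines (a sketch; the scheduled task simply spooled the output with a timestamp):

SELECT name, bytes
FROM v$sgastat
WHERE pool = 'shared pool'
AND name IN ('obj stat memo', 'free memory');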

So the next step was looking on Oracle Support.
I found bug 9737897, which is valid *up to* 10.2.0.4, but it does not state that it has actually been fixed in 10.2.0.5.
The bug description says: "An SGA memory leak is possible for "obj stat memo" and "obj htab chun"
memory if a segment is dropped and recreated many times."

The bug note recommends querying x$ksolsfts (see here about this table http://yong321.freeshell.org/computer/x$table.html).
The following SQL is recommended by Oracle:

select distinct fts_objd, fts_objn from x$ksolsfts x
where not exists (select 1 from obj$ o
where o.dataobj#=x.fts_objd and o.obj#=x.fts_objn);

The note says that a high number of records returned could indicate that you are seeing the symptoms of the bug.
I was seeing over 160,000 records in a database that had been restarted 24 hours prior.

I produced a lovely Excel chart showing the snapshot times against the memory use of the "obj stat memo" component.
The chart showed significant increases (step ups) of approximately 10MB at specific points in time.
I was then able to search SM37 using the "extended" option, to show background jobs active during the memory "step ups" identified by the chart.
Generally I saw 10-20 jobs active at the time, but the correlation showed that more often than not, the "step ups" were at near enough the same time as a BI_PROCESS_INDEX job.

Looking in the background job log, I was able to identify the process chain and the type of index/load that was being performed.
The indexes in question were FI cube related. They were quite large indexes and they were bitmap indexes ("CREATE BITMAP INDEX...").
Each night these indexes got re-built after a load.
The re-build command was visible in the background job log and included "COMPUTE STATISTIC".

So I had successfully identified segments that were being dropped and re-created regularly.
This met with the Oracle bug description.

At the bottom of the bug note, it states:

"Note:
If you suffer "obj stat memo" memory leak problem in 10.2.0.5, you can try
_object_statistics = false as a workaround, or following init parameter to see
if it solves problem.
_disable_objstat_del_broadcast=false"

So what does SAP say about this?
Well, the following SAP notes were/are available:
1120481 - ORA-4031 - memory allocation errors for object-level stat
1070102 - High memory allocation by 'obj stat memo'
869006 - Composite SAP note: ORA-04031

The only note of relevance is 1120481, which has a short reference to Oracle 10.2.0.4.
The other notes tend to point the reader at just blindly increasing the Shared Pool. I tried this; it didn't work when 900MB was being used by "obj stat memo"!

So, stuck with only one possible solution (other than upgrading to 11g), I can confirm that setting "_object_statistics = false" (alter system set "_object_statistics"=false scope=spfile;) and bouncing the database solved the problem as it completely removes the use of the "obj stat memo" memory area (it's no longer visible in V$SGASTAT).