
Thursday, December 27, 2012

impdp to Import Packages but No Overwrite / Create or Replace

Oracle say that you should be using impdp for all 11g import operations.

If you’re using impdp to import a dump file that contains packages, be aware that the package source is not automatically recreated: the export will have captured the package header as “CREATE PACKAGE ...”, not “CREATE OR REPLACE PACKAGE ...”.

Consequently, when you run the import of the package(s), if they already exist in the database, they will not be overwritten or replaced.
There is no command line option for impdp to change this feature (like “table_exists_action=replace” for TABLES).
You have to manually extract the package source from the exported dump file using the impdp “SQLFILE” command line option, which will create a separate SQL file containing the package sources.
Then modify the generated script file to change the “CREATE PACKAGE ...” to “CREATE OR REPLACE PACKAGE ...”.

If you're running on UNIX, you can use the power of SED to accomplish this task:

> impdp <user>/<password> DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=scriptfile.sql

> cat scriptfile.sql | sed 's/^CREATE PACKAGE/CREATE OR REPLACE PACKAGE/' > scriptfile2.sql

NOTE: You need the “^” in the command so that the match is anchored to the start of the line; only lines beginning with “CREATE PACKAGE” are changed, otherwise you might replace occurrences inside the package source that you need.
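A quick way to convince yourself of the anchoring behaviour (a minimal sketch; the package name is made up):

```shell
#!/bin/sh
# Two lines: one starts with CREATE PACKAGE, one merely contains the phrase.
printf '%s\n' \
  'CREATE PACKAGE my_pkg AS' \
  '-- comment mentioning CREATE PACKAGE mid-line' \
| sed 's/^CREATE PACKAGE/CREATE OR REPLACE PACKAGE/'
```

Only the first line is rewritten; the mid-line occurrence is left alone.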

Thursday, December 20, 2012

Estimating Oracle Export Space Requirements

To estimate the space requirements of a full Oracle export, you can use DataPump.
The command below produces an estimate for a full export of the database.
It makes use of the ESTIMATE_ONLY option on the expdp command line.

First you need to create your directory in the database:

> sqlplus / as sysdba

SQL> create directory dp_export as '/your_path';

SQL> quit;

> expdp \"/ as sysdba\" full=y directory=DP_EXPORT logfile=full_exp_estimate.log estimate_only=yes

Total estimation using BLOCKS method: 2.418 GB

Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 18:50:48

The method above runs through each object to be exported and calculates, from the number of blocks and the block size of each object, the space that will be required on disk.

If you have a large database and your statistics are up-to-date, then you could use the additional “ESTIMATE=STATISTICS” option, which uses the data gathered from the Oracle statistics collections to estimate the space required. This is a lot quicker but needs accurate stats.

The example above took 1 min.

With the “ESTIMATE=STATISTICS” option, it took 46 seconds, but estimated that only 991.3 MB would be required (less than half the BLOCKS estimate).  There are obviously some missing stats on objects in my DB.
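For reference, the statistics-based run and a quick way to pull the result out of its logfile look like this (a sketch: the expdp line is shown commented because it needs a live database, and the sample log text below merely stands in for a real run):

```shell
#!/bin/sh
# Not run here - the expdp invocation needs a live database:
#   expdp "/ as sysdba" full=y directory=dp_export \
#         logfile=full_exp_estimate.log estimate_only=yes estimate=statistics

# Sample log content standing in for a real run:
cat > /tmp/full_exp_estimate.log <<'EOF'
Estimate in progress using STATISTICS method...
Total estimation using STATISTICS method: 991.3 MB
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed
EOF

# Pull the estimate line out of the log:
grep '^Total estimation' /tmp/full_exp_estimate.log
```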

Friday, December 14, 2012

HowTo: Find the Transaction Codes used in SPRO

It's not easy trying to grant specific access to customising transactions when you're not sure which ones will be required.

If you know they are accessible from SPRO, then you can find them using the following query, which lists the TCODEs from SPRO customising performed after installation (note the date of 2001):

SELECT DISTINCT cobj.tcode
  FROM cus_actobj cobj,
       cus_acth cacth
 WHERE cobj.act_id = cacth.act_id
   AND cacth.fdatetime > '20019999999999';

Sunday, December 09, 2012

Controlling FUGR Access

If you have a super user role in your production system, it’s quite possible that the SUPRN_INS_OR_DEL_PROFILE or BAPI_USER_PROFILES_ASSIGN function modules could be executed from SE37 to grant SAP_ALL, and then used to get around other restrictions.

Since SUIM relies upon S_DEVELOP with activity 16, you can’t just remove the object outright without leaving the admins to find their own way to the underlying transactions.
Instead, you could lock down the S_DEVELOP authorisation object by removing access to the specific function groups (object type FUGR).

When editing the authorisations of a role, set one of the S_DEVELOP authorisation objects to exclude FUGR (and DEBUG):
Activity = 03 & 16 (Display & Execute)
Package = *
Object Type =
4              to DE
DEVC     to FU
FUGS     to Z
Z              to $TM

Then add another S_DEVELOP authorisation object “MANUALLY”.
Set this to exclude the SUPR, SUPRN and SU_USER function groups:

Activity = 03 & 16 (Display & Execute)
Package = *
Object Name =
0              to SUNI
Z*           to $*
Object Type = FUGR

Saturday, November 17, 2012

SAP_ALL modified role

Have you got an SAP development system where the developers insist they have SAP_ALL, but you know this is just wrong?
Well, here's a neat solution that removes certain authorisations, like user admin (SU01), adjusting auditing (SM19), RFC admin (SM59) etc.

Using transaction PFCG, create a new single role.
Add a description and save the role.
On the Authorisations tab generate a new profile and then edit the authorisations "Change Authorization Data".
Do not select any Templates.
From the "Authorizations" screen, select "Edit -> Insert Authorization(s) -> from profile...".
On the popup, enter profile "SAP_ALL".

Now adjust the profile as required.
I usually adjust the following authorisation objects:

S_OSS1_CTL - 16
S_USER_AGR - 03, 08
S_USER_AUT - 03, 08
S_USER_GRP - 03, 08
S_USER_PRO - 03, 08
S_SCD0 - 08, 12

You can then save and assign the role to the developers.

Monday, October 29, 2012

Values for SAP Auth Objects F_REGU_BUK and F_REGU_KOA

Whilst configuring some new read-only SAP roles for FICO access, I was struggling to find the descriptions for the FBTCH action values (Action for Automatic Procedure) in authorisation objects F_REGU_BUK and F_REGU_KOA.

I found that these were actually viewable in transaction F110, via the menu “Environment -> Authorizations”.

A popup is displayed with the following legend:

Key Action
02 Edit parameters
03 Display parameters
11 Execute proposal
12 Edit proposal
13 Display proposal
14 Delete proposal
15 Create payment medium proposal
21 Execute payment run
23 Display payment run
24 Delete payment run payment data
25 Create payment media of payment run
26 Delete payment orders of payment run
31 Print payment medium manually

For a read-only role, I would recommend only actions 03, 13, 23 and possibly 31 (in case the electronic BACS payment method breaks).

Monday, October 22, 2012

All Reports & Transactions Under SUIM

The list below is useful if you are constructing a role to house the SUIM capabilities:

Users by System S_BIE_59000198
Users by Roles S_BIE_59000199
Users by Profiles S_BIE_59000197
Users by Address Data S_BCE_68001393
Users by Complex Selection Criteria S_BCE_68001400
By user ID S_BCE_68001394
By Role S_BCE_68001399
By Profiles S_BCE_68001395
By Authorizations S_BCE_68001396
By Authorization Values S_BCE_68001397
By Transaction Authorizations S_BCE_68001398
By Critical Combinations of Authorizations at Transaction Start S_BCE_68001401
With Unsuccessful Logons S_BCE_68001402
By Logon Date and Password Change RSUSR200
List of Users With Critical Authorizations S_BCE_68001403
With Critical Authorizations (New Version) S_BCE_68002111

Roles by Complex Selection Criteria S_BCE_68001425
By Role Name S_BCE_68001418
By User Assignment S_BCE_68001419
By Transaction Assignment S_BCE_68001420
By MiniApp S_BIE_59000249
By Profile Assignment S_BCE_68001421
By Authorization Object S_BCE_68001422
By Authorization Values S_BCE_68001423
By Change Dates S_BCE_68001424

Profiles by Complex Selection Criteria S_BCE_68001409
By Profile Name or Text S_BCE_68001767
By Profiles Contained S_BCE_68001404
By Authorizations S_BCE_68001405
By Authorization Values S_BCE_68001406
By Last Change S_BCE_68001407
By Role S_BCE_68001408

Authorizations by Complex Selection Criteria S_BCE_68001417
By Object S_BCE_68001414
By Values S_BCE_68001415
By Last Change S_BCE_68001416

Authorization Objects by Complex Selection Criteria S_BCE_68001413
By Object Name, Text S_BCE_68001410
By Object Class S_BCE_68001411
By Field, Text S_BCE_68001412

Executable Transactions (All Selection Options) S_BCE_68001429
Executable for User S_BCE_68001426
Executable for Role S_BCE_68002041
Executable with Profile S_BCE_68001427
Executable with Authorization S_BCE_68001428

From users S_BCE_68001430
from Roles S_BCE_68001777
From profiles S_BCE_68001431
From authorizations S_BCE_68001432
In Users S_BCE_68001399
In Users S_BCE_68001395
In Roles S_BCE_68001421
In Composite Profiles S_BCE_68001404
In Users S_BCE_68001396
In Profiles S_BCE_68001405
In Users S_BCE_68001397
In Roles S_BCE_68001423
In Profiles S_BCE_68001406
In Authorizations S_BCE_68001415
In Programs S_BCE_68002030
For Users S_BCE_68001439
for Role Assignment RSSCD100_PFCG_USER
For Roles RSSCD100_PFCG
For Profiles S_BCE_68001440

For Authorizations S_BCE_68001441

Tuesday, October 16, 2012

How to Patch an Oracle Database Under SAP

Are you thinking of patching an Oracle database which sits under an SAP system?
If you have a specific bug and you've identified the Oracle patch number that fixes the bug, you'd be tempted to just download the patch from Oracle.

According to SAP, you should not download any patches from Oracle directly.  As you know, the Oracle binaries themselves are slightly different for an SAP system.
Instead, if you have the Oracle patch number, search through the README files that come as part of the SAP Bundle Patch (SBP) downloads for Oracle on the SAP Service Marketplace, to see if the Oracle patch is included in the bundle patch.

If you can't see it in there, then it may be worth asking SAP to clarify if/when they may include it in the next bundle patch.
Each bundle patch is released monthly, but that does not mean that every relevant Oracle patch older than a month is included in the bundle.

The bundle patches themselves are cumulative, so you only need to apply the latest one.  It includes specific Oracle patches, plus a CPU patch (dependent on the date/time of the released SBP).

Remember to re-check the SAP notes about Oracle database parameters after applying SBPs, since SAP usually update the notes at each SBP release, to include any relevant _fix_control or event parameter settings.

Thursday, October 11, 2012

Recover Oracle DB With Missing DataFiles or Missing UNDO or Multiple ResetLogs Issue

If you have run through the CREATE CONTROLFILE script method (why are you still using this and not the NID utility!!?) and it failed because you were missing one of the datafiles, then you will struggle to open the database with RESETLOGS due to Oracle errors.
You will then retry the process (maybe with the correct datafiles in place this time) and it will fail because you've run RESETLOGS too many times and the datafiles are out of sync with the incarnation of the database.

This post also assumes that one of the missing datafiles belongs to your UNDO tablespace.

This post is based on another post and will guide you through the process of getting the database back, provided you now have the required datafiles in place (you don't have to have them, but it would be good if you did).

1, Delete all existing database control files at the operating system level.
These control files are useless to us: they contain the new DB name and sequence numbers, and we will have to re-run the CREATE CONTROLFILE statement anyway.

2, Change the DB init file to use rollback segments (as you have no automatic UNDO without your undo tablespace) and allow resetlogs to corrupt the database, by putting the following in the init<SID>.ora:
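Typically the entries needed in the init<SID>.ora for this step look something like the following (an assumption on my part, so verify against your Oracle version; _allow_resetlogs_corruption is an unsupported underscore parameter and should only be set for this recovery):

```
# Use manual rollback instead of automatic UNDO:
undo_management = MANUAL
# Permit the database to open despite the RESETLOGS inconsistency:
_allow_resetlogs_corruption = TRUE
```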


3, Edit the CREATE CONTROLFILE SET DATABASE “<SID>” script to comment out ALL tablespaces except SYSTEM.
This will allow us to create a basic database with rollback.

4, Create the database using the CREATE CONTROLFILE script as you would have done previously.

5, Alter database open resetlogs.

6, Rename all “MISSING” datafiles based on the FILE_ID taken from production.
NOTE: If you don't know this information (file_id to filename mapping), then you will become very stuck.

alter database rename file '/u01/app/oracle/product/10.2.0/db_1/dbs/MISSING00004' to '/u01/app/oracle/oradata/day/users01.dbf';

7, Bring all “MISSING” data files online (first attempt):
alter database datafile 4 online;

You will see:

ERROR at line 1:
ORA-01190: control file or data file 4 is from before the last RESETLOGS
ORA-01110: data file 4: '/u01/app/oracle/oradata/day/users01.dbf'

8, Shutdown immediate.

9, Startup mount.

10, Recover until cancel.

11, Bring all “MISSING” data files online (again):
alter database datafile 4 online;

12, Alter database open resetlogs.

13, Re-create the UNDO tablespace as per the previous specification (take the spec from production), using the same file name as before (the REUSE keyword means the file doesn’t need to be created).

14, Shutdown IMMEDIATE.

15, Adjust the init<SID>.ora to re-add the UNDO tablespace and remove the entries that you added in step #2.

16, Startup.

Thursday, October 04, 2012

SAP Authorisation Objects Naming Convention

The first letter of an SAP authorisation object is coded to represent the SAP module to which it belongs,
e.g. F_KNA1_BUK:

A   Assets Accounting
C   Classification System
E   Consolidation
F   Financial Accounting
G   Special Ledger
K   Controlling
L   Logistic execution
M   Materials Management
P   Human Resources
S   Basis
V   Sales and Distribution

If the second character is an underscore, then this indicates the authorisation object is an SAP standard one.

Use transaction SU03, SU21 or table TOBJ, to list the authorisation objects in the system and drill-down into the authorisation fields and their possible values.

If using the tables, you may need the other related tables to pull the texts: TOBJ, TOBC (classes), TOBJT.
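The naming convention above can be sketched as a simple lookup; this is purely an illustration of the table above (the function name is made up):

```shell
#!/bin/sh
# sap_module: map the first letter of an authorisation object name
# to the SAP module it belongs to.
sap_module() {
  case "$1" in
    A*) echo "Assets Accounting" ;;
    C*) echo "Classification System" ;;
    E*) echo "Consolidation" ;;
    F*) echo "Financial Accounting" ;;
    G*) echo "Special Ledger" ;;
    K*) echo "Controlling" ;;
    L*) echo "Logistic Execution" ;;
    M*) echo "Materials Management" ;;
    P*) echo "Human Resources" ;;
    S*) echo "Basis" ;;
    V*) echo "Sales and Distribution" ;;
    *)  echo "Unknown" ;;
  esac
}

sap_module "F_KNA1_BUK"
```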

Wednesday, September 26, 2012

SAP Users With Roles Not Assigned via Composite Roles

Have you ever needed to list SAP roles that are assigned to user accounts, but show only the single roles that are directly assigned (not single roles inherited through composite roles)?

Here's how you can do it:
Using SE16, get the records from table AGR_USERS with field COL_FLAG = ' ' (blank).

Relate this to the USR02 table via the BNAME field to determine whether the user account is locked or no longer in use.

I've also discovered this can be done in transaction S_BCE_68001394 (Users by User ID).  You just input * into the user ID field, execute the report and then sort the two columns "Direct Assignment" and "Role Type".  This will give you the single roles assigned directly.

Monday, September 24, 2012

Oracle ReCreate DBSNMP User

Here's how to drop and recreate the DBSNMP Oracle user:

> sqlplus / as sysdba

SQL> @?/rdbms/admin/catnsnmp.sql -- drop user

SQL> @?/rdbms/admin/catsnmp.sql -- create user

Wednesday, September 12, 2012

Oracle 11gR2, KEWBMBTA: Maintain BSLN Thresholds failed

If you tighten up the security in an Oracle 11gR2 database, you may be tempted to remove access to the DBMS_OBFUSCATION_TOOLKIT package from PUBLIC ("revoke execute on DBMS_OBFUSCATION_TOOLKIT from PUBLIC;").

This causes an issue, seen in 11gR2, with the automatic baseline (BSLN) maintenance job.
The error was visible in the user traces:

*** 2012-09-11 20:00:43.692
KEWBMBTA: Maintain BSLN Thresholds failed, check for details.

This was tracked to an invalid package body DBSNMP.BSLN.

SQL> alter package DBSNMP.BSLN compile body;

Warning: Package Body altered with compilation errors.

SQL> show errors

-------- -----------------------------------------------------------------
88/5 PL/SQL: Statement ignored
88/19 PLS-00201: identifier 'DBMS_OBFUSCATION_TOOLKIT' must be declared
200/7 PL/SQL: Statement ignored
200/21 PLS-00201: identifier 'DBMS_OBFUSCATION_TOOLKIT' must be declared
241/7 PL/SQL: Statement ignored
242/8 PLS-00201: identifier 'DBMS_OBFUSCATION_TOOLKIT' must be declared
1332/7 PL/SQL: Statement ignored
1332/21 PLS-00201: identifier 'DBMS_OBFUSCATION_TOOLKIT' must be declared

Re-granting execute on the DBMS_OBFUSCATION_TOOLKIT package to PUBLIC fixes the issue:

SQL> connect / as sysdba

SQL> grant execute on DBMS_OBFUSCATION_TOOLKIT to PUBLIC;

Grant succeeded.

SQL> alter package DBSNMP.BSLN compile body;

Package body altered.

Wednesday, September 05, 2012

GRANT or REVOKE on User in DB Link Connection

If you GRANT additional privileges to an Oracle user that is used in a DB link connection (from another database), and the DB link is already open and in use (the target DB has ACTIVE or INACTIVE sessions), you will need to close the DB link connection before you see the new privileges.
The same is true if you grant a new role to the user.

However, if you REVOKE privileges from an Oracle user that is used in a DB link connection, then the changes are seen immediately, the next time the user performs an operation across the DB link.

Monday, September 03, 2012

Oracle Support for GFS/2 Cluster Filesystem

When implementing Oracle into a RedHat Linux based landscape, you may be looking for a cluster filesystem other than Oracle's own ASM offering (or clusterware if you like old hats).
Oracle have updated their support status for GFS/2.

Checking out My Oracle Support doc id 279069.1 for 11g and RHEL 5, it says:
GFS2 is in technical preview in OEL5U2/RHEL5U2 and fully supported since OEL5U3/RHEL5U3.

You can now get that RHEL cluster up and running in production.

Monday, July 16, 2012

Downtime & Language Barriers

Whilst I took some time out on holiday, I was able to get away from the almost non-existent UK summer and off to a much warmer climate.
It got me thinking about the language of IT and the difficulties that non-native English speakers may be exposed to when trying to decipher Oracle docs, SAP notes, or any other form of documentation.

In my experience, I've often had difficulty reading SAP notes that have been translated to English from German and Oracle docs that have been written in English from the off, but badly worded.
I think the Plain English Campaign would have a field day with some of the material I've had to put up with.

If you're working with a mission critical system and you can't be 100% sure on what the documentation is asking you to do, could it reflect badly on you as a professional?  Possibly.

I guess Google Translate is probably the preferred tool for translating notes and documentation, unless the reader is confident enough in what they already know.
So would this mean those that perform IT as a profession, in a non-English speaking country, must know the products more in-depth and rely less on the documentation?  Maybe.

Monday, June 25, 2012

R/3 to ECC - Benefits of not upgrading anything?

This is a question that will probably be asked by many IT persons over the coming months, as SAP draws support for the SAP R/3 4.7 system to a close (see the SAP Product Availability Matrix).
Whilst upgrading to ECC will mean an SAP-supported system, what other options are out there?
Let's look at just a few, so that you may have some ideas that you maybe hadn't considered.

- Stay where you are and pay for extended support.
This is an interesting option.  Let's face it, if you use SAP as a basic product e.g. for accounting or sales transactions, then exactly what else will you need from a product in the future?  Why not save the upgrade costs and simply pay for extended support, and keep paying each time it expires.
Whilst the initial support costs may be known, the future costs are not and SAP could hike these.  Also, there may be a fairly straight upgrade path to a newer product at the moment, but in the future you may have to follow that path, plus the additional paths and intricacies of upgrades to later versions in order to reach something more modern (UNICODE anyone!).
Things like OS support may bite you eventually, and those of you on HP-UX Itanium are already seeing what happens to non-x86 operating systems when companies like Oracle decide to stop supporting you.  Your future upgrade path could involve skill-sets that are no longer available (or are costly), or even lengthier processes because you're moving from older hardware.
On the positive side, the future could hold hope in the form of faster systems, smarter tools and cheaper processes that could make future upgrades/migrations faster and cheaper than doing it now.  A big database in the future may not be so big in relational terms.

- Stay where you are and don't pay for extended support.
You will lose all access to standard SAP support sites and tools, plus you will not benefit from any DB updates or DB vendor support.
This could be very problematic if your business needs to apply SAP legal patches for changes to HR related functions within the SAP modules.
I'm not entirely sure if you will still be able to request SSCR keys for modifying SAP objects, or even be able to develop your own ABAP code in your own system.  Maybe someone can let me know on that one.
Oracle Database support contracts state that you may have to back-pay for support if you decide to re-enable it at a later date.  I'm not sure if SAP would be the same.
You would potentially suffer during external audits if additional security related legislation comes along (SOX for example) and you are not able to apply the updates/functionality to provide that security.

There are some common issues with both options above.  These mainly centre around the IT resources that are supporting those systems.  Nobody likes to stay still in IT.  Not unless they are happy in the knowledge that retirement is looming and they just need to keep rolling in the meantime.
The constant need to keep abreast of the latest technical enhancements/changes is one of the most difficult aspects of the IT profession.
However, with the advent of off-shore IT resources, it should be possible to secure long-term support resources even if you can't secure them on-shore.  Having said that, I don't yet know of any off-shore company that has a high retention level.  Maybe this is coming...

In summary, there are some cost advantages in the short term for not upgrading an SAP system.  But unfortunately those costs may hit you in the end in some form or another.

Monday, June 18, 2012

SAP - Oracle on VMware

Whilst researching the whole Oracle on VMware thing recently, I found a great number of articles on the subject.  Here's a collection of the links I found most helpful whilst getting to know the principles of virtualisation all the way through to publicly available information on the performance of Oracle on the VMware vSphere platform.

Hardware Abstraction Layer definition:
Wikipedia definition
VMware’s definition

Intel VT-x Instruction Sets

VMware ESX and ESXi:
What does ESX stand for
Comparing ESX with ESXi
ESXi v5.0 released
ESXi architecture white paper

VMware vSphere:
Product offerings
Support
Kits
What’s new in v5
Supported guest OS
VMware Oracle support policy
VMware HCL
vSphere licensing and cost
Forrester Report – Total Economic Impact of VMware vSphere Virtualizing Mission-Critical Oracle Databases
Oracle on VMware vSphere 4 Essential Deployment Tips

RedHat Certifies Linux on VMware

Microsoft will support Windows on VMware

Support for Oracle RDBMS and Linux on VMware
Oracle Linux same as RHEL
Oracle licensing
Oracle License & Service Agreement
Oracle Processor Core Factor Table
Oracle VM Tolly Group report
Running Oracle Production Databases on VMware

Tolly Group

Running SAP on VMware
TCO and ROI of running SAP on VMware
SAP Note 1122387 – Linux: SAP Support in virtualized environments
SAP Note 1122388 – Linux: VMware vSphere Configuration guidelines
SAP Note 1492000 – General Support Statement for Virtual Environments
SAP Insider – 3 common misconceptions

Monday, June 11, 2012

Make Blogger Show a List Of Labels By Title Only

NOTE: I have since stopped using feed2js as blogged about below.  I'm writing a new post on how to do this using the Google Feeds API.

Do you use Blogger for your blogging?
One of the shortcomings of Blogger is the categorisation of posts, also known as tagging or labelling.
There is no neat way in Blogger to list all posts for a specific label.  The standard method produces a massive page listing the full contents of all the posts carrying your selected label.
This is not very helpful if you're looking for something and it's also very slow to load.

After spending much time looking for a solution, I found feed2js, a Google Code project that allows you to embed an RSS feed into a web page using JavaScript.

Here's an example I've been using on my blog since day one:
Not only is it just what I need, but I've managed to program it so that it can dynamically take a parameter on the end of the URL and display any specified label.
Example: calling the page with "?label=Musing" will only display posts containing the "Musing" label.

Want to see how it works?
Here's the code:

<div dir="ltr" style="text-align: left;" trbidi="on">
<script type="text/javascript">
<!--
var mylabel='<MyDefaultLabel>';
var key='label';
var regex = new RegExp("[\\?&]"+key+"=([^&#]*)");
var qs = regex.exec(window.location.href);
if(qs != null)
   mylabel = qs[1];
var cssStr = ".rss-title {visibility: hidden; margin: 0em; border: 0px; font-size: 1;} .rss-box {margin: 0em; border: 0px;}";
var style = document.createElement("style");
style.setAttribute("type", "text/css");
if(style.styleSheet){ // IE
  style.styleSheet.cssText = cssStr;
} else { // w3c
  var cssText = document.createTextNode(cssStr);
  style.appendChild(cssText);
}
document.getElementsByTagName("head")[0].appendChild(style);
document.write('<h2>All Posts For Label: ', mylabel, '</h2>');
document.write('<script src="<BLOGURL>%2Ffeeds%2Fposts%2Fdefault%2F-%2F', mylabel, '&amp;desc=100&amp;date=y&amp;utf=y" type="text/javascript"><\/script>');
// -->
</script>
Thanks to <a href=""></a>
</div>

Copy and paste the code (as text, without formatting) into a new Page in your blog (not a Post!, a Page).  Call the new Page all-blog-posts.html
In the code (in "HTML" mode when editing the page), change the line containing "<MyDefaultLabel>" to be your default label name (case sensitive).
You should also change the text "<BLOGURL>" to be your blog's URL without the "http://" bit.
By default the code is set to display 100 characters of each entry under its title.  You can change the number 100 to 0 for none, or to a larger/smaller number.
You can also choose to remove the "Thanks" line, but it's nice to give recognition for hard work.

Now you can call the all-blog-posts.html page from anywhere in your blog (or outside it): just append a question mark, the text "label=", and the label name (case sensitive) to the page URL.

Restarting SAP During Online BRBackup

If your online BRBackup has been started from outside of SAP (e.g. by an enterprise scheduling tool perhaps), then you can safely restart the SAP application without affecting the backup.

Simply use the “stopsap r3” command option to bring down only the SAP application and leave the database running.

If the backup was scheduled within SAP (e.g. through DB13), then you are not advised to do this since SAP will be waiting for the BRBackup return code.

Wednesday, June 06, 2012

Self Extracting tar Archive UNIX

There's a very old post here which shows how to create an executable script with an embedded tar file, which will effectively auto-extract once executed at the command prompt.
Very neat indeed!

NOTE: when accessing the URL, you need to cancel the password prompts multiple times.
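The idea can be sketched in a few lines (my own minimal version of the technique, with made-up file names): a stub script is concatenated with a tar archive, and when run, the stub locates its own payload marker and pipes everything after it into tar.

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Build a payload containing one file.
mkdir payload
echo "hello from the archive" > payload/hello.txt
tar -cf payload.tar payload

# Write the extractor stub; __PAYLOAD__ marks where the tar data starts.
cat > selfextract.sh <<'EOF'
#!/bin/sh
LINE=$(awk '/^__PAYLOAD__$/ {print NR + 1; exit}' "$0")
tail -n +"$LINE" "$0" | tar -xf -
exit 0
__PAYLOAD__
EOF

# Append the tar archive to the stub and make it executable.
cat payload.tar >> selfextract.sh
chmod +x selfextract.sh

# Running it in a clean directory extracts the payload.
rm -rf payload payload.tar
./selfextract.sh
cat payload/hello.txt
```

The `exit 0` before the marker stops the shell from trying to execute the binary tar data appended below it.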

Saturday, June 02, 2012

SAP Statistics & DBSTATC

As I've blogged about before, SAP recommend that you disable the Oracle stats gathering job in Oracle 10g databases, and use their own job controlled through DB13 (or DB13c).

The reason for this is that some of the SAP database tables have specific requirements when it comes to statistics gathering.  These tables are registered in the SAP table DBSTATC.
This table controls which tables should have statistics collected (active or not active) and the method used to collect the stats (estimate or full).
All other tables not in DBSTATC, will have their stats collected using the default BRConnect settings.

If you have custom "Z" tables, you can insert your own entry into the DBSTATC table if you know for sure that your table has specific requirements.

You can use transaction DB20 to check the status of statistics on tables.

When BRConnect runs, you will see in the log file how many tables have stats gathered using the default setting, vs those that are in DBSTATC.

Entries in DBSTATC which are disabled (should have no stats gathered) will be registered in BRConnect's DBCHECK log as having harmful statistics if those tables are found to have statistics on them.  You can then use BR*Tools to remove the harmful statistics, but you should attempt to work out why you have statistics on a table that shouldn't have them, otherwise they may re-appear.

You need to be aware that SAP provide a specific SAP note, 403704, that contains updates/revisions for entries in DBSTATC.  It may be worth reviewing this note once a year, as you may find some performance benefit from updated statistics, or from the removal of statistics on some tables.

Take a look at Tom Cenens' blog for additional information on this subject.

See also the SAP online help for BRConnect.

Tuesday, May 29, 2012

Intel CPU Flags from dmidecode on Linux

If you run the dmidecode command on a Linux machine, you get a nice list of the Intel CPU flags for the CPUs on your machine, plus a handy text description:

FPU (Floating-point unit on-chip)
VME (Virtual mode extension)
DE (Debugging extension)
PSE (Page size extension)
TSC (Time stamp counter)
MSR (Model specific registers)
PAE (Physical address extension)
MCE (Machine check exception)
CX8 (CMPXCHG8 instruction supported)
APIC (On-chip APIC hardware supported)
SEP (Fast system call)
MTRR (Memory type range registers)
PGE (Page global enable)
MCA (Machine check architecture)
CMOV (Conditional move instruction supported)
PAT (Page attribute table)
PSE-36 (36-bit page size extension)
CLFSH (CLFLUSH instruction supported)
DS (Debug store)
ACPI (ACPI supported)
MMX (MMX technology supported)
FXSR (Fast floating-point save and restore)
SSE (Streaming SIMD extensions)
SSE2 (Streaming SIMD extensions 2)
SS (Self-snoop)
HTT (Hyper-threading technology)
TM (Thermal monitor supported)
SBF (Signal break on FERR)
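As an aside, you can see a similar flag list without root access by reading /proc/cpuinfo (an alternative to dmidecode; note the field is named "Features" rather than "flags" on some non-x86 architectures):

```shell
#!/bin/sh
# Print the CPU feature flags from /proc/cpuinfo, one per line, sorted.
grep -m1 -E '^(flags|Features)' /proc/cpuinfo \
  | cut -d: -f2- \
  | tr ' ' '\n' \
  | grep -v '^$' \
  | sort -u \
  | head
```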

Monday, May 21, 2012

Script Korn Shell to use Bold Output

Here's how you can use a normal Korn shell script to put out bold text on the screen providing the emulator you are using supports it:

$ bold=`tput smso`
$ regular=`tput rmso`

$ echo "${bold}Please type in your name: \c"
$ echo "${regular}OK"

See the man page for tput for a wide array of other capabilities.
It's possible you can even use tput to draw in different parts of the screen.
This could be used to create a drop-down box or a menu select box in KSH.
It can be done!  Honest!
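As a small sketch of the cursor-positioning idea (this assumes a terminal whose terminfo entry supports the "cup" capability; xterm is used as a fallback, and the errors are silenced in case no terminfo database is available):

```shell
#!/bin/sh
TERM=${TERM:-xterm}; export TERM

bold=$(tput smso 2>/dev/null)     # start stand-out mode
regular=$(tput rmso 2>/dev/null)  # end stand-out mode
move=$(tput cup 5 10 2>/dev/null) # move cursor to row 5, column 10

# Draw a highlighted prompt at a fixed screen position.
printf '%s%sSelect an option:%s\n' "$move" "$bold" "$regular"
```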

Monday, May 14, 2012

SQLPLUS ORA-01031: insufficient privileges as SYSDBA

This issue is commonly caused by the operating system group you are currently using not having the correct permissions inside the Oracle binaries.  On UNIX, the Oracle binaries are re-compiled after installation with the name of the "dba" group.  If your current user is not a member of that group, then you can't use "AS SYSDBA".  It's a protection measure to restrict which O/S users can access the database "AS SYSDBA".
On Windows, the group is created as a local Windows group into which you need to add your user account (or group).  The local group name is not compiled into the Oracle binaries.

On UNIX, it's possible that when using SQL*Plus to connect to an idle instance AS SYSDBA, you get “ORA-01031: insufficient privileges” even though you have changed the database group in the $ORACLE_HOME/rdbms/lib/config.c file to match your UNIX group and re-compiled the Oracle binaries.

As specified in Oracle note 400459.1, you should move the old $ORACLE_HOME/rdbms/lib/config.o to config.o.old and then re-link oracle using “relink oracle”.

Check that a new config.o file is generated, before performing the “relink all”.

For some reason, using relink doesn't always seem to trigger a rebuild of config.o.
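Putting note 400459.1's steps together, here is a guarded sketch (run as the Oracle software owner; "relink" is Oracle's own script in $ORACLE_HOME/bin; the guard simply makes the snippet a no-op when ORACLE_HOME is not set):

```shell
#!/bin/sh
if [ -n "$ORACLE_HOME" ] && [ -d "$ORACLE_HOME/rdbms/lib" ]; then
  cd "$ORACLE_HOME/rdbms/lib"
  mv config.o config.o.old   # force regeneration of config.o
  relink oracle              # re-link the oracle binary
  ls -l config.o             # confirm a NEW config.o was generated...
  relink all                 # ...before running the full relink
else
  echo "ORACLE_HOME not set - nothing to do"
fi
```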

Friday, May 11, 2012

IBM WebSphere IHS Plugin Hostname Cookies Conundrum

Whilst attempting to set up IBM WebSphere with a remote IBM HTTP Server (IHS) using the WAS Plugin, I was hitting an issue whereby cookies would only be served to the client browser when accessing the IHS server using the real host name and not the DNS alias I had created.

After 2 days of trying different combinations of configuration, I finally found this article here:

The host name DNS alias I had created used underscores.
Whilst this is permitted by the system, it is not a valid host name according to the IETF RFCs, and therefore no cookies are served/retrievable by the WebSphere application server.

In short, always use hyphens instead of underscores in your WAS or IHS DNS aliases.

Thursday, May 10, 2012

Checking Sun/Oracle GlassFish Server Version

Here's how to check the version of Sun/Oracle GlassFish application server which is installed in a Linux environment:

From the bin directory of the GlassFish binary home directory:

$ asadmin version
Version = Sun GlassFish Enterprise Server v2.1
Command version executed successfully.

Monday, May 07, 2012

IBM WebSphere 6.1.0 GUI Install on RHEL 4.5

If you have been given the opportunity to install IBM WebSphere 6.1.0 on RedHat Enterprise Linux 4.5, then you may be wondering why you can't get the GUI installer to launch using the X-Windows terminal you've painstakingly enabled.

Well, the answer is simple: unless you are on the console, you can't.  That is, you must be on DISPLAY ":0.0".
If your sys admin is denying you access, or it's a VM and you're not allowed console access, then you're also stuck.

The only other solution is to use the response files provided in the install directory of the installation media.
Customise the response file and then run the installer in silent mode.
There is a very helpful IBM help page on how to do all of this here:

Good luck.

P.S. Using response files presents a more systematic, repeatable approach to installing an environment.  No mistakes, same every time.
What's more, you can also silently install the "UpdateInstaller" and also the FixPacks.

Monday, April 30, 2012

Oracle AUD$ TIMESTAMP# from 9i to 10gR2

You select TIMESTAMP# from AUD$ in an Oracle 10gR2 or 11g database, but you seem to be missing the most recent audit information.
Your DB was upgraded at some point from 9i or earlier.

SQL> select min(TIMESTAMP#) from aud$;

SQL> select max(TIMESTAMP#) from aud$;

When you select from the DBA_AUDIT_TRAIL view, you see all the records:

SQL> select max(timestamp) from dba_audit_trail;
26-APR-12 AM

The reason is that the TIMESTAMP# column in AUD$ was made obsolete in 10g, but it still contains the old data.
The old data was migrated into the NTIMESTAMP# column, but it will be missing the additional sub-second timings and the timezone:

SQL> select min(NTIMESTAMP#) from aud$;
04-MAR-04 PM

You should use the NTIMESTAMP# column in AUD$ to see the later audit records, post 10g upgrade.
See My Oracle Support note 427296.1 for more detail on this.

The DBA_AUDIT_TRAIL view uses the new NTIMESTAMP# column:

SQL> select max(timestamp) from dba_audit_trail ;
26-APR-12 AM

SQL> select min(timestamp) from dba_audit_trail;
04-MAR-04 PM
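To see the cut-over for yourself, a hedged sketch comparing both columns in one query (requires SELECT on SYS.AUD$):

```sql
-- Compare the obsolete and current timestamp columns side by side.
SELECT TO_CHAR(MAX(timestamp#),  'DD-MON-YY HH24:MI:SS')    old_max,
       TO_CHAR(MAX(ntimestamp#), 'DD-MON-YY HH24:MI:SS.FF') new_max
FROM   sys.aud$;
```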

Thursday, April 26, 2012

Windows Batch File to Install Windows Task

@rem ######################################################################
@rem # Script:  install_task.bat
@rem # Params:  none.
@rem # Desc:    This script installs the task that will run the
@rem #           start.bat on the required schedule.  WinXP or Win2k3
@rem # Author:  D.Griffiths
@rem # Version: 1.0
@rem # History:
@rem #          09-June-2008 - DMG - Created.
@rem #
@rem ######################################################################

@set TN=My_TaskName
@set TR=C:\start.bat
@set ST=05:50:00
@set DAYS=MON

@set /P conf=Press return to install task "%TN%":

@echo Tasks currently scheduled...
@schtasks /Query /FO TABLE

@echo Removing existing task if already present...
@schtasks /Delete /TN %TN% /F

@echo Installing new task "%TN%"...
@schtasks /Create /RU SYSTEM /SC WEEKLY /D %DAYS% /TN %TN% /TR %TR% /ST %ST%
@IF ERRORLEVEL 1 goto invalid_task

@echo Task "%TN%" now scheduled.
@echo Tasks currently scheduled...
@schtasks /Query /FO TABLE
@set /P conf=Press return to exit.
@goto :EOF

:invalid_task
@echo ERROR: There was a problem installing the task.
@echo ERROR: Please try again or check the command syntax.
@set /P conf=Press return to exit.

Monday, April 23, 2012

SAP PI Deletion Job - Repeat Terminated Copy Job First

Whilst implementing a process for the “switch” deletion of XML messages from the SAP PI persistence layer you may end up, at some point, in a situation where the deletion job simply fails to run.

The contents of the job log state: “Repeat terminated copy job first”. No further information is displayed and there are no errors in the system log (SM21).

By executing the ABAP program RSXMB_DELETE_MESSAGES (same as the delete job step) in SE38, you can then double click the error at the bottom of the screen:

Finally you will get an explanation and a process for resolution:

“Call transaction SXMB_MONI and choose Job Overview. Repeat the incomplete job. If the job cannot be completed due to database errors, first correct the database errors, and then repeat the job again.”

It seems the jobs displayed using the "Job Overview" in SXMB_MONI are not simply a view of SM37 jobs, but a view of the underlying job control and enqueue function specific to the messaging framework.

Tuesday, April 17, 2012

SAP PI - Persistence Layer Deletion - Oracle Stats

When configuring persistence layer deletion in SAP PI 7.0, you should be aware that the very first time you enable the “switch” procedure and execute the delete jobs, the new tables will be created in the database.

These tables (SXMS*2) will NOT have any database statistics on them.
Therefore, once the switch procedure is completed, until your stats gathering job is executed in DB13, your PI system may run very slow indeed!

Either schedule a one-off stats gathering after the very first “switch”, or schedule the stats gathering frequently (e.g. daily).
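As a sketch, a one-off gather for one of the new switch tables might look like this (the schema owner SAPR3 and table name SXMSPMAST2 are assumptions — check the table names actually created on your own system):

```sql
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'SXMSPMAST2', cascade => TRUE);
```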

Sunday, April 15, 2012

Oracle 10.2 Data Types (oacdty)

Oracle 10.2 data types (oacdty) for use when you're (bravely) exploring an Oracle trace:

2    NUMBER
8    LONG
12   DATE
23   RAW
25   LONG UB2
26   LONG SB4
69   ROWID
96   CHAR or NCHAR
113  BLOB
178  TIME

Thursday, April 12, 2012

Negative Values in V$TEMP_SPACE_HEADER

Whilst trying to figure out a reliable method of monitoring temporary tablespace usage I came across this problem.
The v$temp_space_header bytes_used column was displaying negative values.

Metalink document 467435.1 states quite simply that the temp file header(s) may have been corrupted.

The solution: drop and re-create the temp file.
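A sketch of the drop/re-create; the tempfile path, tablespace name and size below are placeholders:

```sql
-- Drop the corrupt tempfile (and its OS file), then add a fresh one.
ALTER DATABASE TEMPFILE '/oradata/SID/temp01.dbf' DROP INCLUDING DATAFILES;
ALTER TABLESPACE temp ADD TEMPFILE '/oradata/SID/temp01.dbf' SIZE 2G;
```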

Tuesday, March 13, 2012

SAP FBL5N and Change Document Authorisation

After spending tedious amounts of time in SU24 and performing an authorisation trace I was unable to work out how users got “Change Document” access in FBL5N (Customer Line Item Display).

Access to change documents via transaction FBL5N, which is inherently a display-only transaction, is controlled by giving transaction FB02 to the user.
You have to look in the source of program RFITEMAR:

Adding transaction FB02 (plus maintaining the subsequent authorisation activities) to a user's role, as well as FBL5N, provides the “Change Document” button on the menu bar in FBL5N:

Thursday, March 08, 2012

SAP Performance FBL3n - Oracle Execution Plan Detail

Here’s a good reason to ensure that you always output (and read) the predicate information of the execution plan at the Oracle level and not just at the SAP level.

Inside the ST01 SQL Trace screen (in R/3 4.7 anyway), it doesn’t show how the predicates are accessed/filtered.

But in the Oracle view of the execution plan (I used autotrace, or DBMS_XPLAN) you can clearly see the “Predicate Information” section:

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 281 | 10 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| BSAS | 1 | 281 | 10 (0)|
|* 2 | INDEX RANGE SCAN | BSAS~Z2 | 1 | | 9 (0)|

Predicate Information (identified by operation id):
2 - access("MANDT"=:A0 AND "BUKRS"=:A1 AND "HKONT"=:A2 AND

This allowed me to see that the index BSAS~Z2 was not being accessed correctly due to a “feature” in the Oracle 10g CBO (see note: 176754, question #5) which SAP has documented as:

“If you specify indexed columns with LIKE, >, <, >=, <=, or BETWEEN, the system can no longer use the columns that follow to restrict the index range scan.”

After creating a new index BSAS~Z3 which contains exactly the same columns, but changing the order of the last two (BUDAT and XBLNR), I was able to get the execution plan to look like this:

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
| 0 | SELECT STATEMENT | | 1 | 281 | 1 (0)|
| 1 | TABLE ACCESS BY INDEX ROWID| BSAS | 1 | 281 | 1 (0)|
|* 2 | INDEX RANGE SCAN | BSAS~Z3 | 1 | | 1 (0)|

Predicate Information (identified by operation id):
2 - access("MANDT"=:A0 AND "BUKRS"=:A1 AND "HKONT"=:A2 AND

What's the difference, I hear you ask?  Just look at the COST column.
In a table with 100 million records, the difference is about 20 seconds.
Filter predicates are not as efficient as access predicates (just ask Tom).

This example was fairly specific to transaction FBL3n performance, since I noted that whenever this transaction was used to query open items, it always adds a predicate of “BUDAT < 99991231” before any of the dynamic selection predicates. Bad SAP!
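The index change can be sketched as follows. Only MANDT, BUKRS and HKONT are confirmed by the predicate information above; the full column list here is an assumption — the point is simply that the range-scanned column (BUDAT) moves to the end:

```sql
-- BSAS~Z3: same columns as BSAS~Z2, but with the last two swapped,
-- so that BUDAT (used with "<") no longer blocks the columns after it.
CREATE INDEX "SAPR3"."BSAS~Z3"
  ON "SAPR3"."BSAS" (MANDT, BUKRS, HKONT, XBLNR, BUDAT);
```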

Thursday, March 01, 2012

Proving new Oracle Index in SAP Without a Transport

Let’s suppose you have created a new Oracle level index on a SAP table using SE11.
You’ve created it in DEV and proven that it changes the EXPLAIN PLAN using SQL Trace or Oracle level tools.

Unfortunately you don’t have a significant amount of data in your DEV instance, but you do have data in your UAT or integration test system.
You could release your workbench request that you will have created to create the new index in SE11, but this means a flurry of potential additional transports to remove it again if it doesn’t work.
You just want a proof of concept that can be removed again.

Since it's entirely possible to create an index at the Oracle level below SAP, you just need to ensure that you create the index exactly as SAP would.
You can choose a couple of methods to ensure this, but we will discuss the SAP-biased method.

Using SE11, find the table on which the new index exists in DEV.

NOTE: We’re showing R/3 Enterprise, but it should be similar for Netweaver based systems.

Click the “Indexes...” button to list the indexes:

Double click the new index you created earlier.
Now on the menu, select “Utilities -> Activation Log”:

Click the magnifying glass icon next to the “Technical log for mass activation” option on the main screen.

Expand all the possible expansion icons:

Look for the text “CREATE” to find the create statement:

Copy the text and paste it into SQL*Plus in any other SAP system's Oracle database to create the index.
You’re done.

Just remember that this index does not exist in the SAP data dictionary, so you must remove it once you have proven the EXPLAIN PLAN is working for the larger amount of data.

Monday, February 27, 2012

Running Oracle Production Database on VMware

NOTE: For research links, see my later post here.

Are you considering running Oracle production databases on VMware?
Obviously you've considered the Oracle support policy on this.
How likely is it that you will get asked to "make it physical"?  Or how will Oracle support deal with your call if you tell them you're on VMware?

Well, there are some harsh responses from Oracle towards VMware customers if you have access to Metalink (sorry, My Oracle Support).
First you should read this doc: 1071005.1 (HOTSPOT ERROR DURING 32-BIT 11GR2 CLIENT INSTALL ON 64-BIT (X86_64) SUSE (VMWARE)).
Then read this document: 1075717.1 (Installing 32-bit RDBMS Client software on x86_64 Linux.)
I don’t think the first problem’s resolution is justified.  Do you?
I have found Oracle notes that state the following:

"(1) Make sure you are logged into the Server Console directly, and that you are NOT trying to install the patch over a remote connection (such as Terminal Services, Remote Desktop, Timbuk2, PCAnywhere, VNC, VMWare, etc.) "

"As for installing over a remote connection, we do not support this, because Oracle cannot control the way the permissions are setup, over the remote connection."

So VMware console is a remote connection...

Then there is this document:

This doc is an Oracle whitepaper which details SAP NetWeaver / Oracle Database 10gR2 RAC on Windows 2003.
In the doc, it shows an example hosts file which is clearly from a VMware-hosted server.  Double standards, I feel!

This is just not a nice place to be if you're trying to convince a company that VMware is a solid, supported platform for running production databases, and that Oracle even supports RAC on it now.  Maybe some of the notes are old, from before Oracle accepted that VMware is becoming big in many companies.  Or maybe Oracle's RAC support is an illusion of goodwill, whilst they quietly (or not so quietly) improve the Oracle VM product.
The best way to tell would be Oracle accepting VMware's vCPU allocation as a form of hard partitioning, so that you can bring the Oracle DB licence cost down by running on VMware, in the same way you can on Oracle VM.

Sunday, February 26, 2012

_fix_control bug

It turns out that the issue with the Oracle init param: "_FIX_CONTROL" preventing the database from starting (I blogged about this before), is bug 7509689: "_FIX_CONTROL" IS CAUSING TROUBLE DURING STARTUP.

It's fixed in

Friday, February 17, 2012

Oracle 10g Segment Advisor Usage

Generally, the Oracle 10g Segment Advisor collection job runs automatically out-of-the-box in an Oracle 10g install.
It's useful to run the advisor sometimes on large indexes, as it should be able to report whether the index is efficiently storing index records, or whether it could be re-built.
Oracle Enterprise Manager is already capable of pulling Segment Advisor information from the Oracle database.

Here are some links to the Oracle docs:

Oracle Doc: 10g Advisors
Oracle Doc: 10g Segment Advisor
Oracle Doc: Manually Running the Segment Advisor to Reclaim Wasted Space
An excellent whitepaper on pro-actively managing space in the Oracle 10g database.

The collector job (Automatic Segment Advisor) analyses segments in the database, comparing them with AWR data, to produce recommendations which can then be reported on.
You can check the last run of the job using the SQL below:

-- Check the DBA SEGMENT ADVISOR job is collecting data.
COL start_date FORMAT A20
COL run_duration FORMAT A15
COL job_name FORMAT A30
SELECT TO_CHAR(actual_start_date,'DD-MM-YY HH24:MI:SS') start_date,
       run_duration,
       job_name
FROM   dba_scheduler_job_run_details
WHERE  owner='SYS'
AND    job_name='AUTO_SPACE_ADVISOR_JOB'
ORDER BY actual_start_date;

Once the collector is running, you can query the database using the DBMS_ADVISOR package to create analysis tasks that will query the recommendations and produce a recommendation report.

NOTE: The collector job does not analyse every object.

Below is the process I used to create a simple task to analyse a specific table and an index:

-- Create some variables to hold our task details.

-- Create a new empty task, which will populate the variables just defined.

-- Assign a table and index check to the new task (replace <TABLE NAME> and <INDEX NAME>).

-- Execute the task.

NOTE: On a 40GB table this took approximately 10 minutes.

-- Check the results in two ways:

-- or use DBMS_SPACE package (recommended).


-- Delete the task at the end.
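Pieced together, those steps might look like the sketch below, using the standard DBMS_ADVISOR API from SQL*Plus (the task name and owner SAPR3 are assumptions; replace the placeholders as before):

```sql
VARIABLE task_id   NUMBER
VARIABLE task_name VARCHAR2(255)

DECLARE
  obj_id NUMBER;
BEGIN
  :task_name := 'MY_SEG_ADV_TASK';

  -- Create a new, empty Segment Advisor task.
  DBMS_ADVISOR.CREATE_TASK('Segment Advisor', :task_id, :task_name);

  -- Assign a table and an index check to the new task.
  DBMS_ADVISOR.CREATE_OBJECT(:task_name, 'TABLE', 'SAPR3', '<TABLE NAME>', NULL, NULL, obj_id);
  DBMS_ADVISOR.CREATE_OBJECT(:task_name, 'INDEX', 'SAPR3', '<INDEX NAME>', NULL, NULL, obj_id);

  -- Execute the task (approximately 10 minutes on a 40GB table).
  DBMS_ADVISOR.EXECUTE_TASK(:task_name);
END;
/

-- Check the results.
SELECT message FROM dba_advisor_findings WHERE task_name = :task_name;

-- Delete the task at the end.
EXEC DBMS_ADVISOR.DELETE_TASK(:task_name);
```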

Tuesday, February 14, 2012

Disabling Change or Deactivation of SAP Audit Logging SM19 - SAL

To disable changing or deactivation of the SAP Audit Logging (SAL) settings via transaction SM19, you can remove the authorisation object S_ADMI_FCD activity AUDA.

This would prevent “normal” BASIS administrators from being capable of changing or disabling audit logging, but permit you to provide an emergency user with this capability (you will need to change SAL settings in SM19 at some point!).

Sunday, February 12, 2012

ORA-01114 IO Error But What Is The Oracle File

Whilst creating a new 10GB index on a large 40GB table I was experiencing the
"ORA-01114: IO error writing block to file <file#> (block # <block#>)" error.

The vital piece of information is the file number (<file#>).  This can be found in the DBA_DATA_FILES or DBA_TEMP_FILES views as the FILE_ID column.

So, you construct a query to go find the file name:

SELECT file_id, file_name
FROM dba_data_files
WHERE file_id = <the number>;

SELECT file_id, file_name
FROM dba_temp_files
WHERE file_id = <the number>;

NOTE: I query both just in case. I’ve worked on systems where the TEMP tablespace has not been created as “TEMPORARY”!

After running the query, you get “0 rows returned”.

Well, in this case the TEMP tablespace was a proper TEMPORARY tablespace, and the file_id did eventually turn out to be in the DBA_TEMP_FILES view.

I knew that the file numbers were somehow linked to the number of permitted files in the database.  This is controlled by the DB_FILES database parameter.
Looking at the Oracle documentation (for 10g) here:

You will see that Oracle states that:
...the DB_FILES initialization parameter indicates the amount of SGA space to reserve for datafile information and thus, the maximum number of datafiles that can be created for the instance.

In short, this means that the DB_FILES parameter only applies to datafiles and not tempfiles.
Therefore, all other files will be numbered outside this parameter's range.

My parameter DB_FILES was set to 1024.
My reported file# in the ORA-01114 error was greater than 1024.
If you look at the file_id values for the DBA_TEMP_FILES view, you'll notice that the file_id numbers conflict with the file_id values in DBA_DATA_FILES.
Since the tempfiles exist outside the range of DB_FILES, I subtracted the value of parameter “DB_FILES” from the reported file#, to achieve the file_id for tempfiles belonging to temporary tablespaces.

Sure enough, when I subtracted 1024 (my DB_FILES value) from my reported file id in the ORA-01114, I got 2, which equated to the second tempfile in the TEMP tablespace.
The partition that the tempfile was sitting in was 95% full, but the tempfile couldn't expand by the requested next extent size as it would be more than the partition free space.  Increasing the partition size fixed my problem.

Is there another way of viewing the online file# vs the DBA_x_FILES file_id field?  I don't know, but if I find one, I'll post it here.
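One possible approach, as a sketch: do the subtraction in SQL against V$TEMPFILE, whose FILE# column holds the relative tempfile number:

```sql
-- Map the absolute file# from ORA-01114 back to a tempfile name.
SELECT t.file#, t.name
FROM   v$tempfile t
WHERE  t.file# = &reported_file_no
              - (SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'db_files');
```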

Wednesday, February 08, 2012

SAP Short Dump SYSTEM_NO_TASK_STORAGE

A background job has produced a short dump with SYSTEM_NO_TASK_STORAGE.
The “Heap Memory” section in ST02 showed a maximum use of 1.5GB, but we had allocated nearly 1.9GB according to the "abap/heap_area_nondia" and "abap/heap_area_dia" instance profile parameters.

The ST02 short dump analysis shows the source code line where the problem occurred.  The line of code doesn't look especially interesting; it does, however, require a slight increase in memory allocation.
If the program is running in DIALOG then it will be allocated Extended Memory first, followed by HEAP (local process) memory.
If the program is running in BACKGROUND then it will be allocated HEAP memory first, followed by Extended Memory.

Extended Memory is pre-allocated at system startup according to the EM initial setting in the SAP instance profile.  It is then increased up to the maximum specified in the instance profile per user and per application server.
HEAP memory is only allocated as it is needed within each of the SAP dw.* OS processes, up to the maximum specified (in the instance profile) per user or per application server.  When the work process finishes processing and has used more than the amount set in parameter abap/heaplimit, SAP restarts the work process, releasing the memory back to the OS.

Since this was a background job, we can assume that we exhausted HEAP memory and should have automatically switched to Extended Memory.
However, the Kernel section of the short dump showed that we may have experienced an issue obtaining more virtual memory from the OS:

*** Error in libunwind: Out of memory. Try with a higher value >
 > for UNWIND_RESERVE_MEM (current value = 16).
 (0) 0x40000000017ae480 [dw.sapXXX_DVEBMGS01]
 (1) 0x40000000017ae2b0 [dw.sapXXX_DVEBMGS01]
 (2) 0x40000000021de880 [dw.sapXXX_DVEBMGS01]
 (3) 0x40000000021e45d0 [dw.sapXXX_DVEBMGS01]
 (4) 0x400000000115e860 [dw.sapXXX_DVEBMGS01]
 (5) 0x400000000120b3a0 [dw.sapXXX_DVEBMGS01]
 (6) 0x40000000010542f0 [dw.sapXXX_DVEBMGS01]
 (7) 0x4000000001111940 [dw.sapXXX_DVEBMGS01]
 (8) 0x40000000011ae790 [dw.sapXXX_DVEBMGS01]
 (9) 0x40000000012fb090 [dw.sapXXX_DVEBMGS01]
 (10) 0x40000000012fd7d0 [dw.sapXXX_DVEBMGS01]
 (11) 0x400000000188cae0 [dw.sapXXX_DVEBMGS01]
 (12) 0x4000000001896e70 [dw.sapXXX_DVEBMGS01]
 (13) 0x4000000001891670 [dw.sapXXX_DVEBMGS01]
 (14) 0x40000000018949a0 [dw.sapXXX_DVEBMGS01]
 (15) 0x400000000187f1e0 [dw.sapXXX_DVEBMGS01]
 (16) 0x40000000014d1ce0 [dw.sapXXX_DVEBMGS01]
 (17) 0x400000000149e8d0 [dw.sapXXX_DVEBMGS01]
 (18) 0x4000000001496fc0 [dw.sapXXX_DVEBMGS01]
 (19) 0x4000000001364c30 [dw.sapXXX_DVEBMGS01]
 (20) 0x4000000000ed4af0 [dw.sapXXX_DVEBMGS01]
 (21) 0x4000000000ed4a90 [dw.sapXXX_DVEBMGS01]
 (22) 0xc000000000045880 main_opd_entry + 0x50 [/usr/lib/hpux64/]

In this case, you could either add more OS memory (essentially you have over-allocated memory somewhere), or you could resize the abap/heap_area_nondia parameter to a smaller amount, so that background work processes switch to Extended Memory sooner.
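For illustration only, the relevant instance profile parameters might then look something like this (the values are hypothetical — size them against your host's real memory):

```
# Values are hypothetical - size against your host's real memory.
abap/heap_area_nondia = 1000000000
abap/heap_area_dia    = 1900000000
abap/heaplimit        = 40000000
```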

Wednesday, February 01, 2012

Documenting an SAP ABAP System Technical Configuration

The following are my thoughts for documenting the technical configuration of an existing SAP ABAP stack:

ARCHITECTURE OVERVIEW
OPERATING SYSTEM DETAILS
DATABASE SYSTEM DETAILS
    SAP SCHEMAS
    DB LINK INTERFACES
SAP KERNEL DETAILS
SAP COMPONENT DETAILS
ABAP STACK DETAILS
    OPERATION MODES
    STANDARD SAP JOBS & SCHEDULES
    NON-STANDARD JOBS & SCHEDULES
    SPOOL SERVERS
    OUTPUT DEVICES
    SYSTEM USERS
    DATABASE CHECKS AND JOBS
    CCMS MONITORING
CUSTOM DEVELOPMENTS
    PROGRAMS AND REPORTS
    FUNCTION MODULES
        System Landscape Directory
        Central User Administration
        Solution Manager Data Collection
        Integration Server
        Web Services
        JDBC Connectors
SNOTES AND REPAIRS