
Thursday, December 26, 2013

HowTo: Find Version of SAP BWA/BIA (Accelerator)

The SAP BWA (BW Accelerator) is based on the TRex search service and uses dedicated hardware to provide an additional in-memory index search capability for an existing SAP BW system.  NOTE: This is not to be confused with the SAP HANA DB, which is also in-memory; HANA is a more advanced and fully rounded product and is not related to TRex.

Scenario: You may know there is a BWA connected to your BW system, but you don't know where it is and what version it is.  You may need to consider this information in preparation for an upgrade.
The BWA details can be seen from the BW system via transaction TREXADMIN.
The "Summary" tab shows all the revision details and the make and model of the dedicated hardware:


Additional version information can be seen on the "Version" tab, where you can also see any additional load balancing nodes in the TRex landscape:


Connectivity to TRex is performed either via RFC server on the TRex server (BWA 700) or via the ICM (BWA 720+).
The TRex Web Service can be accessed via "http://<trex server>:3xx05/TREX" (where xx is the TRex instance number).
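As a quick sketch of how that URL is composed (the hostname and instance number below are made-up example values):

```shell
# Build the TREX web service URL from the host and two-digit instance number.
# Port pattern: 3 + <instance> + 05, e.g. instance 42 -> port 34205.
TREX_HOST="trexhost01"   # hypothetical hostname
TREX_INST="42"           # hypothetical two-digit TRex instance number
TREX_URL="http://${TREX_HOST}:3${TREX_INST}05/TREX"
echo "${TREX_URL}"
```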
The "Connectivity" tab allows you to perform connectivity tests for RFC and HTTP to the BIA.
For RFC based connections, once registered at the gateway, you can see the detail in transaction SMGW (select "Goto -> Logged on Clients"):


You can see the TRex connections based on the "TP Name" column:


For ICM based connections, you will see the HTTP requests going out via the ICM in transaction SMICM.
For SAP notes searches, the component for the BWA is BC-TRX-BIA.

Thursday, December 19, 2013

SAP Software Provisioning Manager System Copy

From NetWeaver 7.0 onwards, you need to use the SAP Software Provisioning Manager (SWPM) to perform system copies.
As an example, a production MS SQL Server database PX1 is restored from backup onto a training system database server as the TX1 database.
After the restore is completed, your TX1 database is refreshed, but the schema inside the database is still the source system schema and there are post-copy tasks to perform.
What media is needed?
You will need the following media:
  • SWPM SAR file.
  • The 700 and 720 SL-Controller file updates are not needed for system copies (unless you are installing/re-installing a Java AS).
  • You need both the 720_EXT unicode and non-unicode Kernels for SWPM (they are specific Kernels located in the SWDC under the NetWeaver download location.  They specifically say "... for SWPM").  You will need the UC kernel even if your system is NUC!

  • You need the source system DDIC password for 000.
  • You need the password for the <sid>adm user.
  • You need the password for the SAPService<SID> (Windows) user.
  • The software media is around 4GB in size!
The SWPM performs approximately the following tasks, taken from the detailed timings report at the end of a run (I've added my own comments).
I thought this might be useful to anyone new to the process:
  • Update System DLLs (not sure what DLLs, maybe VC++ Redist)
  • Re-Create users for SAP system (in database?)
  • Re-Install common system files (maybe SAPMMC files, HostAgent)
  • Configure database client (SNAC is already installed? maybe install MS JAVA JDBC JARs)
  • Cleanup of temp database objects.
  • Create/modify system database schema.
  • Migrate objects to new schema.
  • Delete old schema.
  • Set compatibility level for databases (sets to compatibility 100 for DB, tempdb, model, master)
  • Create SAP stored procedures.
  • Any MS SQL Server specific tasks (set PAGE_VERIFY to CHECKSUM, set AUTO_UPDATE_STATISTICS_ASYNC to ON, set ANSI_NULLS to ON, set AUTO_UPDATE_STATISTICS to ON, set AUTO_CREATE_STATISTICS to ON, configure FILESTREAM, reconfigure parallelism and min and max memory settings, enable xp_cmdshell).
  • Grant database access for the <sid>adm user,  SAPService<sid> user and SAP_<SID>_LocalAdmin group.
  • Configure database parameters.
  • Perform post-load activities.
  • Check DDIC password (client 000, if you don't have it right it prompts you to re-enter it here).
  • Perform table depooling (RUTPOADAPT)
  • Activate SQLScript Proxies (DBPROC_ACTIVATE_PROXIES)
  • Create DDL Views (RUTDDLSCREATE)
  • Restart instance.
At the end of the process, the schema is migrated and the system is ready to use.
There are some additional database-specific tasks to perform; you need to check the SAP notes for your specific database instance.
See SAP Note 888210 - "NW 7.**: System copy (supplementary note)"

Thursday, December 12, 2013

SAP Unicode Conversion Sample DB Table Size

As an example, a non-Unicode SAP table (DBTABLOG) in an Oracle 11gR2 database (character set ALE32UTF16) without any Oracle compression, was ~80GB in total size (sum of table segments, excluding indexes).

Once this was exported to disk using R3load export, in preparation for a conversion to Unicode, it occupied ~70GB in the data files (DBTABLOG.00*).

Once this was re-imported into a new Oracle 11gR2 database with character set UTF8 (again, no compression) it occupied ~90GB (sum of segments, excluding indexes).

You must remember that this table is specific in its usage.  It doesn't have any rows deleted from it; it's append-only, so it should grow in a nice uniform manner and not be fragmented.  There may be other tables where you could save space.
If you notice that your R3load export files are significantly different in size compared to the Oracle size, then you could have some serious fragmentation inside your Oracle database.
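As a rough sketch of that comparison, using the example sizes from this post (real values would come from DBA_SEGMENTS and the sizes of the export dump files):

```shell
# Example values from this post: ~80 GB of table segments in Oracle,
# exported to ~70 GB of R3load dump files.
DB_SIZE_GB=80
EXPORT_SIZE_GB=70
# Integer percentage of the in-database size that the export occupies.
RATIO=$(( EXPORT_SIZE_GB * 100 / DB_SIZE_GB ))
echo "Export is ${RATIO}% of the in-database size"
```

A ratio dramatically lower than this for a comparable table could indicate fragmentation worth investigating.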

Thursday, December 05, 2013

SWPM System Copy rdisp/msserv

Scenario: During a SAP NW731 system copy, the SWPM prompts you to set the rdisp/msserv and rdisp/msserv_internal parameters in the SCS instance profile.


The problem is, these are already set in the SCS instance profile.  You can clearly see them there.

The solution is to ensure that the same parameters and values are also set in the DEFAULT.pfl profile file too.  This is not so clear, but it is mentioned in SAP note 1421005.

Thursday, November 28, 2013

SAP SWPM Checks Windows Virtual Hostname Setup

When you come to use SWPM for the first time to copy a SAP system which has a virtual hostname defined and used, be aware that the SWPM has in-built checks to ensure that the Windows registry is correctly configured according to SAP note 1564275.

If you've not fully implemented the note configuration in the Windows registry, even if you're using the "SAPINST_USE_HOSTNAME=<hostname>" SAPInst.exe parameter, the SWPM will show an error and exit.

My specific setup was missing the DisableStrictNameChecking setting.  After adding this into the registry, I was able to launch SWPM.

Sunday, November 24, 2013


During the SAP HANA install, you get an error in the setup.glf log file for hdbnsutil failing with return code 6 (signal 6).

The error could be caused by the call of "hdbnsutil -initTopology" which creates the system and the initial license.   An out of memory error could occur if this is a VM with less than ~5GB of RAM allocated (hdbnsutil tries to allocate 4GB of memory).

You can check if this is the cause of the error, as hdbnsutil will produce a trace dump in the directory "/usr/sap/<SID>/HDB<##>/<host>/trace/".
Check the trace files "nameserver_<host>.*.*.trc" for the error "OUT OF MEMORY".
If this is the case, increase the memory to the VM and restart.
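The trace check above can be done with a simple grep. The sketch below simulates it in a temp directory so it is runnable anywhere; on a real system you would point grep at "/usr/sap/<SID>/HDB<##>/<host>/trace/" instead (the file name and content here are made up):

```shell
# Simulate a nameserver trace directory containing an out-of-memory dump.
TRACE_DIR=$(mktemp -d)
printf 'allocator: assign failed\nOUT OF MEMORY\n' \
  > "${TRACE_DIR}/nameserver_myhost.30001.000.trc"
# List any nameserver trace files containing the out-of-memory marker.
MATCHES=$(grep -l "OUT OF MEMORY" "${TRACE_DIR}"/nameserver_*.trc)
echo "${MATCHES}"
```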

Thursday, November 21, 2013

SAP Solution Manager Central J2EE Monitor Bug

There is a small bug in the ABAP code when registering a SAP system to be monitored in a central system (e.g. Solution Manager) in transaction RZ21.
This was found on a Solution Manager 7.1 system.

The scenario is:
You try to register a remote monitor entry in RZ21: you enter the message server details of the system and the passwords for the CSMREG and <sid>adm users, then click Save.  The screen gets to the "generating logical ports..." section, but you notice that it mentions a non-existent instance number for the system it's generating the ports for, e.g. instance 00 instead of 01.
At the end of the process, you get an error about invalid XML or an XML error in the error message panel.

The reason:
If the SAP system to be monitored has an instance using an HTTP port with fewer than 3 digits (e.g. port 80), the ABAP code in the Solution Manager incorrectly determines the instance number of the instance to be “00”.
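A hypothetical reconstruction of the flaw (this is NOT the actual ABAP code, just an illustration of the failure mode): SAP ICM HTTP ports conventionally look like 80<NN>, where NN is the instance number, so a derivation that strips the leading "80" breaks when the port is just "80".

```shell
# Illustration only: derive an instance number by stripping the "80" prefix.
instance_from_port() { printf '%s' "${1#80}"; }
instance_from_port 8001; echo   # a 4-digit port yields instance "01"
instance_from_port 80; echo     # bare port 80 yields nothing, defaulted to "00"
```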

To get around the issue, the ABAP debugger had to be used to manually adjust the instance number prior to it being saved.

Create a breakpoint in the code as follows.
Use SE38 to display the source of include LSALWS_CONFIGF01:


Scroll to line 1276:


Click to add a new session breakpoint:



Then, before you click “Save” in RZ21 after entering the message server details and passwords, type “/h” into the command bar:


Then click Save in RZ21.
Once the breakpoint line is reached, the debugger will be launched and displayed.
Change the value for the variable “ws_instance_list-inst” from “00” to the correct instance value (in our case it was “01”), then continue the execution from the debugger; the entries will be created correctly.

For systems with multiple instances, you need to change the value for each instance during the loop, so ensure that you do not remove the session breakpoint until the save process is completed.

If you have tried to register the system in RZ21 before, and it failed, there may be an invalid entry now present in the Topology view in RZ21.
Simply delete the invalid segment from RZ21.

Thursday, November 14, 2013

Overview of Tasks for SAP NW731 System Copy - Java

Below is an overview of the tasks associated with an SAP NW731 system copy for Java on Windows with MS SQL Server (see the ABAP tasks here).  Essentially, this is what I document when I go through the process:
  • Current Base Details (Kernel version etc)
  • Current Profile Files
  • Source Server IDs (Java node IDs)
  • Current J2EE License
  • Current SSL Certificates (export PSEs)
  • Current Database Files
  • Source Database Files
  • Source Disk Usage
  • Target Disk Capacity
  • Current SQL Server Version
  • Current Windows Hotfixes
  • Detach Old Target Database
  • Delete Old Database Files
  • Create Additional Data File Locations
  • Restore Database as Target
  • Rename Logical Files (MS SQL Server)
  • Deploy SWPM
  • Start Java SCS
  • Launch SWPM
Follow standard system copy doc:
  • Adjust server Ids (Java node Ids)
  • Restart Central Instance
  • Remove Old SSL Key Store Views
  • Re-Generate Certificates
  • Re-Configure SSO
  • Database Stats Collection
  • Configure & Schedule Database Backup
  • Adjust Default Database Connection for <sid>adm User
  • Post Process Tasks
  • Release VM Snapshot (or send tapes offsite)
  • Adjust AS-Java Instance VM Settings (de-tune memory)

Tuesday, November 12, 2013

SAP HANA "ram or cpu check failed"

When running the SAP HANA 1.0 installation, you may see the error "RAM or CPU check failed".
If you're trying to run HANA in less than 4GB of memory, then you will not be able to; this is a hardcoded check value.
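Before launching the installer, you can do a quick memory sanity check yourself. On a real Linux host you would read /proc/meminfo directly (e.g. `grep MemTotal /proc/meminfo`); the sketch below parses a sample line so it is runnable anywhere (the value mimics a VM just under 4 GB):

```shell
# Derive whole GB from a /proc/meminfo-style MemTotal line (sample value).
MEMINFO_LINE="MemTotal:        4046436 kB"
MEM_GB=$(echo "${MEMINFO_LINE}" | awk '{ printf "%d", $2/1024/1024 }')
# Anything below 4 here would trip the installer's hardcoded check.
echo "Whole GB of RAM: ${MEM_GB}"
```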

Thursday, November 07, 2013

Overview of Tasks for SAP NW731 System Copy - ABAP

Below is an overview of the tasks associated with an SAP NW731 system copy for ABAP on Windows with MS SQL Server (see the Java tasks here).  Essentially this is what I document when I go through the process:

  • Current Base Details - Current Kernel version etc.
  • Current Profile Files
  • Current ABAP License
  • Current SSL PSEs (export PSEs)
  • Current Transport Management System Config
  • Current Database Files
  • Source Database Files
  • Source Disk Usage
  • Target Disk Capacity
  • Current SQL Server Version
  • Current Windows Hotfixes
  • Download SWPM
  • Download Kernel 7.20 SWPM Install Media (UC + NUC)
  • Upload Required Media to Target Server
  • Identify Source Database Backup Media
  • Note Last Backups of Target Database
  • Shutdown Target SAP System
  • Snapshot of Server VM (or Full Server Backup)
  • Detach Old Target Database
  • Delete Old Database Files
  • Create Additional Data File Locations (that don't exist in target)
  • Restore Database as Target
  • Rename Logical Files (MS SQL Server)
  • Deploy SWPM
  • Launch SWPM
Follow on in standard copy doc:
  • Apply Tasks in SAP Note 1817705
  • Truncate specific Tables
  • Check Installation
  • Stop Background Jobs
  • Remove RFC Connections
  • Lock and Adapt Printers & Spool Servers
  • Execute RSPO0041 to Remove Spool Requests
  • Execute RSBTCDEL2 to Remove Job Logs
  • Check Background Job Servers
  • Import Profiles (RZ10)
  • Check Operation Mode (RZ03)
  • Check Logon Groups (SMLG)
  • Check RFC Server Groups (RZ12)
  • Check tRFCs (SM58)
  • ReCreate PSE (STRUST)
  • Change Logical System Name (BDLS)
  • Check Custom External Commands (SM69)
  • Re-enable Background Jobs
  • Schedule Database Check (DB13)
  • Configure STMS
  • Database Stats Collection
  • Re-Install License Key
  • Migrate SecStore (SECSTORE)
  • Access SM21
  • Configure & Schedule Database Backup
  • Adjust Default Database Connection for <sid>adm User (MS SQL)
  • Release VM Snapshot (or send tapes offsite)

Thursday, October 31, 2013

SAP Unicode Conversion MAXDOP Parallelism on MS SQL Server

When performing a Unicode conversion of an SAP system running on MS SQL Server database, you get pointed to the Unicode Collection Note 1319517.
This note points to a MS SQL Server specific note (Note 1054852 - Recommendations for migrations to MS SQL Server)  which covers a couple of different scenarios, such as moving to SQL Server from another database provider, or just performing an R3load export/import (like me) to the same platform.

Now from experience, I know that the R3load import is actually pretty quick (especially on Oracle with DIRECTPATH).  What takes the time is the index creation afterwards.  Lots of scanning and I/O needs to be done.  This is where you waste time, and where you could save time.
The note 1054852 mentions the use of the MAXDOP (Maximum Degree of Parallelism) SQL Server parameter that could benefit any subsequent index creation/rebuild tasks performed during an R3load import.
The recommendation in note 1054852 is to change the MAXDOP from the SAP recommended setting of 1, to 4 for the entire SQL Server instance.  NOTE: The maximum MAXDOP in 2008R2 is 1024 (see here and SAP note 1654613).
This is the "hammer" approach and should be used with caution.  There is a nice blog here on the use of MAXDOP with SQL Server.  The blog shows how setting this to a value greater than 1 can actually increase fragmentation.  This is completely understandable.  However, the reader must note that the fragmentation is only an issue if the index is specifically set with ALLOW_PAGE_LOCKS "OFF" (the default in 2008R2/2012 is "ON"; see the syntax here)!
There is another blog article that shows how this fragmentation problem is overcome by setting the ALLOW_PAGE_LOCKS option.  The article states that by default this is "ON".  However, according to Microsoft KB 2292737, the default is "OFF" by design. 
So which is it?  In fact, the MS syntax for "ALTER INDEX" specifically states that it is not possible to reorganise an index with ALLOW_PAGE_LOCKS set to "OFF".

Here's how to check the value of the ALLOW_PAGE_LOCKS setting for an index (there is no global setting):

use <SAP DB>

select name,type,allow_page_locks from sys.indexes
where allow_page_locks != 1
order by 1;

And the results...  well, some of the largest SAP tables in my system have their indexes set with ALLOW_PAGE_LOCKS to "OFF".  Great!


I can understand the VB* tables might not want index locking and this is also hinted at in the older SAP R/3 Performance Tuning Guide for Microsoft SQL Server 7.0 web page.  However, where did the other tables come from?
I took a look on SAP Notes but I was unable to find anything definitive.  The system I was working on was recently copied and used SWDM (SAP Software Deployment Manager) to perform the post-copy steps, so it's possible that the indexes were automatically adjusted (ALTER'd) to try and ensure a consistent approach.
What to do next?  Well, some of those tables can be ignored since they are supposed to have ALLOW_PAGE_LOCKS set to "OFF", some, like REPOLOAD are re-populated only during a system-copy with R3 tools (e.g. Unicode conversion), so you could try adjusting the setting during the R3load import. 
The others, well in theory you would try to minimise data in those tables (like TRFC*) before you perform a Unicode conversion, so the indexes wouldn't be massive anyway.

At the end of the day, all we are talking about here is a little fragmentation.  So let's move on.

Let's go back to the MAXDOP setting recommendation mentioned in SAP note 1054852. 
Now, my system happens to be a BW type system (it's actually SEM, but this is just BW in disguise), so I found SAP Note 1654613 - SQL Server Parallelism for SAP BW which suggests that for SAP BW systems, you can now manage the MAXDOP parameter settings in table RSADMIN through report SAP_RSADMIN_MAINTAIN by setting parameters MSS_MAXDOP_QUERY, MSS_MAXDOP_<cubename> and MSS_MAXDOP_INDEXING.
The note 1654613 also goes on to say that by applying the note corrections (or the related support package stacks) the default MAXDOP for BW queries is set to 2, and for (process chain) index creations is set to 8.
So the Unicode collection note's setting of 4, could actually contradict the setting of 8 in BW systems!
The note 1654613 also states that you may increase the MAXDOP for queries to more than 4, but this depends on the number of worker threads configured in your SQL Server instance.
The worker threads setting in my SQL Server 2008R2 instance was set to "0", which means the number is dynamically calculated (see here).

You can use the sp_configure procedure to check your setting:

sp_configure @configname='max worker threads';

You can query the current number of threads:

select count(*) from sys.dm_os_threads;

My idle database was using 50 threads on a VMWare VM with 4 processors.  So I guess I could increase the MAXDOP for queries permanently in my BW (SEM) system.

You should also note the setting MSS_MAXDOP_INDEXING = 0 means that all available threads will be used during (process chain) index creation.

We are advised to set MAXDOP to 4 when moving an SQL Server database or performing a system-copy or Unicode conversion. 
However, more detailed analysis has shown that for BW systems specifically, we can potentially adjust the MAXDOP setting even higher than 4 during our R3load import, to ensure that we make the best use of the available CPU. 
This is underlined by the fact that defaults within the BW system are sometimes higher than the recommended setting of 4.
Therefore, I will be trying out a value of 12 (default of 8 for index creation + 50%) in my Unicode conversion:

sp_configure 'max degree of parallelism', 12;
reconfigure with override

Thursday, October 24, 2013

SAP Netweaver 731 Oracle Create DB Statement

By default, when you use the Software Provisioning Manager (SWPM) to create a new NW731 Oracle database, it will generate and run an Oracle "CREATE DATABASE" statement, an excerpt of which is shown below:

SYSAUX DATAFILE '/oracle/DB1/sapdata1/sysaux_1/sysaux.data1' SIZE 200M REUSE AUTOEXTEND ON NEXT 20M MAXSIZE 10000M
LOGFILE GROUP 1 ('/oracle/DB1/origlogA/log_g11m1.dbf') SIZE 200M  REUSE ,
GROUP 2 ('/oracle/DB1/origlogA/log_g12m1.dbf') SIZE 200M  REUSE ;

The full statement specifies both the character set and the national character set as UTF8.

Thursday, October 17, 2013

SAP R3load Error TOC is Not From Same Export

During an R3load import process, you are importing the data files generated from a successful R3load export.
However, you are seeing errors in the <PACKAGE>.log file along the lines of:
(DB) INFO: connected to DB
(RFF) ERROR: <PACKAGE>.TOC is not from the same export as <PACKAGE>.001

This occurs when the TOC (table of contents) file is corrupt.
The TOC file is generated during the export process and contains the name(s) of the tables in the export package and the number of records inside the relevant data files (files ending with .nnn   e.g. .001).
The corruption can happen if you terminated an export, or an export failed (maybe because of disk space), and you then restarted the export (either through SUM, SAPInst or by manually editing the .properties file).
If you failed to remove the already generated .TOC file for the failed package before restarting the export, the subsequent export is treated as an append operation to the existing data file.
A normal .TOC file should have something like:

vn: R7.20/V1.6
id: de1a50a500000043
cp: 4102
tab: [HEADER]
fil: <PACKAGE>.001 1024
1 1
eot: #0 rows 20130902160509
tab: <TABLE>
fil: <PACKAGE>.001 1024
2 1024000
eot: #50212404 rows 20130902184553

eof: #20130902184553

A corrupt .TOC file for the same package, would look something like:

vn: R7.20/V1.6
id: de1a50a500000043
cp: 4102
tab: [HEADER]
fil: <PACKAGE>.001 1024
1 1
eot: #0 rows 20130902160509
tab: <TABLE>
fil: <PACKAGE>.001 1024
2 1024000
eot: #50212404 rows 20130902184553
tab: <TABLE>
fil: <PACKAGE>.001 1024
1024001 2048000

eot: #50212404 rows 20130903161923

eof: #20130903161923
Notice the additional four lines generated in the file during a second export attempt.
This would cause the import to fail.
It's not possible to adjust the .TOC file manually, as it seems that the .TOC file and the data files are tied with a checksum somehow.
The only time you will find out that the export .TOC files are corrupt, is when you try to import them.  Maybe SAP could write a verification routine into the R3load program.
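In the absence of such a routine, a crude pre-import sanity check is possible: flag any table that appears in more than one "tab:" entry of a .TOC file, which is the corruption signature shown above. The sketch below builds a small sample file in a temp location (the table name ZTABLE is made up):

```shell
# Create a sample .TOC with a duplicated table entry (the corruption pattern).
TOC_FILE=$(mktemp)
cat > "${TOC_FILE}" <<'EOF'
tab: [HEADER]
tab: ZTABLE
tab: ZTABLE
EOF
# List duplicated "tab:" entries, ignoring the [HEADER] pseudo-table.
DUPES=$(grep '^tab: ' "${TOC_FILE}" | grep -v '\[HEADER\]' | sort | uniq -d)
echo "${DUPES}"
```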

Wednesday, October 16, 2013

SAP BRTools for SQL Server

There are no BR*Tools (BRbackup, BRarchive etc.) or equivalent when running SAP on Windows with SQL Server.
The SAP system uses built-in access to the SQL Server database to call "DBCC"-related tasks.

Monday, October 14, 2013

SAP Inside Track - Munich 2013 - ABAP Future

On Saturday I attended the SAP Inside Track Munich 2013 event hosted in Munich.
Although my main experience is around BASIS and underlying technologies, the event proved to be very insightful into the future directions of some of the technology.

Some of the presentation slides are now available on the main event page.

Summary of the event for me:
  • ABAP 7.40 will be the release for HANA integration with ABAP.
  • Eclipse will be the future direction for ABAP development, starting with the HANA integration.  Say bye bye to SE80 and SE38.
  • Some new features in the syntactical makeup of ABAP programs means better performance and cleaner code through the introduction of things like inline views.
  • The linkup of ABAP with HANA means that developers can push-down certain new OPEN SQL commands to the HANA layer, instead of returning loads of records and iterating on an in-memory ABAP table.
  • SAP have around 43,000 students from the Munich universities working on a range of SAP products and features.
  • The Eclipse based development tools use the REST protocol for comms via the SAP ICM instead of via RFC like the SAP GUI.
  • NW7.40 includes an improved SQL Trace for HANA.  These new tools are accessible through the new SWLT transaction.
  • Free HANA-related courses are available online.
  • The recommended version of 7.40 is SP5, since this includes better integration with the SAP transport management system and removes the need for HANA containers as transports.
Thanks to all the speakers at the event, it was very enjoyable, especially the free beer at the end!

Thursday, October 10, 2013

Checking R3load Export Progress

When running R3load to export an Oracle SAP database, it's difficult to see exactly which table or tables are being exported.

You can log into the Oracle database during the R3load execution and use the following SQL to follow the progress:
SQL> select sess.process, sql.sql_text
       from v$session sess,
            v$sqltext sql
      where sess.type='USER'
        and sess.module like 'DBSL%'
        and sql.sql_text like '%FROM%'
      order by sql.part;

This will show the OS process ID of the R3load process, plus the table (from the FROM clause)  that is currently being exported.
For large tables, you may be able to see the progress in the V$SESSION_LONGOPS table by looking for rows where TOTALWORK != SOFAR.
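The progress percentage from those two columns is simply SOFAR over TOTALWORK; a rough sketch of the arithmetic (the row counts here are made-up example values):

```shell
# Percent complete from V$SESSION_LONGOPS-style SOFAR/TOTALWORK values.
SOFAR=150000       # hypothetical blocks processed so far
TOTALWORK=600000   # hypothetical total blocks to process
PCT=$(( SOFAR * 100 / TOTALWORK ))
echo "${PCT}% complete"
```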

Thursday, October 03, 2013

Kill All Oracle Sessions For Specific User - 11gR2

Here's a small script to display, then kill all sessions in an Oracle 11gR2 database for a specific username.

First let's check the sessions of the specific username:

NOTE: Change "<USER NAME HERE>" in both blocks of code, for the actual username to be killed.

SQL> set serveroutput on size 10000;
SELECT sid,serial#
  FROM v$session
WHERE username = '<USER NAME HERE>';

Now run the code to kill the user sessions:

DECLARE
   dummy NUMBER;
   c     NUMBER := dbms_sql.open_cursor;
BEGIN
   FOR c1_row IN (SELECT sid,serial#
                  FROM v$session
                  WHERE username = '<USER NAME HERE>') LOOP
      DBMS_OUTPUT.PUT_LINE('Killing session: '||c1_row.sid||' serial#: '||c1_row.serial#);
      DBMS_SQL.PARSE(c,'alter system kill session '||''''||c1_row.sid||','||c1_row.serial#||'''',dbms_sql.NATIVE);
      dummy := DBMS_SQL.EXECUTE(c);
      DBMS_OUTPUT.PUT_LINE('Session: '||c1_row.sid||' serial#: '||c1_row.serial#||' killed.');
   END LOOP;
   DBMS_SQL.CLOSE_CURSOR(c);
END;
/


You can now run the check code again to see that all sessions for the user have been killed.

Thursday, September 26, 2013

SAP Kernel Missing

When trying to start a SAP system I got an error from the sapstart.log indicating that a kernel file was missing.
This was not in any of the exe directories or in any of the Kernel distribution files.

During an upgrade, a kernel was patched with a unicode kernel when it should have been a non-unicode kernel.
The correct kernel patch was then deployed into the central exe directory, but it looks like sapcpe did not correctly detect and replace the kernel files on the other instances.

The solution to the missing file problem was to completely remove the kernel files in the instance exe directories, then manually run sapcpe (sapcpe pf=<instance pf>) to re-copy the files from the central exe directory.

This fixed the issue.

Monday, September 23, 2013

Find RMAN Backup Statistics

You can query the view V$BACKUP_SYNC_IO (for synchronous tape devices) to obtain the average MB transfer speed from RMAN to the tape device (or intermediary software if using the obk interface):

SQL> select avg(EFFECTIVE_BYTES_PER_SECOND)/1024/1024 MB_per_s
       from V$BACKUP_SYNC_IO
      where DEVICE_TYPE='SBT_TAPE';


Thursday, September 19, 2013

SAP Unicode Conversion Nametab Inconsistency

During a unicode conversion of a SAP NW731 system, I saw a problem where a number of BI FACT tables (/BIC/E*) were present in the SAP nametab, existed in the Oracle database, but they didn't exist in the DDIC (SAP data dictionary visible in SE14).

SPUMG nametab

I asked the BI administrator to confirm that these tables were not referenced in the BI cubes, and they weren't.  He suggested that these tables used to belong to a cube that was long since deleted.  This means that at some point there must have been a program bug that has left the nametab inconsistent with the DDIC.
There are no SAP notes about what to do in a situation like this, but there are two options:
1. Exclude the tables from the unicode conversion in transaction SPUMG by adjusting the exceptions list.
2. Manually adjust the SAP nametab.
I chose option 2, since this was the cleanest option and would hopefully leave the system in a better state for future updates.

I found SAP note 29159 contained some useful information on a similar subject.  The note suggested writing some simple ABAP code to delete these tables from the SAP nametab tables DDNTT and DDNTF.

Whilst this was simple enough, I decided that I didn't need to go as far as writing ABAP.  I manually removed the entries at the database level using SQL:

SQL> delete from sapsr3.ddntt where tabname ='<TABLE>';
SQL> delete from sapsr3.ddntf where tabname ='<TABLE>';

Then restarted the system (or you could sync the nametab buffer with "/$NAM").
This fixed the issue and allowed the unicode conversion to continue.

UPDATE: I've since found that it's possible to list the contents of the Nametab buffer and delete specific tables from the buffer using the function modules DD_SHOW_NAMETAB and DD_NAMETAB_DELETE.

Thursday, September 12, 2013

SAP R3load table splitter - Table Analysis Performance

Be careful when using the R3load table splitter from the Software Provisioning Manager screens.
The install guide asks you to supply your table split file in the format "<TABLE>%<# SPLITS>".
However, this does not specify the table column to split by.

When splitting large tables, during the Table Splitting preparation phase (before you even start exports), R3ta can run for quite a while whilst it performs scans of the available indexes and then counts the number of rows in the table(s) to be split.

It's trying to define the most apt column(s) to use for generating the WHR files which contain the query predicates.

I tried adding a specific column during the initial table splitter screens, where you can specify the column to use. However, this seems to be completely ignored.

The best advice, is to prepare your table split during your proof phase in the sandbox environment, then potentially manually adjust the final WHR file to account for any additional rows in the table(s) to be split.
This will save a lot of time and effort in the true production conversion run.

Also, ensure that the indexes on those tables, especially the ones that the WHR predicates refer to, are rebuilt if possible.

Thursday, August 01, 2013

HowTo: Use Oracle BBED to adjust DB Name in File Headers

HowTo: Use BBED to hack the database SID in the datafiles if you've got them all mixed up during a "CREATE CONTROLFILE" operation.

WARNING: Using BBED is not supported by Oracle unless you are asked to use it by Oracle Support themselves.

Use UNIX vi to create a text file that contains a line number, followed by the file name for the DB files that need changing:

# cat <<EOF > filelist.txt
1 /db/ora/system/system1.dbf
2 /db/ora/data1/data1.dbf
3 /db/ora/index1/index1.dbf
EOF

Save the file as "filelist.txt".

Launch bbed (blockedit) as the Oracle DB UNIX owner.
Change the text "NEWID" for your new DB name in the "modify" line below.

$ bbed
BBED> set listfile 'filelist.txt'
BBED> set mode edit

# Dump the current block value for datafile #1 in your list file.
# example: BBED> dump /v dba <file#>,<block> ...

BBED> dump /v dba 1,1 offset 32 count 16

Make the swap:

BBED> modify /c NEWID file 1 block 1 offset 32

The checksum is now invalid:

BBED> sum file 1 block 1

Force save the new checksum:

BBED> sum file 1 block 1 apply

Verify the block:

BBED> verify file 1 block 1

Once you’ve done all your files:

BBED> quit;

Start the database with the CREATE CONTROLFILE SET DATABASE "NEWID"...

Friday, July 26, 2013

Network with OEL 5.7 x86_64 install in Hyper-V

When installing Oracle Enterprise Linux 5.7 x86_64 in a Hyper-V 2012 VM, the Linux networking refuses to work with the Hyper-V "Legacy driver" if you have the UEK (unbreakable enterprise kernel) enabled and more than one vCPU.

First, you should always ensure that you add the Hyper-V "Legacy Network driver" to the VM container at the VM creation time to ensure that it will work when you come to install OEL in the VM.

Then, to get around the problem with the networking and vCPUs, disable the UEK kernel and shut down the VM; you can then add more than one vCPU.

Thursday, July 18, 2013

Corrupt OEL 5.7 ISO Prevents Boot into Installer

I ran into this little problem whilst trying to install OEL 5.7 into a Hyper-V environment.

"Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(xxx,xx)".

I tried all manner of different parameters with "linux xxxxx" as recommended by the installer.
None of these worked.
It looked like the Hyper-V drivers weren't working at first.

So I re-downloaded the OEL 5.7 ISO, and re-attached to the VM cd-rom.
Then it worked!
Must have been a corrupt OEL 5.7 ISO that prevented it booting into install/setup from Hyper-V.

Friday, July 12, 2013

HowTo: OEL/RHEL 5.7 Create New VolGroup and LVol for new disk

If you need to add an additional mount point onto a RHEL or OEL Linux server, here's how to do it using the logical volume manager for maximum flexibility:

We assume that you've added a new physical disk and that it's called /dev/sdb.

First check size of the device to ensure you've got the correct one:

# fdisk -l /dev/sdb

Now create a new primary partition on the disk:

# fdisk /dev/sdb
n        (new partition)
p        (primary partition)
1        (partition number)
<return> for 1st block
<return> for last block
w       (write config)
q       (quit)
Check you can see the new partition:

# ls -la /dev/sdb*

(You should see /dev/sdb1)

Now ensure that you create a new physical volume that the volume manager can see:

# pvcreate /dev/sdb1

Physical volume "/dev/sdb1" successfully created

Create a new Volume Group containing the new physical partition:

# vgcreate VolGroup01 /dev/sdb1

Volume group "VolGroup01" successfully created

Create a new logical volume inside the volume group:

# lvcreate -L 480G -n LogVol01 VolGroup01

Logical volume "LogVol01" created

Format the new logical volume using EXT3 (you can choose which version of EXT you want):

# mkfs -t ext3 /dev/VolGroup01/LogVol01

Now you just need to mount the partition up.
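The mount step can be sketched as below (the /data1 mount point and the fstab options are my assumptions; substitute your own):

```shell
# Build the /etc/fstab entry for the new logical volume.
lv=/dev/VolGroup01/LogVol01
mnt=/data1
entry="$lv $mnt ext3 defaults 1 2"
echo "$entry"

# On the real server (as root) you would then run:
# mkdir -p $mnt
# echo "$entry" >> /etc/fstab
# mount $mnt
# df -h $mnt     # confirm it is mounted
```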

Thursday, June 27, 2013

Oracle 11g Transparent Data Encryption (TDE) Tablespace Conversion Script

After needing to configure and setup an Oracle tablespace in an 11g database so that it was encrypted with Transparent Data Encryption (TDE), I devised a scripted method of doing this quickly and simply.

The script is hardcoded to only work on a tablespace called "USERS_TDE" at the moment, but you can find-and-replace this for your specific tablespaces. I've even included commented-out code showing how to do more than one tablespace at a time.

You should also change the disk locations within the script for the location of the export/import dump files and the wallet file location.
As per the Oracle docs, the wallet file is stored outside of the Oracle home in /etc/ORACLE.

The basic process is:
  • Create wallet dir.
  • Adjust sqlnet.ora.
  • Create dpump dir.
  • Export tablespace data.
  • Export tablespace DDL metadata.
  • Adjust tablespace DDL to include TDE commands.
  • Enable TDE encryption at database level (sets the enc key).
  • Offline & drop existing tablespace.
  • Create TDE version of tablespace.
  • Import tablespace data.

# Author: D.Griffiths
# Script:
# Params: None.
# RunAs:  Oracle home owner.
# Desc:   Script to automatically export a tablespace called USERS_TDE
#         (change as you wish), drop it, re-create it ENCRYPTED with TDE,
#         then re-import the data objects back into the tablespace.
# HISTORY ###################################################################
# v1.0, Created.

# Variable definitions.
wallet_dir="/etc/ORACLE/WALLETS/$ORACLE_SID"               # Wallet dir (an assumption; the Oracle docs put the wallet under /etc/ORACLE).
dp_dir="/data/backup/tblspace_export_pre_TDE/$ORACLE_SID"  # Datapump export/import dir.

# Check required variables.
if [ -z "$ORACLE_SID" -o `grep -c "^$ORACLE_SID:" /etc/oratab` -ne 1 ] ; then
   echo "ERROR: Invalid ORACLE_SID."
   exit 1;
fi

if [ -z "$ORACLE_HOME" ] ; then
   export ORACLE_HOME="`awk '/'$ORACLE_SID'/ { split($0,a,":"); print a[2] }' /etc/oratab`"

   if [ -z "$ORACLE_HOME" ] ; then
      echo "ERROR: Failed to set ORACLE_HOME."
      exit 1;
   fi
fi

# Show setup.
echo "------------------------------------------"
echo "Starting TDE setup script."
echo "The following has been defined:"
echo " WALLET DIR : $wallet_dir"
echo " DATAPUMP DIR: $dp_dir"
echo "------------------------------------------"
echo " "

# Check if DB already has wallet/encryption.
retval="`sqlplus -s \"/ as sysdba\" <<EOF
set head off
set newpage none
set feedback off
set trimout on
set tab on
select DECODE(count(status),0,'NONE','SOME') from v\\\$wallet;
EOF`"

if [ "$retval" != "NONE" ] ; then
   echo "ERROR: Encryption may already be enabled on this database."
   exit 1;
fi

echo "Encryption is not already enabled on this database."

# Check for existence of Wallet Dir.
if [ ! -d $wallet_dir ] ; then
   mkdir -p $wallet_dir
   if [ $? -ne 0 ] ; then
      echo "ERROR: Failed to create wallet dir $wallet_dir"
      exit 1;
   fi
   echo "WALLET directory created: $wallet_dir"
else
   echo "WARNING: Wallet directory already exists:"
   echo "$wallet_dir"
   echo -n "Continue to use this? {Y|N} : "
   read reply
   if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
      exit 1;
   fi
   echo "Using wallet dir: $wallet_dir"
fi

if [ ! -d $dp_dir ]; then
   mkdir -p $dp_dir;
   if [ $? -ne 0 ] ; then
      echo "ERROR: Failed to create datapump dir $dp_dir"
      exit 1;
   fi
else
   echo "ERROR: Datapump dir already exists, files can not be overwritten."
   echo "Check: $dp_dir"
   exit 1;
fi

# Check sqlnet.ora file.
if [ "`grep -c '^ENCRYPTION_WALLET_LOCATION' $ORACLE_HOME/network/admin/sqlnet.ora`" -ne 1 ] ; then
   echo "Adding ENCRYPTION_WALLET_LOCATION to sqlnet.ora in $ORACLE_HOME/network/admin/sqlnet.ora"
   cat <<EOF >>$ORACLE_HOME/network/admin/sqlnet.ora
# Added WALLET location for TDE see 1228046.1.
ENCRYPTION_WALLET_LOCATION =
 (SOURCE = (METHOD = FILE)
  (METHOD_DATA =
   (DIRECTORY = ${wallet_dir}/)))
EOF

   if [ $? -ne 0 ] ; then
      echo "ERROR: Failed to update sqlnet.ora."
      exit 1;
   fi
else
   echo "Section ENCRYPTION_WALLET_LOCATION already exists in sqlnet.ora"
fi

# Get list of unencrypted non-system tablespaces.
list_tblspace="`sqlplus -s \"/ as sysdba\" <<EOF
ttitle off
btitle off
set newpage none
set feedback off
set head off
set wrap off
set trim on
set tab on
set linesize 30
set pagesize 1000
-- select rpad(UPPER(t.name),30)
-- from v\\\$tablespace t
-- where t.name not in ('SYSTEM','SYSAUX','UNDO','TEMP')
-- and t.ts# not in (select ts# from v\\\$encrypted_tablespaces);
select rpad('USERS_TDE',30) from dual;
EOF`"

# Confirm tablespaces.
echo " "
echo "The following non-system unencrypted tablespaces have been found:"
echo "$list_tblspace"
echo -n "Setup these for TDE? {Y|N} : "
read reply

if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
   exit 1;
fi

# Check the dbdump oracle directory doesn't already exist.
retval="`sqlplus -s \"/ as sysdba\"<<EOF
set head off
set newpage none
set feedback off
set trimout on
set tab on
select trim(directory_path) from dba_directories
where directory_name='DPDUMP_BACKUP';
EOF`"

if [ -n "$retval" ] ; then
   echo "WARNING: Oracle directory DPDUMP_BACKUP already exists."
   echo "It is assigned to path: $retval"
   echo "OK to recreate it as $dp_dir?"
   echo -n "Enter {Y|N} : "
   read reply
   if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
      exit 1;
   fi
fi

# Create the datapump export/import dir in oracle.
sqlplus -s "/ as sysdba" <<EOF
create or replace directory dpdump_backup as '$dp_dir';
EOF

echo "------------------------------------------"
echo "Exporting tablespaces and generating DDL."

# Export the tablespaces to the backup directory location and generate DDL.
echo "$list_tblspace" | while read tablespace
do
   expdp userid="'/ as sysdba'" dumpfile=$tablespace.dmp directory=dpdump_backup logfile=${tablespace}_exp.log tablespaces="$tablespace"

   # Generate the tablespace creation DDL to the backup directory location.
   sqlplus -s "/ as sysdba" << EOF
SET LONG 10000
set head off
set newpage none
set trimspool on
spool $dp_dir/cre_$tablespace.sql
SELECT dbms_metadata.get_ddl('TABLESPACE','$tablespace') FROM DUAL;
spool off
EOF

   # Adjust the DDL file to include the ENCRYPTION string.
   cat $dp_dir/cre_$tablespace.sql | grep -e '^.*[A-Z0-9]' > $dp_dir/cre_TDE_$tablespace.sql

   echo " ENCRYPTION using 'AES256'" >> $dp_dir/cre_TDE_$tablespace.sql
   echo " STORAGE (ENCRYPT)" >> $dp_dir/cre_TDE_$tablespace.sql
done


########### ENABLE ENCRYPTION FROM HERE ON IN ################################

while [ -z "$walletpw" ] ; do
   echo -n "Enter the wallet encryption key to use: "
   read walletpw
   echo -n " Re-enter the wallet encryption key: "
   read reply
   if [ "$walletpw" != "$reply" ] ; then
      echo "WARNING: Typed keys do not match."
      echo -n "Try again? {Y|N} : "
      read reply
      if [ "$reply" != "Y" -a "$reply" != "y" ] ; then
         exit 1;
      fi
      walletpw=""
   fi
done


# Alter the DB to enable encryption.
sqlplus "/ as sysdba" << EOF
alter system set encryption key identified by "$walletpw";
EOF

if [ $? -ne 0 ] ; then
   echo "WARNING: Enabling encryption may have failed in the DB."
   exit 1;
fi

# Change permissions on all files in the wallet dir.
chmod 600 "$wallet_dir"/*

# Offline and then drop the tablespace and datafiles.
# WARNING: You should have a backup at this point.
echo "WARNING: About to offline and drop tablespaces:"
echo "$list_tblspace"
echo "##################################################"
echo "You should check the expdp export logs in $dp_dir."
echo "##################################################"

echo -n "Continue? [Y|N]: "
read reply
if [ "$reply" != "Y" -a "$reply" != "y" ]; then
   echo "Cancelling."
   exit 1
fi

# Offline and drop each tablespace (including datafiles) then re-create and import export dump file.
echo "$list_tblspace" | while read tablespace
do
   sqlplus "/ as sysdba" << EOF
alter tablespace $tablespace offline;
drop tablespace $tablespace including contents and datafiles;
-- @$dp_dir/cre_$tablespace.sql <<-- Recreates original non-TDE tablespace.
@$dp_dir/cre_TDE_$tablespace.sql
EOF

   impdp userid="'/ as sysdba'" dumpfile=$tablespace.dmp directory=dpdump_backup logfile=${tablespace}_imp.log tablespaces="$tablespace"
done


echo "#######################################################"
echo "You should enable auto-open of the wallet if necessary."
echo "End of script."

Thursday, June 20, 2013

Oracle 11g Methods of Performance Tuning SQL

>90% of upgrade-related problems are performance issues that appear after the upgrade.

Source: Oracle Corp

Oracle tools for helping you tune the database:
  • Statspack - FREE - (See note 394937.1)

  • AWR - Diagnostics Pack & Tuning Pack license required.

  • Real Application Testing (Features: SQL Performance Analyser & Database Replay) - Tuning Pack license required.

Since 11g, instead of storing outlines, fixing statistics, using SQL hints or using the Rule Based Optimiser (now desupported), Oracle recommends you use the SQL Plan Management tool along with SQL Profiling.

See spm_white_paper_ow07.pdf for more information.

Thursday, June 13, 2013

HowTo: Recover Oracle DB with ORA-01547 after RESETLOGS Failed

Scenario: You have restored a database from a backup but part way through recovery, you didn't have all the archive redo logs.
Your recovery is an automated script that then progressed on to OPEN RESETLOGS, which failed.
Now you have all the archive logs and you want to re-run recovery, but you get ORA-01547 plus ORA-01194.

If you are certain that you now have all the archive logs (maybe you forgot to copy all of them), you need to restart recovery:

SQL> shutdown abort;
SQL> startup mount exclusive;
SQL> recover automatic database using backup controlfile until cancel;

You now get the error:

ORA-00279: change 5996813573990 generated at 01/30/2013 18:38:25 needed for thread 1
ORA-00289: suggestion : /oradata/PROD/archive/PROD_806092582_1_1.arc
ORA-00280: change 5996813573990 for thread 1 is in sequence #1
ORA-00278: log file '/oradata/PROD/archive/PROD_806092582_1_1.arc' no longer needed for this recovery
ORA-00308: cannot open archived log '/oradata/PROD/archive/PROD_806092582_1_1.arc'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3

The database is looking for redo information generated during the previous recovery session.
During a recovery, redo information is generated because the database applies undo records to roll back uncommitted transactions, and those rollback changes are themselves written to the redo stream. So you get redo records.

Unfortunately, because your database is in a precarious state, it doesn't quite know where to look for the redo records.
So, all you need to do is tell it where the redo logs are located (your database probably won't have performed a log switch yet).

First, cancel the recovery:


For checking purposes, it's good to confirm that only the SYSTEM tablespace needs recovery, by querying the current SCNs across all datafiles:

SQL> select distinct fhscn SCN from x$kcvfh;


Again, for checking purposes, you can see that one of the datafiles has a status != 4 (4 = consistent):

SQL> select distinct fhsta STATUS from x$kcvfh;


You can see that this is the SYSTEM datafile (filenumber =1):

SQL> select hxfil FILENUMBER, fhsta STATUS, fhscn SCN, fhrba_seq SEQUENCE from x$kcvfh where fhsta = 8196;

FILENUMBER     STATUS              SCN   SEQUENCE
---------- ---------- ---------------- ----------
         1       8196    5996813573990          1

To be able to provide the first redo log to the recovery process, you should query your log members.
I have found that usually, the recovery only needs the very first log member listed when you run the following query (I guess this is the "active" log member at the time the RESETLOGS was called):

SQL> select member from v$logfile;
/oradata/PROD/log_a/log4a_PROD.log            <<-- This one.

8 rows selected.

Now you can call the recovery process again.
This time, when you are prompted for the archive log name, enter the redo log member name & path you got from the previous SQL:

SQL> recover automatic database using backup controlfile until cancel;
ORA-00279: change 5996813573990 generated at 01/30/2013 18:38:25 needed for thread 1
ORA-00289: suggestion : /oradata/PROD/archive/PROD_806092582_1_1.arc
ORA-00280: change 5996813573990 for thread 1 is in sequence #1
ORA-00278: log file '/oradata/PROD/archive/PROD_806092582_1_1.arc' no longer needed for this recovery
ORA-00308: cannot open archived log '/oradata/PROD/archive/PROD_806092582_1_1.arc'
ORA-27037: unable to obtain file status
HPUX-ia64 Error: 2: No such file or directory
Additional information: 3

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}


Log applied.
Media recovery complete.

Now open your database:

SQL> alter database open resetlogs;

Database altered.

SQL> exit

You should now run an immediate FULL backup using your preferred method.

Thursday, June 06, 2013

Oracle Doc: EBS R12 Upgrade/Patch Forms and Reports

There's a nice Oracle doc on how to upgrade the Oracle AS 10g Forms & Reports component of an Oracle EBS R12 instance:
Upgrading OracleAS 10g Forms and Reports in Oracle E-Business Suite Release 12 [ID 437878.1]

Once you've read it, you'll realise what a mammoth task it is.
It's interesting that, out-of-the-box, the Forms & Reports upgrades don't work with EBS; you need the interoperability patches installed on top.

Thursday, May 30, 2013

SAP PI Integration Builder Unable to Launch and Verify Fail

When starting the Integration Builder or any other Java based application for SAP PI, you get a Java error displayed on your PC for the aii_bdir_client.jar stating that it was “Unable to launch the application” or verify the Java signature.

This problem is caused by the expiration of the self-signed Java certificates that certify the server where the Java classes are deployed from for the Java Web Start functionality.

To fix the issue:

- From the main SAP PI Web Screen (http://<server>:50##00/rep), select “Administration” in the top right.
- Log in with your PISUPER user.
- Then on the left-hand side, select “Java Web Start Administration”.
- On the right click the “Re-initialization and force signing” button.

- Select the “Directory” tab on the left-hand side.
- Then on the left-hand side, select “Java Web Start Administration”.
- On the right click the “Re-initialization and force signing” button.

- Close Internet Explorer.

When you next try accessing the Integration Builder (or other Java Apps), you will be presented with a holding screen whilst the Java server collects and re-signs all the required Java classes.

It usually takes around 5-10 minutes to complete.
After completion, the Java apps should work as before.

Monday, May 27, 2013

HowTo: Find EBS R12 Forms and Reports Version

You can find the version of Forms & Reports installed in your EBS R12 system by checking the file under the Oracle AS home of the Forms & Reports component:

$ grep Version /u01/oracle/<instance>/apps/tech_st/10.1.2/config/

Saturday, May 25, 2013

SQL_OPCODE values from an 11gR2 Database

If you have a need to find the actual numeric operation code for different SQL operations, then this is possible if you can access the ASH table as highlighted below.

The snapshot was taken from an 11gR2 database:

SQL> select distinct sql_opname, sql_opcode
       from dba_hist_active_sess_history;

SQL_OPNAME                SQL_OPCODE
------------------------- -----------------
DELETE                    7
ALTER DATABASE            35
ANALYZE TABLE             62
ALTER TABLE               15
INSERT                    2
UPSERT                    189
ALTER SUMMARY             172
CREATE TABLE              1
SELECT                    3
LOCK TABLE                26
CREATE INDEX              9
PL/SQL EXECUTE            47
TRUNCATE TABLE            85
CALL METHOD               170
UPDATE                    6

I guess the newer the "feature" the higher the number, since Oracle couldn't just go and change the number between releases.

Thursday, May 16, 2013

HowTo: tnsping Without the DEFAULT_DOMAIN Suffix
Scenario: You want to tnsping an Oracle net service name, but your sqlnet.ora has a DEFAULT_DOMAIN configured.
This means tnsping automatically adds your DEFAULT_DOMAIN onto the end of the service name you want to tnsping.

> tnsping mynetservice

TNS Ping Utility for HPUX: Version - Production on 18-APR-2013 18:05:03
Copyright (c) 1997, 2011, Oracle. All rights reserved.

Used parameter files:

TNS-03505: Failed to resolve name

In the sqlnet.ora, you have something like:

NAMES.DEFAULT_DOMAIN = <your domain>
All you need to do to ping a net service name without the DEFAULT_DOMAIN, is to append a ‘.’ to the net service name like so:

> tnsping mynetservice.     <<- Note the dot.

TNS Ping Utility for HPUX: Version - Production on 18-APR-2013 18:05:03
Copyright (c) 1997, 2011, Oracle. All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))) (CONNECT_DATA = (SERVER=DEDICATED) (SERVICE_NAME = mynetservice)))

OK (140 msec)
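The rule tnsping is applying can be mimicked with a tiny shell function (purely illustrative; qualify_name is a hypothetical helper, not an Oracle tool): a name with a trailing dot is taken as fully qualified, anything else gets the DEFAULT_DOMAIN appended.

```shell
# qualify_name <name> <default domain>: mimic sqlnet's DEFAULT_DOMAIN handling.
qualify_name() {
  name="$1"
  domain="$2"
  case "$name" in
    *.) printf '%s\n' "${name%.}" ;;            # trailing dot: use the name as-is
    *)  printf '%s.%s\n' "$name" "$domain" ;;   # otherwise append the domain
  esac
}

qualify_name mynetservice  mydomain.com   # -> mynetservice.mydomain.com
qualify_name mynetservice. mydomain.com   # -> mynetservice
```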

That's it!

Thursday, May 09, 2013

APP-FND-01630 DBC File Oracle EBS R12

If you receive error APP-FND-01630 "Cannot open file xxxx" in a popup window whilst working on an Oracle R12 E-Business Suite instance, the fix is to add read permissions for everyone to the DBC file under $FND_SECURE.
As the applmgr user, change to the $FND_SECURE directory and add permissions to the dbc file:

chmod g+r,o+r <file>.dbc

No restart of EBS is required.
This should fix the issue.

Thursday, May 02, 2013

HowTo: Know if you're using pure RedHat, CentOS or Oracle Enterprise Linux

Scenario: You've been given access to a server and you don't know if it's a pure RedHat, CentOS or Oracle Enterprise Linux server.

We all know how to query the version of the Linux OS using the following:

> uname -a

> cat /etc/redhat-release

But these don’t necessarily differentiate between a pure RedHat O/S and the downstream Linux distributions such as CentOS or Oracle Enterprise Linux.

As well as the above, you can query what’s been installed through RPM:

> rpm -qa | grep oraclelinux

> rpm -qa | grep centos

Any output in one or the other will confirm an OEL or CentOS distribution.
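Those checks can be rolled into one small function (a sketch; note the release file names vary: OEL 5 ships /etc/enterprise-release, while Oracle Linux 6+ uses /etc/oracle-release). The directory argument is only there so you can point it somewhere other than /etc:

```shell
# detect_distro [dir]: report the distribution based on release files in dir.
# Order matters: OEL and CentOS also ship a redhat-release file.
detect_distro() {
  dir="${1:-/etc}"
  if [ -f "$dir/oracle-release" ] || [ -f "$dir/enterprise-release" ]; then
    echo "Oracle Enterprise Linux"
  elif [ -f "$dir/centos-release" ]; then
    echo "CentOS"
  elif [ -f "$dir/redhat-release" ]; then
    echo "RedHat"
  else
    echo "Unknown"
  fi
}

detect_distro    # inspects /etc on the current machine
```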

Thursday, April 25, 2013

SAP Command Field Codes

Extracted from SAP Note 26171 – “Possible entry values for command field ("OK-code")”

Calling transactions & Session Handling:

    This terminates the current transaction and calls transaction "xxxx", for example, "sa38" for reporting.

    This terminates the transaction. This generally corresponds to going back with F15.

    This terminates all separate sessions and logs off (corresponds to "System -> Log off").

    This terminates all separate sessions and logs off immediately (without any warning).

    This opens a new session and starts transaction xxxx in this session.

    This lists existing sessions and allows deletion or opening of a new session.

    This terminates the current session (corresponds to "System -> End session").

/i1 , /i2 ,...
    This terminates the session with the specified number.

    "Fast path": "xyzw" refers to the underlined letters in the menus. This type of navigation is uncommon and is provided more for emergencies (such as a defective mouse).

Background Input:

    This terminates the current background input transaction and characterizes it as "Failed".

    This deletes the current background input transaction.

    This terminates background input processing and sets the session to "Failed".

    This switches from "Display errors only" to "Process in foreground".

    This switches from "Process in foreground" to "Display errors only".

ABAP/4 debugging:

    This switches into debugging mode.

    This switches into debugging mode and activates the debugging of system functions.

Buffer commands:

    This resets all buffers of the application server.

    This resets the CUA buffers of the application server.

    This resets the table buffers of the application server.

    This resets the nametab buffers of the application server.

    This resets the screen buffers of the application server.

Out-of-date commands:

    From the menu system:
Calls transaction Pxxx.

If an ABAP list is displayed:
Send command Pxxx to the list processor (for example, P+, P--, PRI, and so on)
From a transaction screen:
Send command Pxxx to this transaction

xyyy (where x is not 'P', '/', '=' or '.')
    From the menu system:

Calls transaction xyyy
Within a transaction:
Send command xyyy to this transaction

   Calls transaction xxxx and branches to the next screen. So, it has the same effect as /nxxxx followed by ENTER.

   This entry is still possible for compatibility reasons, but is no longer supported.

    Searches for a string in lists (like "System -> List -> Find String")

    Downloads lists

    Prints lists

    Displays the status dialog (such as "System -> Status...")

Monday, April 22, 2013

Fuci Fuci insect Italia

Whilst in Puglia in the south of Italy, I've come across the locally named Fuci Fuci insect, which is Pugliese dialect (dialetto) for "run run".
A quick search of the internet doesn't tell you what this insect is actually called based on the dialect name, which is not surprising.
It's a member of the centipede family and is known as the House Centipede.  It is documented here:
It's harmless and is probably on the hunt for spiders!
Still, it creeps me out that something can run 0.5 meters in 1 second!

Thursday, April 18, 2013

SAP Index is in the Customer Namespace

Scenario: You have your own SAP customer namespace (e.g. /MYSPACE/) and you are trying to add a new custom index Z01 to one of your tables in SE11.

When you try and “check” the index or activate it in SE11, you keep getting the error “Index xxxx is in the customer namespace”:

This problem is caused by the fact that the index name is only the three-character suffix at the end; it does not include the table name.

As an example:

/MYSPACE/MYTABLE~Z01 means the index name is Z01.

Because your table is in your customer namespace /MYSPACE, you cannot use the Y or Z development namespaces in your index name.

Therefore, remove the Z or Y from the index name.
You should be able to create an index /MYSPACE/MYTABLE~C01 instead.

Thursday, April 11, 2013

HP MSL 2024 LTO Tape Library on Oracle Enterprise Linux 5.7 (RHEL 5.7)

Whilst working on a HP ProLiant ML370 G6 server running Linux kernel 2.6.18 (Oracle Enterprise Linux 5.7), I was trying to get the O/S to see an HP MSL 2024 LTO-4 tape drive and library.

After a reboot, there was still no sign of the “st” device in /dev:

> ls -l /dev/st[0-9]*

And no SCSI device listed other than the DVDROM:

> cat /proc/scsi/scsi

Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: hp Model: DVDROM DH40N Rev: IS01
Type: CD-ROM ANSI SCSI revision: 05

I found the following “page” very helpful:

It mentioned the HP Smart Array driver, also known as cciss.
I checked I had one of these puppies:

> lspci |grep Array
05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)
1b:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)

Yep, I had two.
I checked the Kernel module for cciss was loaded:

> lsmod | grep cciss
cciss 125033 3
scsi_mod 199129 12 be2iscsi,ib_iser,iscsi_tcp,bnx2i,libcxgbi,libiscsi2,scsi_transport_iscsi2,sr_mod,sg,libata,cciss,sd_mod

Yep, it was loaded.
Also notice that the scsi_mod has 12 modules referenced.

So my problem was probably like the “page” I pointed out suggested:

Additionally, note that the driver will not engage the SCSI core at init
time. The driver must be directed to dynamically engage the SCSI core via
the /proc filesystem entry which the "block" side of the driver creates as
/proc/driver/cciss/cciss* at runtime. This is because at driver init time,
the SCSI core may not yet be initialized (because the driver is a block
driver) and attempting to register it with the SCSI core in such a case
would cause a hang. This is best done via an initialization script
(typically in /etc/init.d, but could vary depending on distribution).

Note the words “dynamically engage”.
They are not written like that for fun. Guess what was required? Yep, I needed to engage the SCSI core:

> ls -l /proc/driver/cciss
-rw-r--r-- 1 root root 0 Mar 15 14:16 cciss0
-rw-r--r-- 1 root root 0 Mar 15 14:16 cciss1

> echo "engage scsi" > /proc/driver/cciss/cciss0
> echo "engage scsi" > /proc/driver/cciss/cciss1

Now load the “st” tape module into the Kernel:

> modinfo st
filename: /lib/modules/2.6.18-
alias: char-major-9-*
license: GPL
description: SCSI tape (st) driver
author: Kai Makisara
srcversion: AA839FAA66A7758BC7A5C9D
depends: scsi_mod
vermagic: 2.6.18- SMP mod_unload gcc-4.1
parm: buffer_kbs:Default driver buffer size for fixed block mode (KB; 32) (int)
parm: max_sg_segs:Maximum number of scatter/gather segments to use (256) (int)
parm: try_direct_io:Try direct I/O between user buffer and tape drive (1) (int)
parm: try_rdio:Try direct read i/o when possible (int)
parm: try_wdio:Try direct write i/o when possible (int)
module_sig: 883f3504e2dbc6ac74ff6cf7d76a7e6112d2d09e302b4a8f57e42688cd5258668b7a3044163799609e32cd3dacb7e842b9a84ef2d2032f542f69e866

> insmod /lib/modules/2.6.18-

Check that it’s loaded:

> lsmod | grep st
st 72805 0
scsi_mod 199129 13 st,be2iscsi,ib_iser,iscsi_tcp,bnx2i,libcxgbi,libiscsi2,scsi_transport_iscsi2,sr_mod,sg,libata,cciss,sd_mod

You will see in the above output that scsi_mod is now referenced by the "st" module (it appears in the list) and shows 13 modules (yours may be different).

Check we have a SCSI device:

> cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
Vendor: hp Model: DVDROM DH40N Rev: IS01
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: HP Model: Ultrium 4-SCSI Rev: U55W
Type: Sequential-Access ANSI SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: HP Model: MSL G3 Series Rev: 5.50
Type: Medium Changer ANSI SCSI revision: 05


We can confirm what DMESG sees:

> dmesg | tail
scsi2 : cciss
scsi3 : cciss
Vendor: HP Model: Ultrium 4-SCSI Rev: U55W
Type: Sequential-Access ANSI SCSI revision: 05
st 3:0:0:0: Attached scsi tape st0
st0: try direct i/o: yes (alignment 512 B)
st 3:0:0:0: Attached scsi generic sg1 type 1
Vendor: HP Model: MSL G3 Series Rev: 5.50
Type: Medium Changer ANSI SCSI revision: 05
scsi 3:0:0:1: Attached scsi generic sg2 type 8
st0: Block limits 1 - 16777215 bytes.

And there at the end is the confirmation that we should now have a /dev/st0 device:

> ls -l /dev/st0
crw-rw---- 1 root disk 9, 0 Mar 15 12:50 /dev/st0

With a tape confirmed as loaded in the tape drive (via the web-based HP MSL tape library GUI), we can do a basic backup using TAR:

> echo "Hello Darryl" > /tmp/dmg.txt
> tar -cvf /dev/st0 /tmp/dmg.txt

Or to not rewind the tape after, we could use nst0:
> tar -cvf /dev/nst0 /tmp/dmg.txt

Then read the file back from the tape device:

> rm /tmp/dmg.txt
> tar -tvf /dev/st0
-rw-r--r-- root/root 13 2013-03-15 14:54:20 tmp/dmg.txt

> tar -xvf /dev/st0 tmp/dmg.txt

Thursday, April 04, 2013

SAP PI 7.0 JDBC Connectivity Issues

The following SAP Notes contain useful information about fixes in the SAP PI Adapter Framework Core (SAPXIAFC) and Adapter Framework (SAPXIAF) components.

The notes were found whilst searching for component: BC-XI-CON-JDB (JDBC Adapter).

SAP Note 1483974 - File and JDBC sender adapter's retry not working after error
SAP Note 1510659 - Improvement in Locking, Logging and Monitoring in JDBC
SAP Note 1083488 - XI FTP/JDBC sender channel stop polling indefinitely(04/04S)
SAP Note 1398891 - XI/PI Adapter for Oracle: Overcoming DB Connection Issues

I was seeing a constant problem whereby the JDBC Adapter would not restart after the destination database system went down for backup or had a failure, or was not started at the time that the SAP PI system was started up.

The SAP note 1483974 was interesting as it highlighted the use of the Scheduler, which I didn’t know existed.

Thursday, March 28, 2013

HowTo: Script eMail With Attachment Direct via SMTP Server

Have you ever wondered how you can directly send an email with a file attachment, direct from a shell script via an SMTP server?
That is, without needing to rely on any mail or sendmail setup on the UNIX/Linux server.
Well, it's simple.

Create a script that encodes the file you wish to attach, into BASE64, then type the following to send the file:

telnet <smtp server> 25 <<EOF
helo <your hostname>
mail from: <sender address>
rcpt to: <recipient address>
data
Subject: Test attachment
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary=attachmentbound

--attachmentbound

The email body

--attachmentbound
Content-Type: application/octet-stream; name="attachmentfile"
Content-Disposition: attachment; filename="thefilename.txt"
Content-Transfer-Encoding: base64

VGVzdCBBdHRhY2htZW50     <<-- Insert your BASE64 content here!

--attachmentbound--
.
quit
EOF


You need to have the content already encoded in BASE64.
For this you could use a Perl script using the MIME::Base64 module as shown below:

use MIME::Base64;

## Check that each and every file exists ##
foreach $file (@ARGV) {
   if (not -e $file) {
      print "File \"$file\" does not exist\n";
      exit( 1 );
   }
}

# Convert each file to BASE64.
foreach $file (@ARGV) {
   open( OUTFILE, ">$file.out") or die "Could not open output file: $!\n";
   open( FILE, $file ) or die "Could not open input file: $!\n";
   local($/) = undef;
   print OUTFILE encode_base64(<FILE>);
   close( FILE );
   close( OUTFILE );
}
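If your platform has the coreutils base64 utility (most Linux distributions do; older UNIX may only have openssl), you can skip Perl entirely:

```shell
# Create a demo attachment and encode it to BASE64.
printf 'Test Attachment' > thefilename.txt
base64 thefilename.txt        # prints: VGVzdCBBdHRhY2htZW50

# The same thing via openssl, if base64 is not installed:
# openssl base64 -in thefilename.txt
```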

Thursday, March 21, 2013

NetBackup Oracle Database bplist output columns

Scenario: When backing up an Oracle database using NetBackup and RMAN, the backup details can be listed using the NetBackup command bplist.

You wonder what the columns of output data represent.

Looking in the /usr/openv/netbackup/logs/dbclient directory in the log file created during the backup, and with “VERBOSE = 5” in the /usr/openv/netbackup/bp.conf file, I can see that the following data is stored for the backup:

14:17:22.370 [5210] <4> VxBSAGetEnv: INF - returning -
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectOwner.bsa_ObjectOwner:
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectOwner.app_ObjectOwner:
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectName.objectSpaceName: Oracle Database
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectName.pathName: /bk_986_1_806854642
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - createTime: 0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - copyType: 3
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - copyid: 0.0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - restoreOrder: 0.0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - estimatedSize: 0.100
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - resourceType: Oracle Backup
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectType: 4
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectStatus: 0
14:17:22.370 [5210] <4> bsa_printObjectDescriptor: INF - objectDescription:
14:17:22.370 [5210] <4> VxBSACreateObject: objectOwner: -default_user-
14:17:22.371 [5210] <4> VxBSACreateObject: objectGroup: dba 1
14:17:22.371 [5210] <4> VxBSAGetEnv: INF - entering GetEnv - NBBSA_IMAGE_PERMISSION
14:17:22.371 [5210] <4> VxBSAGetEnv: INF - returning -

Therefore, bplist outputs the following:

> /usr/openv/netbackup/bin/bplist -C <clientname> -t 4 -k <policyname> -Listpolicy -R /

9175040 0 0 0 9175040 1 0 -1 -3 33200 orasa1 dba 9175040 1360333082 1360333042 1360333042 1360333042 2 18 POLICY_TST 30 /bk_986_1_806854642

These relate to:

9175040    : The size (bytes) of the TAR of the backup piece.
9175040    : The size (bytes) of the TAR of the backup piece.
orasa1     : The UNIX Oracle username used to run RMAN.
dba        : The UNIX Oracle group name used to run RMAN.
9175040    : The size (bytes) of the TAR of the backup piece.
1360333082 : Epoch timestamp, possibly showing created, modified, accessed?
1360333042 : Epoch timestamp, possibly showing created, modified, accessed?
1360333042 : Epoch timestamp, possibly showing created, modified, accessed?
1360333042 : Epoch timestamp, possibly showing created, modified, accessed?
POLICY_TST : The name of the NetBackup policy used.
/bk_986_1_806854642 : The RMAN backup piece file name.
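Assuming the column positions above are stable (they may vary between NetBackup versions, so treat the field numbers as an assumption), a quick awk one-liner can pull out the policy name, the first epoch timestamp, and the backup piece name from each line of bplist output:

```shell
# Sample bplist output line (taken from the example above)
line='9175040 0 0 0 9175040 1 0 -1 -3 33200 orasa1 dba 9175040 1360333082 1360333042 1360333042 1360333042 2 18 POLICY_TST 30 /bk_986_1_806854642'

# Field 20 = policy name, field 14 = first epoch timestamp, last field = backup piece
echo "$line" | awk '{print $20, $14, $NF}'
# prints: POLICY_TST 1360333082 /bk_986_1_806854642
```

In a real run you would pipe the bplist command itself into awk instead of a saved variable.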

I don't know what some of the columns are for.  Maybe someone else knows more?