
Thursday, September 26, 2013

SAP Kernel librfcum.so Missing

When trying to start a SAP system, I got an error in sapstart.log indicating that librfcum.so was missing.
The file was not present in any of the exe directories, nor in any of the kernel distribution files.

During an upgrade, the system had been patched with a unicode kernel when it should have been a non-unicode kernel.
The correct kernel patch was then deployed into the central exe directory, but it appears that sapcpe did not correctly detect and replace the kernel files on the other instances.

The solution to the missing librfcum.so problem was to completely remove the kernel files in the instance exe directories, then manually run sapcpe (sapcpe pf=<instance pf>) to re-copy the files from the central exe directory.

This fixed the issue.
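
As a rough sketch of the steps (the <SID>, <INSTANCE> and profile path below are placeholders; run as the <sid>adm user on each affected instance host):

cd /usr/sap/<SID>/<INSTANCE>/exe          # the instance exe directory
rm -f ./*                                 # remove the stale kernel files
sapcpe pf=/usr/sap/<SID>/SYS/profile/<SID>_<INSTANCE>_<hostname>   # re-copy from the central exe directory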

Monday, September 23, 2013

Find RMAN Backup Statistics

You can query the view V$BACKUP_SYNC_IO (for synchronous tape devices) to obtain the average MB transfer speed from RMAN to the tape device (or intermediary software if using the obk interface):

SQL> select avg(EFFECTIVE_BYTES_PER_SECOND)/1024/1024 MB_per_s
       from V$BACKUP_SYNC_IO
      where DEVICE_TYPE='SBT_TAPE';

MB_PER_S
-----------
16.1589822
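
For asynchronously written devices, the equivalent view is V$BACKUP_ASYNC_IO; a similar query (adjust the DEVICE_TYPE filter to match your configuration) would be:

SQL> select avg(EFFECTIVE_BYTES_PER_SECOND)/1024/1024 MB_per_s
       from V$BACKUP_ASYNC_IO
      where DEVICE_TYPE='DISK';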


Thursday, September 19, 2013

SAP Unicode Conversion Nametab Inconsistency

During a unicode conversion of a SAP NW731 system, I saw a problem where a number of BI FACT tables (/BIC/E*) were present in the SAP nametab and existed in the Oracle database, but did not exist in the DDIC (the SAP data dictionary, visible in SE14).

(Screenshot: SPUMG nametab check)

I asked the BI administrator to confirm that these tables were not referenced in any BI cubes, and they weren't.  He suggested that the tables used to belong to a cube that had long since been deleted.  This means that at some point a program bug must have left the nametab inconsistent with the DDIC.
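
To list the orphaned entries, you can compare the nametab against the DDIC at the database level. A minimal sketch, assuming the standard SAPSR3 schema and that DD02L (the DDIC table directory) is the right comparison point:

SQL> select tabname from sapsr3.ddntt
      where tabname like '/BIC/E%'
     minus
     select tabname from sapsr3.dd02l;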
There are no SAP notes about what to do in a situation like this, but there are two options:
1. Exclude the tables from the unicode conversion in transaction SPUMG by adjusting the exceptions list; or
2. Manually adjust the SAP nametab.
I chose option 2, since this was the cleanest option and would hopefully leave the system in a better state for future updates.

I found SAP note 29159 contained some useful information on a similar subject.  The note suggested writing some simple ABAP code to delete these tables from the SAP nametab tables DDNTT and DDNTF.

Whilst this was simple enough, I decided that I didn't need to go as far as writing ABAP.  I manually removed the entries at the database level using SQL:

SQL> delete from sapsr3.ddntt where tabname ='<TABLE>';
SQL> delete from sapsr3.ddntf where tabname ='<TABLE>';


I then restarted the system (alternatively, you can reset the nametab buffer with the OK-code /$NAM).
This fixed the issue and allowed the unicode conversion to continue.

UPDATE: I've since found that it's possible to list the contents of the Nametab buffer and delete specific tables from the buffer using the function modules DD_SHOW_NAMETAB and DD_NAMETAB_DELETE.

Thursday, September 12, 2013

SAP R3load table splitter - Table Analysis Performance

Be careful when using the R3load table splitter from the Software Provisioning Manager screens.
The install guide asks you to supply a table split file in the format "<TABLE>%<# SPLITS>".
However, this format does not specify the table column to split by.
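
For example, a split file requesting ten splits of one table and eight of another (the table names here are purely illustrative) would contain:

EDI40%10
COEP%8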

When splitting large tables, during the Table Splitting preparation phase (before you even start the exports), R3ta can run for quite a while whilst it scans the available indexes and then counts the number of rows in the table(s) to be split.

It's trying to determine the most suitable column(s) to use for generating the WHR files, which contain the query predicates for each export split.
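
If you want an idea of which columns R3ta is likely to favour, you can inspect column cardinality from the Oracle optimizer statistics yourself. A minimal sketch for an illustrative table (this is my own check, not what R3ta actually executes):

SQL> select column_name, num_distinct
       from dba_tab_columns
      where owner = 'SAPSR3'
        and table_name = 'EDI40'
      order by num_distinct desc;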

I tried adding a specific column during the initial table splitter screens, where you can specify the column to use. However, this seems to be completely ignored.

The best advice is to prepare your table split during your proof phase in the sandbox environment, then potentially adjust the final WHR files manually to account for any additional rows in the table(s) to be split.
This will save a lot of time and effort in the actual production conversion run.
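
From memory, the generated WHR files look roughly like the following; the table, column and boundary value here are invented for illustration, so verify against the files R3ta actually produces in your sandbox:

tab: EDI40
WHERE ( "DOCNUM" <= '0000000005000000' )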

Also, ensure that the indexes on those tables, especially the ones that the WHR predicates refer to, are rebuilt if possible.
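
On Oracle, the rebuild can be done online; a sketch, where the index name (the SAP primary index convention "<TABLE>~0" is used here for illustration) should be replaced with the real index behind your predicate:

SQL> alter index sapsr3."EDI40~0" rebuild online;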