Recent Posts

Thursday, November 27, 2014

Netgear ReadyNAS Duo Samba Performance

If you access your Netgear ReadyNAS Duo through Windows Explorer and find it a little slow when reading/writing, you can get a little speed boost by adding one line to the Samba configuration.

You'll need root access to the ReadyNAS via SSH.

Backup the current config file:

# cp -p /etc/samba/smb.conf  /etc/samba/smb.conf.bak

Then as root, edit the samba config file:

# vi /etc/samba/smb.conf

Add the new line under section "[global]":

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536

Save the file (ZZ).
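
Before restarting, you can sanity-check that the new line has been picked up (assuming the testparm utility is present on the ReadyNAS); it should echo the setting back from the parsed config:

# testparm -s 2>/dev/null | grep -i "socket options"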

Restart Samba:

# /etc/init.d/samba restart

Stopping Samba daemons: nmbd smbd.
Starting Samba daemons: nmbd smbd.

You should notice a performance improvement when transferring files.

Thursday, November 20, 2014

Secure ReadyNAS Duo (v1) ADMIN Share

If you have a ReadyNAS Duo, you're happy with your setup and you're now sharing your shares out through your router over the internet, you need to be aware that anyone can try to access your ADMIN share (e.g. https://<your-readynas>/admin).

I use mine in exactly that way, but don't want Mr A. Hacker trying out a myriad of passwords on my ADMIN share just because my public shares have "Netgear ReadyNAS" plastered all over the front page (a tip for another day, I feel).

Instead, if you're comfortable using SSH (there is also a way to do this using a FrontView config backup: edit the file and put it back in place), you can edit your Apache httpd.conf configuration file so that access to the ADMIN share is restricted to a host or hosts on your local home network only.

Steps:

1, Log into your ReadyNAS via SSH as root.
2, Backup your old config file:

# cp -p /etc/frontview/apache/httpd.conf  /etc/frontview/apache/httpd.conf.bak

3, Use 'vi' to edit the httpd.conf:

# vi /etc/frontview/apache/httpd.conf

4, Change the sections as follows:

<Location /admin>
DirectoryIndex index.html
Options ExecCGI
AuthType Basic
AuthName "Control Panel"
require user admin

# block external admin.
Order Deny,Allow
Deny from all
Allow from 192.168 <<< INSERT YOUR LOCAL NETWORK IP ADDRESS SUBNET HERE
</Location>

and

<Location /get_handler>
SetHandler perl-script
PerlHandler get_handler
PerlSendHeader On
Options ExecCGI

# Order allow,deny
# Allow from all
AuthType Basic
AuthName "Control Panel"
require user admin

# block external admin.
Order Deny,Allow
Deny from all
Allow from 192.168 <<< INSERT YOUR LOCAL NETWORK IP ADDRESS SUBNET HERE
</Location>

plus

<Location /dir_list>
AuthType Basic
AuthName "Control Panel"
require user admin
Options ExecCGI
#Allow from all

Order Deny,Allow
Deny from all
Allow from 192.168 <<<-- Insert your subnet here.
</Location>

5, Save the changes with:

<shift + 'ZZ'>

6, Restart your ReadyNAS:

# shutdown -r now

7, Test from your local network that you can access the ADMIN share:

https://<readynas IP>/admin

8, Test from the internet that you can't access the ADMIN share:

https://<ISP IP>/admin

You should see an HTTP 403 FORBIDDEN error.
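
If you prefer to test from a command line, a quick check with curl shows the HTTP status code without a browser (assuming curl is available; -k skips the self-signed certificate check, -I sends a HEAD request):

# From a blocked (external) host - expect 403 Forbidden.
curl -k -I https://<ISP IP>/admin

# From an allowed (internal) host - expect 401, i.e. the normal password prompt.
curl -k -I https://<readynas IP>/admin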

That's it.
If you made an error, you can restore your config from the backup file you took:

# cp -p /etc/frontview/apache/httpd.conf.bak /etc/frontview/apache/httpd.conf

and then restart your ReadyNAS.
Don't forget to check the config again after you make any changes to shares, firmware etc.

Thursday, November 13, 2014

SAP Kernel to EXT or not to EXT...

Scenario: You're at the point where you are installing a new system and your choice of Kernel is down to the EXT version or the non-EXT version.  Which version should you use?

The difference between the EXT version of a Kernel and the non-EXT version is simply down to the version of the compiler and the compilation Operating System used by SAP to compile the Kernel binaries.

As an example, the 7.21 kernel could be compiled on Windows 2003 Server, using the Visual Studio 2010 compiler.
The 7.20EXT kernel could be compiled on Windows 2003 Server, using the Visual Studio 2012 compiler.

The difference is all about the compilation environment, and nothing to do with functionality.  Or is it...
If you look at SAP note 1926209 - "Questions on subjects of 7.20/7.21 EXT kernel and C/C++ runtime", this would seem to be the case.
However, if you read SAP notes 1756204 - "Activate fast polling" and 1728283 - "SAP Kernel 721: General Information", you will see that SAP can and will change the functionality between an EXT and non-EXT kernel (7.21 is used as the example here).
So, be wary and always read up about the benefits of each Kernel, whether EXT or not.

Thursday, November 06, 2014

IBM DB2 10.1 Statistics Replication

Scenario: You would like to test a new index in an IBM DB2 10.1 database.  Unfortunately your development system doesn't have anywhere near the same number of records in the database table on which you are creating the new index.

Like many other RDBMSs, DB2 uses table and column statistics to influence the optimiser's decision as to which access path to choose at execution time.
By replicating only the statistics, it's possible to fool the optimiser into thinking the table has more records present than it really does.  This means that it's likely to choose a different access path.

When performance tuning a database, it's useful to use this method of fooling the optimiser, because you can emulate a larger table in a small development system with little actual data.

The process in DB2 is like this:
- Generate (or export) the statistics for a table in a production database system (PRD) schema DBA.
- Modify the export file.
- Upload the contents of the export file into a development database system (DEV) schema DBB.
- Test.

Step 1 - Export the statistics for a table in production.

Connect into the production database (DBA), then use the db2look command to create an export file containing the UPDATE commands for adjusting the statistics:

db2prd> db2 connect to PRD

db2prd> db2look -d PRD -e -c -m -r -t DBA.TABLE1 -o table1_STATS.sql

The output will be written to the table1_STATS.sql file in the current directory.

Step 2 - Modify the export file.
You should edit the output file to remove all lines before the line “-- Mimic table TABLE1”; failure to do this could mean dropping the TABLE1 table in development.

You must also edit the file and replace all instances of schema “DBA” with “DBB” to ensure that the correct development database schema is found.
The modified file will look like:

-- Mimic table TABLE1

UPDATE SYSSTAT.TABLES
SET CARD=2341434,
NPAGES=14636,
FPAGES=14645,
OVERFLOW=9473,
ACTIVE_BLOCKS=0
WHERE TABNAME = 'TABLE1' AND TABSCHEMA = 'DBB';

UPDATE SYSSTAT.COLUMNS
SET COLCARD=1,
NUMNULLS=0,
...
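
If you'd rather script the schema swap than edit it by hand, a sed one-liner along these lines (GNU sed assumed; -i.orig keeps a backup copy) covers both the single-quoted and double-quoted forms of the schema name - check the result before loading it into DEV:

sed -i.orig -e "s/'DBA'/'DBB'/g" -e 's/"DBA"/"DBB"/g' table1_STATS.sql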

Step 3 - Upload the statistics into the development database.

db2dev> db2 connect to DEV
db2dev> db2 -tf table1_STATS.sql
...
DB20000I The SQL command completed successfully.

DB20000I The SQL command completed successfully.

SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000

SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a query is an empty table. SQLSTATE=02000

You're now ready to test.

To reset (re-gather) the statistics on the development database table, you simply need to re-collect statistics using the RUNSTATS command: "db2 RUNSTATS ON TABLE TABLE1 WITH DISTRIBUTION AND INDEXES ALL".
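
To confirm that the replicated statistics actually landed in the development schema, you can query SYSSTAT.TABLES directly; the columns are the same ones set by the UPDATE statements above:

db2dev> db2 "SELECT CARD, NPAGES, FPAGES, OVERFLOW FROM SYSSTAT.TABLES WHERE TABNAME = 'TABLE1' AND TABSCHEMA = 'DBB'"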



Thursday, October 30, 2014

Finding Your SAP F&R Version

The Forecasting & Replenishment offering from SAP runs on SAP SCM.
SAP originally bought the F&R binary calculation engine from a Swiss company called SAF.  This was integrated into the SCM platform and is called through an RFC connection.

If you're planning an upgrade, you need to be able to easily identify which version you're running.

There are two areas to check:
- The SAP SCM version.
- The SAF (FRP) binary version.

Check in SPAM (or in SAP GUI, System -> Status -> Component Version) for the SAP SCM Server version.

Note that:
SCM 7.02 (EHP 2) = F&R 5.2.
SCM 7.01 (EHP 1) = F&R 5.1.

Checking the SAF binary must be done at the operating system level.
The usual location is either “/usr/sap/<SID>/SYS/global/frp/bin" or "/usr/sap/<SID>/FRP/bin".

As the <sid>adm user, simply call the "safcnfg" binary with the "-version" command line option:
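
For example (the exact path will depend on which of the two locations above applies to your system):

> cd /usr/sap/<SID>/FRP/bin
> ./safcnfg -version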

SAP F&R SAF binary version  

See SAP note 1487615 for details on where to find FRP binary patches.

Finally, you should note that SAP SCM is itself a Business Suite software package like SAP ERP.  Therefore, it is not classed as a HUB or Sidecar landscape pattern, but instead as a source business system.  This means that there is no real dependency link to the SAP ERP version (provided you're on the same technology platform level, e.g. 7.31).
You do need to consider that the interface from ERP to F&R may need some notes applying, some of which may be better implemented through an SPS upgrade instead of notes upon notes. 

See application component SCM-FRE-ERP.

Thursday, October 23, 2014

SAP FRP Data Environment Locations

The locations on disk where the SAP FRP data environments are stored are administered through transaction /FRE/FRP_ADMIN.

However, if you don't have authorisation for this transaction, you can also check the table "/fre/frpmid_srv" (a client-dependent table).

Friday, October 17, 2014

SAP NW 7.31 Java - HTTP Tracing

Scenario:  You would like to trace an HTTP connection to a Netweaver 7.31 Java stack.

Prior to NW 7.31 you would have enabled tracing in either the old Java Administrative Tool, or newer Netweaver Administrator by setting the "enable" flag for tracing parameters of the HTTPProvider dispatcher service (see my previous blog article here).

Since Netweaver 7.31, the Java stack now includes an ICM (Internet Communication Manager) which is the main access point to which HTTP communications are routed to the Java server nodes.
Therefore, tracing in NW 7.31 Java is actually more like the ABAP stack tracing, because you simply increase the trace level in the ICM and then check the dev_icm trace file.

So the next question is, how do you access the ICM to increase the trace level in a Java stack?
Well this can be performed with the help of the SAPMC (only one "M" and not to be confused with SAP MMC for Windows).
The SAPMC was introduced in later NW 7.0 patches and is part of the sapstartsrv service.
The sapstartsrv process actually listens on TCP port 5<##>13.  So if you have access through your firewall, navigate to http://<app server>:5<##>13 and the Java-based applet will start up (note that you'll need Java 1.4 or above on your PC).

Once loaded, you can see the SAP MC and expand the "AS Java" branch to locate the "Process Table" item:

SAP MC management console

From the "Process Table" you should be able to right click the ICM (on the right) and increment the trace level:

SAP MC increase ICM trace level

The trace level is dynamically increased.
To see the trace file, right click again and select "Show Developer Trace":

SAP MC ICM display trace

You can always see the current trace level by right clicking the ICM and selecting "Parameters...", or actually right clicking the ICM node further up the tree and selecting "Parameters...":

SAP MC show ICM parameters

The current trace level is shown as the "rdisp/TRACE" parameter value on the Dispatcher tab:

SAP MC rdisp/TRACE ICM
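
If you just want to read the current value without the applet, the same sapstartsrv web service can be queried from the command line with sapcontrol (run as <sid>adm; depending on your service/protectedwebmethods setting you may be asked to authenticate):

sapcontrol -nr <##> -function ParameterValue rdisp/TRACE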

Again, right clicking the actual ICM node in the tree gives you direct access to the ICM management web page:

SAP MC ICM management web page

From the ICM management page, you can also see the current trace level:

ICM Manager Monitor

Good luck with your tracing.

Sunday, October 05, 2014

Netgear ReadyNAS & DynDNS Replacement

With the demise of free DNS services from companies such as DynDNS, I was left with no free method of reliably accessing my Netgear ReadyNAS via the internet.

Before these free services closed down, my ReadyNAS was configured to automatically update its IP address to the DynDNS service website, which allowed me to use a DynDNS-provided host address (e.g. mydevice.dyndns.org) to access the ReadyNAS no matter what its IP address was changed to by my home ISP.

When the free service eventually stopped, I decided that I didn't want to pay for such a small service offering, so I wrote a small script to perform the same sort of service.
The script simply checks my current ISP-assigned IP address and compares it with the value from the previous run of the script.  If the IP address has changed between checks, the script sends an email to my email account, which I can then use to access the device by entering the IP address directly into my web browser.

It's not neat, but it works.
Here's how I did it.

What you'll need:
- Root access to your ReadyNAS (you need the SSH addon installed).
- Configure email alerting on your ReadyNAS (do this in FrontView by giving it access to your email provider e.g. smtp.gmail.com).

Step 1 - Create the script.

As the root user connected to your ReadyNAS via SSH, use vi to create a small script in the home directory of the root user:

# cd $HOME
# vi check_ip_address

#!/bin/bash
#--------------------------------------------------------
#  Get IP address and mail it to us.
#  Place this script in /root (the root user's home).
#  Call it from crontab.
#
#--------------------------------------------------------
user="ReadyNAS"
email_from=$(grep '^from' /etc/msmtprc | cut -f2 -d' ')
email_addr=`grep email /etc/frontview/alert.conf | awk -F '!!' '{ print $2 }' | awk -F ',' '{ print $1 " " $2 " " $3 }'`

subject="IP Address Changed"
newfile=/tmp/myip.html
oldfile=/tmp/myip.old

# Call a generic site to get our IP.
wget --no-cookies --no-cache --user-agent="Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2049.0 Safari/537.36" --no-verbose --output-document=$newfile http://www.askapache.com/online-tools/whoami/

# Read output.
myipnew=`awk -F "tt>" '{ if($2 ~/\./){ sub(/<\//,"",$2); print $2} }' $newfile`
myipold=`cat $oldfile 2>&1`
if [ "$myipnew" != "$myipold" ]; then
  # IP is different to last, so save it and mail it.
  echo "$myipnew" > $oldfile
  mesg="New IP Address: $myipnew"
  headers="From: $email_from\nTo: $email_addr\n"
  full_content="${headers}Subject: ${subject}\n\n$mesg"
  echo -e "$full_content" | /usr/sbin/sendmail -B8BITMIME -f $email_from $email_addr &> /dev/null
  echo "RC: $?"
fi

Set permissions on the script:

# chmod 750 check_ip_address

Step 2 - Schedule the Script.

Add the script to a cron schedule:

# export EDITOR=vi
# crontab -e
30 09 * * * /root/check_ip_address
30 14 * * * /root/check_ip_address

The above schedules the script to run at 09:30am every day, and again at 14:30.
That's it!

Saturday, August 02, 2014

Power Notes Searcher Updated to v1.1

The Chromium project has recently (May 2014) fixed a bug in the Chrome web browser which means that users of my Power Notes Searcher Google Chrome extension may have seen an issue with the table ordering in the SAP Notes history table.

I have now made a slight correction to the extension and v1.1 is now available for update/install in the Google Chrome Extensions Web Store (or from the link on my initial blog post here).

If you haven't already installed my extension, give it a go.  You don't know what you're missing!

Monday, July 07, 2014

Blog Readership, Almost 7,000

Since December 2013, my blog readership has really taken off.
It's nearly at 7,000 page views per month.

Which is the most popular post of all time?  Well, it's still my multi-post guide to basic performance tuning a SAP system (published in 2011), but very closely followed by my recent HANA installation into a VM post.
You will also notice that a lot of readers are looking for the FICO authorisation objects F_REGU_BUK and F_REGU_KOA.

We can see that the US is very dominant in the stats, followed by India and then Germany.  The UK is in 4th place.

It's a close call between IE and Chrome in the browsers used.
Good job I've written my free extension for searching SAP notes, in Chrome!


Thanks for reading everyone.
I hope I've provided you with the solutions you've sought.

Darryl

Thursday, June 26, 2014

HANA DB Re-initialisation Without Reinstall

Scenario:  You have a small test system or PoC system and you want to revert or recreate the HANA DB like you've just opened the box and installed it from new, but without the hassle of the reinstall.

This is completely possible using the hdbnsutil command line program.
With HANA shutdown, connect to the HANA server via SSH as the <sid>adm Linux user, then run the hdbnsutil command line program as follows:

hana01:/usr/sap/H10/HDB10/exe> hdbnsutil -initTopology

checking for inactive nameserver ...
nameserver hana01:31001 not responding.
creating persistence ...
run as transaction master writing initial topology...
writing initial license: status check = 2
done

As you will see, it recreates the persistence layer (database) and also re-creates the license.
You will need to reinstall your HANA system license after the re-initialisation process is complete.
On a slow system, the process took approximately 2 minutes.

Due to the size of my system, I am unable to tell you if this process destroys any specific configuration.  From what I can tell, the existing global.ini, nameserver.ini and indexserver.ini are kept.

You should also note that the SYSTEM user password is reset to its default value of "master".
Plus, if you have enabled encryption, the reinitialised data volumes will be re-encrypted unless you de-check the checkbox on the Data Volume Encryption tab inside the Security tab, prior to reinitialisation.

Encrypting HANA Data Volumes

Out of the box, the HANA database is not encrypted.
For those businesses that demand encryption, from within HANA Studio you can activate the encryption of the HANA data volumes.
The HANA documentation supplied with SPS7, suggests that the recommended approach is to encrypt the data volumes as part of the HANA installation process.  This is due to the "Copy-on-write" method used to persist data on disk.  An already used database may still have unencrypted data pages in it, even after encryption is enabled.

You should note that the words "data volumes" means the location of the database files for the indexserver and the statistics server, which is usually something like "/hana/data/<SID>/mnt0000<n>/hdb0000<n>/*.dat".

Tip: You can check which values of "<n>" will be used by looking at the "Landscape Overview" tab and the "Volumes" tab within it.

Prerequisites:
- SAP recommend changing the SSFS encryption key after installation of the HANA system.  This key is used to store the data volume encryption key and others.
- Disk space of the data volume areas (we discuss this below anyway).
- Take a cold backup of the HANA DB area and config files.
- Access to the HANA Studio (you can do this with SQL access and hdbsql, but I don't show this).

Let's begin:

From within HANA Studio, open the "Security" tab and navigate to the "Data Volume Encryption" tab.
You will notice that encryption is not already enabled.
Tick the "Activate encryption of data volumes" tick box, then click the "Execute" button:

HANA volume encryption activation

HANA data volume encryption execute

The encryption process will start immediately ("Encryption Pending" then "Encryption Running"):

HANA volume encryption pending

The status is updated when each server process finishes:

HANA volume encryption running

Once the encryption process is completed successfully, the tick box is enabled again for you to be able to de-encrypt the volumes should you wish to reverse the process:

HANA volume encryption encrypted

My small (empty) HANA DB @ sps7, took approximately 10 minutes to encrypt.
The data volume sizes increased as part of the encryption process.
The indexserver data volume went from 324MB used to 341MB used:

HANA volume encryption indexserver size

HANA volume encryption indexserver size

Of more importance is the dramatic increase in the allocated (Total Size).
It's gone from 513MB to 896MB!  You will need to be aware of the disk space you may need before enabling encryption.

The statistics server is smaller and went from 80MB used to 101MB used:

HANA volume encryption statisticsserver size

Again, notice that we've got an increased allocation from 320MB to 400MB.
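
If you want the same size figures without clicking through the screens, the data volume files can also be queried from an SQL console (assuming your revision provides the M_VOLUME_FILES monitoring view; sizes are reported in bytes):

SELECT VOLUME_ID, FILE_NAME, USED_SIZE, TOTAL_SIZE FROM M_VOLUME_FILES WHERE FILE_TYPE = 'DATA';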

Validate the encryption status using an SQL console:

SELECT * FROM M_PERSISTENCE_ENCRYPTION_STATUS;



What are the implications for encryption?

Well, the data volumes have been encrypted, but we've not seen anything about encrypting the logs or the backups.
Also, to preserve the performance of the HANA system in memory, the data is decrypted when read from the disk into memory.

Is encrypting the HANA DB transaction logs feasible?  Probably not.  There are various whitepapers detailing the issues of encryption of data stored on SSDs.  Since the majority of high performance appliance resellers use SSDs for the HANA DB transaction logs, the use of software layer encryption on SSDs is not worth the effort and would probably reduce the performance of the database.  Instead, using the SSD hardware layer encryption may prove usable, but this is only worthwhile if you think that the SSD could be physically stolen.  SSD hardware encryption doesn't prevent an intruder at the O/S level from seeing the data.

Is encrypting the HANA DB backups (data and logs) feasible?  Yes this is definitely something that should be employed.  There are 3rd party backup tools that will take a copy of the HANA backup files, then encrypt and store elsewhere.  But these tools may not support secure wipe, so the disk location of the backup files would potentially still contain the data.  Using backup tools certified for SAP with the BackInt interface would be better, since the data doesn't touch the disks, it's piped straight to the backup device via the "network".

There is possibly some slight performance impact from encrypting the data volumes.  The data files are written to at least every 5 minutes.  They are also read from during HANA start up.  These I/O operations will be the same speed as they were before, except there will be some additional software routines involved to perform the encryption, which will use slightly more memory and mean a slight delay between the data being read/written to/from the disk.

Be aware of bugs in the backup/restore process.
I've seen mention of bugs in other software products where an encrypted backup is void due to a software bug.  Ensure that you test your solution thoroughly.

SAP recommend that you change the Page Encryption Key (used to encrypt the data volumes) regularly as per your organisation's standards.  This may yet again increase the allocated size.

Summary:
- Data volume encryption is easy and fast in HANA.  There's not really any reason to not implement it.  Beware of implementing in an already populated database and ensure you change the keys regularly.
- Backups and transaction log volumes are not encrypted and for the logs there's a good reason why you may not want to.
- Performance in certain scenarios could be affected slightly.
- You should attempt to implement a supported Backint capable backup product with encryption, since the backups with this method don't touch the unencrypted disks.
- Encryption can be performed and validated at the SQL command level.
- Be aware that data volume encryption in HANA will require more disk space to be allocated to the data volumes, increasing the footprint of the HANA database by as much as 50%.

SAP HANA Backup Allocation Failed

During a HANA backup, you get an "Allocation Failed" error.
This is caused by a lack of memory.  If possible, increase the memory available to HANA by increasing the free memory at the O/S level (e.g. shutting down other HANA instances), or increase the global allocation limit.
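
As a rough sketch, the limit lives in the instance's custom global.ini (the same file shown in the hdbcons post further down this page); the [memorymanager] section is the usual location, and the 16384 (MB) figure is only an example to adapt to your own sizing:

# /usr/sap/<SID>/SYS/global/hdb/custom/config/global.ini
[memorymanager]
global_allocation_limit = 16384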

Thursday, June 19, 2014

HANA OOM Error Tip #1 - Partition Tables Correctly

If your HANA system is regularly experiencing OOM (Out Of Memory) errors, then there are a number of things that you can do to try and reduce memory consumption.

Tip #1:  Partition Large Tables Correctly
If there are large Column Store tables in your HANA system, you should partition them.
Whilst this is an absolute must in a HANA scale-out scenario (for enabling multi-node parallelism), it might not be so obvious that it can also help in a single node HANA system.
Partitioning a column table means that only the required partitions of the table are loaded into memory when accessed, you would think.

Partitioning a large table into smaller chunks will therefore help to reduce the memory usage of the table during SQL queries and also during updates.
During updates, each partition gets its own delta cache area.

Choosing how to partition a table is slightly more difficult and will depend on whether the table is a custom table, SAP standard table or other.
Plus, you will need to know which queries or updates are executed against the table, and how.  A period of monitoring is suggested, to enable you to collect the information required to make a decision.

One thing you should try to do, is partition the table using the most logical, selective columns.
Read on for a simple example with a twist!

A simple example: a single-node HANA system has one large column table, T1.
The table is partitioned into ranges based on the date column INVOICEDATE:

CREATE COLUMN TABLE "DARRYL"."T1" ("INVOICEREF" VARCHAR(1) NOT NULL ,
     "INVOICEDATE" DAYDATE CS_DAYDATE NOT NULL ) UNLOAD PRIORITY 5 AUTO MERGE WITH PARAMETERS ('PARTITION_SPEC' = 'RANGE year(INVOICEDATE) 2000-2001,2001-2002,2002-2003,*')
;
CREATE UNIQUE INDEX "I1" ON "DARRYL"."T1" ( "INVOICEREF" ASC ) NONLEAF PARTIAL KEY LENGTH 1;

As you can see, I've created 3 partitions by year:  2000 to 2001,  2001 to 2002 and 2002 to 2003.
This will actually create 4 partitions:  year 2000,  year 2001, year 2002 and year <OTHER>.

HANA Table Distribution

Insert 5 records into the table:

insert into darryl.t1 (INVOICEREF,INVOICEDATE) values('1','2000-01-01')
insert into darryl.t1 (INVOICEREF,INVOICEDATE) values('2','2001-01-01')
insert into darryl.t1 (INVOICEREF,INVOICEDATE) values('3','2002-01-01')
insert into darryl.t1 (INVOICEREF,INVOICEDATE) values('4','2003-01-01')
insert into darryl.t1 (INVOICEREF,INVOICEDATE) values('5','2004-01-01')

Inside the Table Distribution tab, you will now see that the records have been inserted according to their values into the respective partitions (see Raw Record Count field on the right):

HANA Table Distribution

The last two records for year 2004 and 2003 are in the fourth partition.
You can also see that each partition has a Delta Size, and that the Delta Size for the fourth partition with the most records, is larger than the other partitions.
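
The same per-partition figures shown in the Table Distribution tab can also be pulled from an SQL console (assuming the M_CS_TABLES monitoring view and these columns are available in your revision):

SELECT PART_ID, LOADED, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_TABLES
 WHERE SCHEMA_NAME = 'DARRYL'
   AND TABLE_NAME = 'T1'
 ORDER BY PART_ID;
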
Unload the table from memory:

HANA Unload Table from memory
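
If you prefer SQL to the right-click menu, the equivalent statement should simply be:

UNLOAD "DARRYL"."T1";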

Refreshing the Table Distribution tab now shows the table partitions to have zero size in memory:

HANA Table Distribution

Now select the records for the years 2003 and 2004 only:

select * from darryl.t1 where invoicedate in ('2004-01-01','2003-01-01')

Refreshing the Table Distribution tab now shows the tables to have non-zero size in memory for ALL partitions!

HANA Table Distribution

All of the records from all of the partitions appear to be loaded!
What went wrong?
Well, it's simple: we didn't create an index on the column INVOICEDATE.
This forced HANA to scan through the entire table to access the required records, meaning that it needed to load them all into memory.

Let's create an index on INVOICEDATE:

CREATE UNIQUE INDEX "I1" ON "DARRYL"."T1" ( "INVOICEDATE" ASC ) NONLEAF PARTIAL KEY LENGTH 1;

Unload the table from memory:

HANA Unload Table from memory

Refreshing the Table Distribution tab now shows the tables to have zero size in memory:

HANA Table Distribution

Now re-execute the SELECT statement:

select * from darryl.t1 where invoicedate in ('2004-01-01','2003-01-01')

Once again, on the Table Distribution tab, we can see that it has accessed all partitions, AGAIN!:

HANA Table Distribution

What went wrong this time?  Well, HANA doesn't yet have any statistics on the table data, so it simply ignored the index.
If you now unload the table from memory once again (we haven't done anything else):

HANA Unload Table from memory

Now re-execute the SELECT statement:

select * from darryl.t1 where invoicedate in ('2004-01-01','2003-01-01')

Look at the Table Distribution tab:

HANA Table Distribution

You can see that HANA has now only accessed the final partition of the table.  The other partitions have not been loaded into memory.
At first I thought this feature might be due to statistics, so I tried removing them from the table T1 (drop statistics on T1;).  Then I retried the process of unloading and re-running the query.  This had no effect; HANA correctly went straight to the fourth partition.
This left me with one other option, the Plan Cache.

Clearing the Plan Cache using:

ALTER SYSTEM CLEAR SQL PLAN CACHE

I then re-ran the test by unloading the table from memory:

HANA Unload Table from memory

Re-executing the SQL SELECT:

select * from darryl.t1 where invoicedate in ('2004-01-01','2003-01-01')

Look at the Table Distribution tab:

HANA Table Distribution

Bingo!
The Plan Cache was storing some form of execution plan statistics that meant that it was accessing the fourth partition straight away.
Each time the table is unloaded, the statistics from the existing Plan Cache remain and are re-used upon next execution of the query, which means HANA is able to go straight to the fourth partition.

Summary:
Partitioning is a great way of parallelising access to a table in HANA.
It also serves to help reduce memory by only loading specific partitions into memory when they are required.
In order to effectively use partitioning, you need to partition on an indexed column.
The initial access of a newly partitioned table with a new index does not gain the benefits of partitioning until the second subsequent access of the table, due to the Plan Cache.  A method/process of pre-loading the execution plan/statistics into the cache is required.

Thursday, June 12, 2014

HANA - Emergency Shutdown

For testing purposes, you can perform an emergency shutdown of the HANA indexserver forcing it to simply crash out.
This will require a DB recovery (automatic) upon restart!

Use the hdbcons utility (as <sid>adm) to first activate the crash functionality, then invoke the crash:

> hdbcons
> crash activate
> crash emergencyshutdown

Inside the indexserver trace file (from diagnosis view in HANA Studio), you will be able to see the following text towards the end of the file "ExternalCommandHandler.cpp (00958)  NOTE: INTENTIONAL CRASH: errorExitWithoutCrashdump".

Once the restart is complete, you can verify that recovery was required from the indexserver trace file:

"LoggerImpl.cpp(00933) : Replayed 130944B (0MB) of log in 17.4729 seconds; 0.00714693MB/s; max known TID=50175"

Thursday, June 05, 2014

Native or Direct HANA Connection

When using the SAP HANA Studio, you can configure the connection to the HANA DB as NATIVE or DIRECT (from Window -> Preferences -> General -> Network Connections).
When NATIVE is selected, this means that the internet connection details of the native Operating System are used.

In Windows this is usually the same as the "Internet" settings configured through Windows Control Panel, or through Internet Explorer's options menu.

In this setting, HANA Studio will utilise whatever proxy server you have configured for internet access.
When DIRECT is selected, any proxy settings are controlled from within HANA Studio and any Operating System settings are ignored.

This is the reason that SAP say to select DIRECT whenever you experience connectivity issues.
Are there any performance benefits from DIRECT?  Maybe.  Since the HANA Studio doesn't need to query for any proxy server settings in the Windows registry, it's possible that the initial connection could be established more quickly.  Once connected, it probably doesn't make any performance difference over NATIVE.

Thursday, May 29, 2014

HANA Memory Allocation Limit Change With hdbcons for HANA

The SAP HANA command line utility hdbcons can be used to administer the HANA system directly from the Linux operating system command line.

It's a very powerful utility and I'm sure as time goes by, it will provide invaluable information and functionality for HANA DB administrators to help fine-tune the database.

Log in as the <sid>adm Linux user and run the hdbcons command.
It will automatically connect to the indexserver for that instance (if it's running).
You can then list the available server commands:

hana01:/usr/sap/H10/HDB10> hdbcons
SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process 'hdbindexserver' on system 'H10', instance '10'
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 4092)
[OK]
--
>
> help

Available commands:
ae_checksize - Check and Recalculate size of columns within the Column Store
authentication - Authentication management.
bye - Exit console client
cd - ContainerDirectory management
checktopic - CheckTopic management
cnd - ContainerNameDirectory management
context - Execution context management (i.e., threads)
converter - Converter management
crash - Crash management
crypto - Cryptography management (SSL/SAML/X509).
deadlockdetector - Deadlock detector.
debug - Debug management
distribute - Handling distributed systems
dvol - DataVolume management
ELF - ELF symbol resolution management
encryption - Persistence encryption management
event - Event management
exit - Exit console client
flightrecorder - Flight Recorder
help - Display help for a command or command list
log - Show information about logger and manipulate logger
mm - Memory management
monitor - Monitor view command
mproxy - Malloc proxy management
mutex - Mutex management
output - Command for managing output from the hdbcons
pageaccess - PageAccess management
page - Page management
pcm - Performance Counter Monitor management
profiler - Profiler
quit - Exit console client
replication - Monitor data and log replication
resman - ResourceManager management
runtimedump - Generate a runtime dump.
savepoint - Savepoint management
snapshot - Snapshot management
statisticsservercontroller - StatisticsServer internals
statreg - Statistics registry command
stat - Statistics management
tablepreload - Manage and monitor table preload
tracetopic - TraceTopic management
trace - Trace management
version - Version management
vf - VirtualFile management

For each of the above commands, additional help can be obtained by re-issuing "help" plus the command name e.g.:

> help mm

Synopsis: mm <subcommand> [options]: Memory management Available subcommands:
   - list|l [-spf] [-l <level>] [<allocator_name>]: List allocators recursively
      -C: work on composite allocators
      -s: Print some allocator statistics (use stat command for full stats)
      -p: Print also peak usage statistics
      -f: Print allocator flags
      -S: Sort output descending by total size
      -l <level>: Print at most <level> levels
   - flag|f [-C] <allocator_name> [-rsdt] <flag_set>: Set allocator flags
      -C: work on composite allocators
      -r: Set flags recursively
      -s: Set given flag(s), default
      -d: Delete given flag(s)
      -t: Toggle given flag(s)
      -a: Also apply changes to associated composite allocators (not allowed in context with '-C')
   - info|i [-f] <address>: Show block information for a given address
      -f: Force block info, even if block not known
   - blocklist|bl [-rtsS] <allocator_name> [-l <count>]: List of allocated blocks in an allocator
      -C: work on composite allocators
      -r: Show blocks in sub-allocators recursively
      -s: Show also allocation stack traces, if known. Cannot be combined with optiont '-t'.
      -S: Show only blocks with known allocation stack traces. Cannot be combined with optiont '-t'.
      -t: Show top allocation locations sorted by used size (descending).
            Cannot be combined with options '-s' or '-S'.
            The default number of printed locations is 10, this can be changed by '-l' option.
      -l <count>: Limit to <count> locations. Valid only if combined with option '-t'. Unlimited in cas <count> = 0.
   - maplist|ml: List all mappings used by the allocator
   - mapcheck|mc [-sln]: Check all mappings used by the allocator
      -s: Also show all system mappings
      -l: Also show own mappings as in maplist
      -n: Suppress output of known alloc stack traces for unaccounted memory
   - mapexec|me [-u]: List all known executable module mappings
      -u: Update list of known executable modules
   - reserveallocator|reserveallocators: Print information about OOM Reserve Allocators
   - top [-C] [-sctr] [-l <count>] [-o <fn>] [<allocator_name>]: List top users of memory, <allocator_name> is needed except for option '-t'
      -C: work on composite allocators
      -s: Sort call stacks by used size (default)
      -c: Sort call stacks by call count
      -t: Throughput, sort nodes by sum of all allocations
      -r: work recursively also for all suballocators
      -l <count>: Only output top <count> stack traces (0=unlimited)
      -o <fn>: Specify output file name
   - callgraph|cg [-sctrk] [-l <count>] [-o <fn>] [<allocator_name>]: Generate call graph, <allocator_name> is needed except for option '-t'
      -C: work on composite allocators
      -s: Sort nodes by used size (default)
      -k: Ouput in kcachegrind format
      -c: Sort nodes by call count
      -t: Throughput, sort nodes by sum of all allocations
      -r: Work recursively also for all suballocators
      -d: Do full demangle (also parameters and template arguments)
      -l <count>: Only output top <count> functions (0=unlimited)
      -o <fn>: Specify output file name (for .dot graph)
   - resetusage|ru [-r] [<allocator_name>]: Reset call stack usage counters, <allocator_name> is needed except for option '-r' or '-C'
      -C: work on composite allocators
      -r: Work recursively also for all suballocators
   - limit [-s <size>[K|M|G|T]]: Get current allocation limit in bytes
      -s <size>[K|M|G|T]: Set current allocation limit in bytes, KB, MB or GB
   - globallimit [-s <size>[K|M|G|T]]: get current global (if IPMM is active) allocation limit in bytes
      -s <size>[K|M|G|T]: Set current global (if IPMM is active) allocation limit in bytes, KB, MB or GB
   - garbagecollector|garbagecollection|gc [-f]: Return free segments/blocks to
     operating system
      -f: Also return free fragments in big blocks
   - ipmm: Print Inter Process Memory Management information
      -d: Print detailed information.
   - compactioninfo, ci: Print information about last compaction
   - virtual: Print information about virtual but not resident memory (linux only)
   - requested: Print information about requested allocations (reporting no overhead at all), iterates over all instances of ReportRequestedAllocators
   - blockedmemory [-s <size>[K|M|G|T]]: Get current blocked memory.
      -s <size>[K|M|G|T]: Set current blocked memory in bytes, KB, MB or GB and try to reserve this memory. Common options and arguments:
   - <allocator_name>: Name of the allocator to address
   - <flag_set>: Comma-separated list of following flags: ffence (fence front, writes the pattern 0xaaccaacc in front of the allocated block),
     bfence (fence back, writes the pattern 0xaaccaacc behind the allocated block), astrace (stack trace at allocation),
     dstrace (stack trace at deallocation), areset (overwrite at allocate with pattern 0xd00ffeed),
     dreset (overwrite at deallocate with pattern 0xdeadbeef), all, none, default, !emptyok (allow
     non-empty destruction), preventcheck (prevent changing check flags)
     atrace (trace at allocation), dtrace (trace at deallocation),
     malf (malfunction) or their 2-letter shortcuts [OK]
--
>

As an example, the current global allocation limit can be displayed:

> mm globallimit
Current global allocation limit=15032385536B.
[OK]
--

We can adjust the global allocation limit online by issuing the additional "-s" parameter:

> mm globallimit -s 16G
Current global (if IPMM active) allocation limit: 17179869184B
[OK]
--
>

Now re-check:

mm globallimit
Current global allocation limit=17179869184B.
[OK]
--
>

What's the current global allocation limit  in the global.ini you might ask?

hana01:/usr/sap/H10/HDB10> grep '^global'  /usr/sap/H10/SYS/global/hdb/custom/config/global.ini

global_allocation_limit = 14336

It hasn't changed.

So we have confirmed that we can affect the configuration of the HANA system in real-time using hdbcons, but we don't necessarily preserve the configuration.
You can also check in HANA Studio on the landscape page.
Since each service (process) takes its memory allocation percentage from the global allocation, this will automatically change in real-time too.
This means that for analysing "what-if" style scenarios or for operational acceptance testing, you can effectively present a set of configuration values and work through them almost automatically.  Invaluable for those test automation guys in the field.
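
If you do want a changed value to survive a restart, write it to the configuration as well, for example from an SQL console (a sketch; adjust the 16384 MB figure to your own sizing):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('memorymanager', 'global_allocation_limit') = '16384' WITH RECONFIGURE;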


Thursday, May 22, 2014

HowTo: Using DBMS_STATS to Restore Stats

Scenario: You're about to implement some major changes to the Oracle database that will adjust the database statistics.
Oracle provide the DBMS_STATS package to help administer stats.  The package includes some procedures that can be used to export/import stats, plus restore them to a previous point in time.

When statistics are updated using DBMS_STATS.GATHER_*_STATS, the previous version is saved in the database (the retention can be changed with the DBMS_STATS.ALTER_STATS_HISTORY_RETENTION procedure).  Also, see table DBA_TAB_STATS_HISTORY.

These versions are retained for a specific retention period, which you can check using the GET_STATS_HISTORY_RETENTION procedure:

SQL> set serveroutput on
SQL> DECLARE
v number;
BEGIN
v := DBMS_STATS.GET_STATS_HISTORY_RETENTION;
DBMS_OUTPUT.PUT_LINE('Stats history retention: ' || v || ' days.');
END;
/

Stats history retention x days.

PL/SQL procedure successfully completed.

You can also check the date of the oldest stats history:

SQL> set serveroutput on
SQL> DECLARE
v timestamp;
BEGIN
v := DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY;
DBMS_OUTPUT.PUT_LINE('Oldest stats history: ' || v);
END;
/

Oldest stats history: 15-DEC-13 11.29.32.143688 PM

PL/SQL procedure successfully completed

To restore the statistics you can use one of the relevant procedures:

DBMS_STATS.RESTORE_DICTIONARY_STATS
DBMS_STATS.RESTORE_FIXED_OBJECT_STATS
DBMS_STATS.RESTORE_SCHEMA_STATS
DBMS_STATS.RESTORE_SYSTEM_STATS
DBMS_STATS.RESTORE_TABLE_STATS

See here for parameters:
http://docs.oracle.com/cd/E18283_01/appdev.112/e16760/d_stats.htm#insertedID2
As an example, the RESTORE_SCHEMA_STATS procedure takes the following parameters:

ownname   Schema owner,
as_of_timestamp   Timestamp to restore to,
force   TRUE|FALSE   Restore even if stats are locked, default TRUE,
no_invalidate   TRUE|FALSE   Invalidate cursors, default get_param('NO_INVALIDATE').
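
As a sketch, restoring the statistics of a (hypothetical) SAPSR3 schema back to yesterday might look like this, with the parameter names as per the 11.2 documentation linked above:

SQL> BEGIN
DBMS_STATS.RESTORE_SCHEMA_STATS(
   ownname         => 'SAPSR3',
   as_of_timestamp => SYSTIMESTAMP - INTERVAL '1' DAY,
   force           => FALSE,
   no_invalidate   => FALSE);
END;
/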

If the stats are restored to a specific timestamp, whatever statistics values were applicable to each table at that point in time are applied to the tables.
You can imagine this being like a roll-forward through the DBA_TAB_STATS_HISTORY table, until the timestamp specified.

WARNING: If the table's statistics are not changed then there will be gaps in the history.  In which case, you may not be able to restore previous statistics if the table stats have not changed within the last history window (default 31 days).

Some great examples are here: http://www.morganslibrary.org/reference/pkgs/dbms_stats.html

You should also note that under an SAP system, the Oracle stats gatherer is called by BR*Connect, and that it calls the GATHER_TABLE_STATS procedure for each table that has stats collection enabled in table DBSTATC.
If the table is not enabled to collect stats, then it may have stats delivered by SAP (see SAP note 1020260), in which case, there may not be any history.

Also see my blog post on SAP statistics and DBSTATC.

Tuesday, May 20, 2014

SAP HANA - Migrate Statistics Server 1917938

Since SAP note "1917938 - Migration of statistics server with upgrade to SPS 7" seems to be going missing rather a lot, I've noted the content here for reference based on v10 05-05-2014.

If you do not monitor or administrate the upgraded SAP HANA database with the DBA Cockpit or Solution Manager, you can activate the new statistics server. If the DBA Cockpit or Solution Manager is active, you are only allowed to activate the new statistics server if you observe SAP Note 1925684.

A configuration change is required to activate the new statistics server:

nameserver.ini -> [statisticsserver]->active=true
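
If you prefer to make the change from an SQL console rather than editing the file directly, the equivalent statement should be (system layer assumed):

ALTER SYSTEM ALTER CONFIGURATION ('nameserver.ini', 'SYSTEM') SET ('statisticsserver', 'active') = 'true' WITH RECONFIGURE;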

The data held in the persistence of the statistics server is now transferred to the persistence of the master index server. At the end of the migration, the statistics server is automatically stopped and removed from the database configuration (topology). The functions of the statistics server are distributed to other services.

The migration of the statistics server is carried out without interrupting the backup history, which means that data and log backups created before the migration can still be used to restore the SAP HANA database.

The HANA instance must not be restarted during the migration.
The migration is completed when no statisticsserver process is running in the HANA instance.
It is not necessary to restart the HANA instance following the migration.

Thursday, May 15, 2014

RMAN 10.2 Block Corruption Checking - Physical, Logical or Both

It's an old topic, so I won't dwell on the actual requirements or the process.

However, what I was not certain about, was whether RMAN in 10.2 (10gR2) would perform both physical *and* logical corruption checking if you use the command:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

I kept finding various documents with wording like that found here: http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmbackp.htm#i1006353
"For example, you can validate that all database files and archived redo logs can be backed up by running a command as follows:
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;

This form of the command would check for physical corruption. To check for logical corruption,
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;"


It took a while, but I found the original document from Oracle here: http://docs.oracle.com/cd/B19306_01/backup.102/b14191/rcmconc1.htm#i1008614

Right at the bottom, it confirms that ordinarily "BACKUP VALIDATE DATABASE;" would check for physical corruption.
The additional keywords "CHECK LOGICAL" will check for logical corruption *in addition* to physical corruption.

So RMAN doesn't need running twice with each validate command combination.

Thursday, May 08, 2014

HowTo: Check SAP SUM Version Without Executing

Scenario: You have extracted, or previously used and subsequently found, a SUM directory on your SAP system.
This is production and you don't want to start SUM on this system.
You want to know what version of SUM it is.

You can check the summanifest.mf file for the SUM version without needing to start SUM:

> cd <SUM PATH>\SUM

> more summanifest.mf
Manifest-Version: 1.0
keyname: SUM
keyvendor: sap.com
keylocation: SAP AG
os: NTAMD64
compilation mode: UNICODE
compiled for: 64 BIT
release: 1.0
support package: 7
patch number: 2

native branch: lmt_008
java branch: lmtj_008_REL
assembly time: 2013-05-13 05:10:16
pack version: 22
pack tool version: 1.042





You are interested in the lines:
"release", "support package" and "patch number".

The example above is therefore SUM 1.0 SP07 patch 2.
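
If you only want those three lines, a quick filter does the job (findstr shown because the example above is a Windows kernel; grep -E works the same way on Unix):

> findstr /B /C:"release" /C:"support package" /C:"patch number" summanifest.mf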

Thursday, May 01, 2014

SQL Server Shrink Transaction Log Script

Below is a script that shrinks SQL Server transaction logs tested on SQL Server 2008R2.
Before running the script, you should ensure that you take a transaction log backup of your databases (which obviously means you should have already taken a full backup).  The transaction log backup will free the virtual logs from within the transaction log file.

The script simply tries to shrink the transaction log file by 25% for any databases that are not called "MASTER", "MODEL", "MSDB" or "TEMPDB".

If you wish to shrink logs by more than 25%, either change the script, or run it multiple times until it can't shrink the logs any further.

NOTE: When executing the script in SQL Management Studio, you should set the results output to "Text", so that you can see the output of the script.



USE master
GO

DECLARE @database_id    int,
        @database_name  nvarchar(128),
        @file_id        int,
        @file_name      nvarchar(128),
        @size_mb        int,
        @new_size       int,
        @sql            nvarchar(max);

DECLARE @dbcc_output    char(5);
DECLARE Cur_LogFiles CURSOR LOCAL FOR
  SELECT database_id,
         UPPER(DB_NAME(database_id)) database_name,
         file_id,
         name file_name,
         (size*8)/1024 size_mb
   FROM  master.sys.master_files
   WHERE type_desc = 'LOG'
     AND state_desc = 'ONLINE'
     AND UPPER(DB_NAME(database_id)) NOT IN ('MASTER','TEMPDB','MSDB','MODEL')
   ORDER BY size_mb DESC;

BEGIN
    OPEN Cur_LogFiles
    FETCH NEXT FROM Cur_LogFiles
       INTO @database_id, @database_name, @file_id, @file_name, @size_mb
    WHILE @@FETCH_STATUS = 0
    BEGIN
      -- Determine 25% of our current logfile size.
      SET @new_size = @size_mb*0.75;
      -- Set context to the database that owns the file and shrink the file with DBCC.
      PRINT 'Database: ' + @database_name + ', file: ' + @file_name + ', size: ' + CONVERT(varchar,@size_mb,25) + 'mb';
      PRINT 'Attempting to shrink file: ' + @file_name + ' by 25% to: '+ CONVERT(varchar,@new_size,25) + 'mb';

      -- Build the command as a string first (an int can't be concatenated straight into the EXEC string).
      SET @sql = 'USE [' + @database_name + ']; DBCC SHRINKFILE ([' + @file_name + '],' + CONVERT(varchar(20), @new_size) + ');';
      EXEC (@sql);

      FETCH NEXT FROM Cur_LogFiles
         INTO @database_id, @database_name, @file_id, @file_name, @size_mb
    END;

    CLOSE Cur_LogFiles
    DEALLOCATE Cur_LogFiles

END;







Tuesday, April 29, 2014

What are you buying when you purchase an SAP HANA appliance?

I get asked this question from friends who work in IT:
"What are you buying when you purchase an SAP HANA appliance?"

My answer is:  "Time".

Thursday, April 24, 2014

HowTo: SAP Kernel Patch History

Scenario: Your SAP system Kernel has been patched.
You would like to see the patch history of the system and you are running on Windows and SQL Server.

You can view the patch history for a DLL or EXEcutable (such as disp+work) by querying a table in the SQL Server database as follows (changing the <SID>):

SQL>  select * from <SID>.MSSDWDLLS
    where DLLNAME='disp+work.EXE'
order by DLLNAME, HOSTNAME, DLLPATH, LASTDATE, LASTTIME;


The results will provide a complete traceable history of the system including the previous identity of the SAP system, the different application instances and any inconsistencies in the DLL versions.

SAP Kernel Patch History SQL Server



Tuesday, April 22, 2014

SAP HANA - SSL Security Essential

The HeartBleed hack exposed the consequences of security holes in software designed to provide encryption of network traffic.
However, this doesn't mean that all encryption software has holes and it's certainly better to have some form of encryption than none at all.

I've watched numerous online demos and official training videos, and worked on real-life HANA instances.  None of these systems so far has had SSL (now called TLS) enabled between the HANA Studio and the SAP Host Agent, or between the HANA Studio and the HANA database.
This means that specific communication between the HANA Studio, the SAP Host Agent and the HANA database indexserver, is not encrypted.

The HTTP protocol has been around for a long time now (thanks Tim).
It is inherently insecure when using HTTP BASIC authentication, since the username and password passed over HTTP to a server that has requested authentication are sent in the clear (unencrypted), merely encoded in BASE64.
The BASIC authentication is used to authenticate the HANA Studio with the SAP Host Agent.

What does this mean with regards to SAP HANA and the SAP HANA Studio?
Well, it means that any user with a network packet sniffer (such as Wireshark) could intercept one vital password, that of the <sid>adm SUSE Linux user.

In a SAP HANA system, the software is installed and owned by the <sid>adm Linux user.  Usually <sid> is a unique identifier for each HANA system in a SAP landscape.  As an example, H10 or HAN or any other 3 alphanumeric combination (within certain SAP restrictions) can be used.
When the HANA Studio is used to control the HANA database instance (start up and shutdown), the HANA Studio user is prompted to enter the username and password for the <sid>adm user.
This username and password is then sent via HTTP to the SAP Host Agent installed on the HANA server.  The SAP Host Agent uses the username and password to start or stop the HANA database instance.
If the password for the <sid>adm user is obtained, it is possible for a malicious user to establish an SSH connection directly to the SUSE Linux server where the HANA instance is installed, then control the instance, or access the database directly using a command line interface for executing SQL statements.

Here's a 6-step example which took me 10 minutes to set up, trace, collect the data and then log into the Linux server as an authorised user.

Step 1, Install and open Wireshark (on your PC) and start tracing for TCP connections to the HANA server on the Host Agent TCP port 5<xx>13.
Step 2, Launch HANA Studio (on your PC) and in the navigator right click and choose "Log On":

HANA  Logon without SSL

Step 3, If you haven't elected to save the username and password during previous use of the HANA Studio, you will be prompted.  Otherwise, the system will auto-logon to the Host Agent.
Step 4, Analyse the Wireshark capture.  You're looking for the text "Authorization: Basic" in the TCP packets:

HANA Logon Wireshark trace

The actual string will look something like: 
" Authorization: Basic aDEwYWRtOmhhbmFoYW5h "
I've copied an example HTTP POST out to a text editor for easy viewing:

HANA SAPControl HTTP POST

POST /SAPControl HTTP/1.1
Accept: text/xml, text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Authorization: Basic aDEwYWRtOmhhbmFoYW5h
Content-Type: text/xml; charset=utf-8
Cache-Control: no-cache
Pragma: no-cache
User-Agent: Java/1.7.0_45
Host: hana01.fqdn.corp:51013
Connection: keep-alive
Content-Length: 248

Step 5, Decode the username and password in the BASIC authentication string using a base64 decoder.  It's possible to use an online one:

HANA SAPControl HTTP POST BASE64 decoder

The output includes the username and password in the following format:
USERNAME:PASSWORD
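
There's no need for an online tool if you have a Linux machine handy; the standard base64 utility performs the same decode (shown here with the example string captured above):

> echo 'aDEwYWRtOmhhbmFoYW5h' | base64 -d
h10adm:hanahana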

Step 6, With our new found details, log onto the HANA server using an SSH terminal:

HANA Server Logon

From this point onward it's possible to access any data in the HANA database using command line tools.

SUMMARY:
You MUST enable SSL (TLS) encryption of the HTTP communications between the HANA Studio and the SAP Host Agent.  Without this, you might as well put the password on a post-it note on your screen.
See http://service.sap.com/sap/support/notes/1718944

Another option would be to segregate the HANA Studio users on their own vLAN, or to firewall the SAP HANA Host Agent and HANA database indexserver ports, tying them to specific user PCs only.
Incidentally, the password for the SYSTEM user of the HANA database is hashed with SHA256.  The hashed string is then compared with the stored password hash in the HANA database in order to authenticate a user.
However, if you have not enabled SSL between the HANA Studio and the HANA database indexserver, then all of the data retrieved from the database is sent in the clear.  You don't need to authenticate to the database if you can just read the network packets.  This is true of most database connections.

Thursday, April 17, 2014

Solution Manager 7.01 MOPZ Stuck Calculating Selection

I had an issue with Solution Manager 7.01 SP24 where I had created a maintenance transaction for an SEM system (with a sidecar Java stack) and it got stuck in the "calculating" step when in the "Selection" stage.
It would just sit on the screen with the blue circular logo spinning and nothing happening.  It did not time out, and when I left it for a day it was still not progressing.

So, I opened another one, and it got stuck at the same point:

Solution Manager MOPz Calculating Stuck

I had made a change to the Java stack technical system in SMSY to indicate that the landscape pattern was "SIDECAR" as instructed by the SAP documentation, but this just didn't seem to be working for me.

So I removed the "SIDECAR" definition and now want to cancel the two transactions:

MOPz transactions


Following SAP note 1296589, I opened transaction "/TMWFLOW/MAINTENANCE" and entered in the two "open"  transaction IDs and clicked Execute:

/TMFLOW/MAINTENANCE report


/TMFLOW/MAINTENANCE report


The SAP note goes on to say:  "If any MOPZ planning procedure is displayed in the search result with User Status other than "New", then it's the locking planning procedure.".
So we can see that we have both transactions locking the planning procedure.  Woops!

Maintain the table TSOCM_COND_MAPP using SM30 (use a user other than DDIC for this!):

Table TSOCM_COND_MAPP


Find the line entry "SLMO  SLMO0001  E0002  40  SYSTEM_ASSIGNMENT...":

Table TSOCM_COND_MAPP entries for SLMO


Change the column "MT" from "Cancel" to "Warning":

Table TSOCM_COND_MAPP entries for SLMO


Save your change.  You will need to save the change to a transport request.


I then re-opened the maintenance transaction from SOLUTION_MANAGER and unfortunately it was still stuck on "Calculating...".
So, the next step was to try and remove the two transactions.
The SAP notes and SCN both suggested using report CRM_ORDER_DELETE.
From SE38 I ran the report and entered the first transaction ID number (from the maintenance optimizer screen) and "Business Transaction Type" of SLMO:

Deleting SLMO entries


Deleting SLMO entries


I then went back into the Maintenance Optimizer and clicked Refresh.


It's gone!  Only one to go:

MOPz Transactions


After removing both old transactions, I went and re-modified the landscape pattern to un-link the Java stack from the ABAP stack (non-SIDECAR).

I then reset the change to the TSOCM_COND_MAPP table and saved it.
I was then able to create a new maintenance transaction and successfully calculate the stack.


Summary:
The SIDECAR landscape pattern in Solution Manager 7.01 SP24 doesn't seem to work as it should and causes issues with the Maintenance Optimizer.  For the time being, it might be easier to try and maintain the ABAP and Java stacks independently.