SAP HANA Savepoint: An Introduction

This blog is for beginners in HANA savepoints. If you already know how they work, feel free to skip this blog and read the one mentioned as Advanced HANA Savepoint. I have used a reference from Klaus in this blog, and you can check his blog in the link.

We all know that the unique selling point of HANA is that it is an in-memory database, which means that data is stored and processed in RAM. The first thing that popped into my mind after hearing this was: if the data is stored in RAM, what happens when you turn off the system? As RAM is volatile in nature, how is persistency maintained?

This is where SAVEPOINTS come into action. Savepoints synchronize the changes in memory with the data on disk. A savepoint is a periodic point in time at which all changed data is written to storage in the form of pages, i.e. the modified data is flushed from memory to the data volumes.

In layman's terms: how is the data saved from RAM to disk, which is non-volatile storage? How does HANA as a database provide the D (durability) of the ACID properties? The answer to all of this is the savepoint (together with redo logging).

All modified pages of the row and column store are written to disk during a savepoint. A page can be thought of as the block of data that is transferred from memory to disk.

Points to note about HANA savepoints:

  1. Each SAP HANA host and service has its own savepoint.

  2. The data that belongs to a savepoint represents a consistent state of the data on disk.

  3. No changes are made to this savepoint data until the next savepoint operation has completed, i.e. the previous consistent state is preserved until the next savepoint is written.

When are savepoints triggered?

  1. Savepoint interval (automatic): during normal operation, savepoints are triggered automatically after a specific time interval. This interval is controlled by the parameter [persistence] -> savepoint_interval_s in global.ini.

The default value is 300 seconds, so savepoints are written at an interval of 300 seconds, i.e. every 5 minutes.
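For reference, the effective value can be checked and changed with SQL rather than editing global.ini directly. This is a minimal sketch to be run with an administration user; the value 300 below simply sets the default explicitly:

```sql
-- Check the currently effective savepoint interval ([persistence] in global.ini)
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM M_INIFILE_CONTENTS
 WHERE SECTION = 'persistence'
   AND KEY = 'savepoint_interval_s';

-- Change it system-wide (here: explicitly back to the 300-second default)
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('persistence', 'savepoint_interval_s') = '300' WITH RECONFIGURE;
```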

  2. Manual trigger: we can trigger a savepoint manually with the statement ALTER SYSTEM SAVEPOINT

  3. Soft shutdown

A soft shutdown triggers a savepoint, which is why you get a quick restart after a soft shutdown: the database starts from a consistent state and the log segments do not need to be replayed. This is not the case after a hard shutdown, where the logs have to be replayed from the last savepoint position (not from the beginning, but replay still takes time).

  4. Backup

A global savepoint is performed before a data backup is started, and a savepoint is written after the backup of a service has finished.

  5. Startup

After a consistent database state has been reached during startup, a savepoint is performed.

  6. Reclaim data volume

  7. Auto merge function (mergedog)

  8. Snapshots

A savepoint normally overwrites the older savepoint, but it is possible to freeze a savepoint; this is known as a snapshot. Snapshots are savepoints that are preserved for longer use, and so they are not overwritten by the next savepoint.
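A snapshot can be created and released with SQL. A minimal sketch, assuming an administration user; the comment text is arbitrary and 12345 is a hypothetical placeholder for the real BACKUP_ID returned by the prepare step (visible e.g. in M_BACKUP_CATALOG):

```sql
-- Freeze the current savepoint as a snapshot
BACKUP DATA CREATE SNAPSHOT COMMENT 'frozen savepoint for storage snapshot';

-- Later, release the snapshot again so subsequent savepoints can reclaim it
-- (replace 12345 with the real BACKUP_ID of the prepared snapshot)
BACKUP DATA CLOSE SNAPSHOT BACKUP_ID 12345 SUCCESSFUL 'external-snapshot-done';
```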

The HANA savepoint is split into three individual phases:

Phase 1 (PAGEFLUSH): All changed pages that have not yet been written to disk are determined. The savepoint coordinator triggers the writing of all these pages and waits until the I/O operations are complete. Write transactions are still allowed in this phase.
Phase 2 (BLOCKING):

The majority of the savepoint is performed online without holding a lock, but the finalization of the savepoint requires a lock. (Allow me to add that a savepoint interval below the 5-minute default causes no immediate issue, but locks must be taken at every savepoint; in real-life scenarios we have seen problems with a 3-minute interval.) This step is called the blocking phase of the savepoint. It consists of two sub-phases:

WaitForLock: This is the time spent waiting to acquire all the required locks. Before the critical phase is entered, a ConsistentChangeLock needs to be allocated by the savepoint. If this lock is held by other threads or transactions, the duration of this sub-phase increases. At the same time, DML operations on the underlying tables such as INSERT, UPDATE or DELETE are blocked by the savepoint via the ConsistentChangeLock.

Critical: This is the one short moment when the database is in a sort of hung state; no changes are allowed in this phase, and the savepoint is finalized here. Once the ConsistentChangeLock is acquired, the actual critical phase is entered and the remaining I/O writes are performed in order to guarantee a consistent set of data on disk level. During this time other transactions are not allowed to perform changes on the underlying tables and are blocked by the ConsistentChangeLock.

Phase 3 (POSTCRITICAL): Changes are allowed in this phase again. The savepoint coordinator waits until all asynchronous I/O operations related to the savepoint are finished and marks the savepoint as completed.
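The phases above can be observed in the monitoring view M_SAVEPOINTS. A simple sketch listing the most recent savepoints together with their blocking-phase timings:

```sql
-- Recent savepoints with overall and critical-phase timings
SELECT TOP 20
       HOST, PORT, START_TIME, DURATION,
       CRITICAL_PHASE_WAIT_TIME,   -- phase 2: waiting for the ConsistentChangeLock
       CRITICAL_PHASE_DURATION     -- phase 2: time spent inside the critical phase
  FROM M_SAVEPOINTS
 ORDER BY START_TIME DESC;
```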

Helpful views when we talk about savepoints

M_SAVEPOINT_STATISTICS: global savepoint information per host and service

M_SAVEPOINTS: detailed information for individual savepoints




As of SAP HANA SPS 10 savepoint details are logged for THREAD_TYPE = ‘PeriodicSavepoint’ (see SAP Note 2114710).

Helpful SQL scripts when we talk about savepoints

SAP Note 1969700 – SQL statement collection for SAP HANA contains the self-explanatory scripts SQL: “HANA_IO_Savepoints“ (for savepoints) and SQL: “HANA_IO_Snapshots” (for snapshots).

Known issues with savepoints

Long WaitForLock phase

Long durations of the blocking phase (outside of the critical phase) are typically caused by SAP HANA internal lock contention. Several known scenarios exist.
Starting with Rev. 102 you can configure the following parameter in order to trigger a runtime dump (SAP Note 2400007) in case waiting for entering the critical phase takes longer than <seconds> seconds:

indexserver.ini -> [persistence] -> runtimedump_for_blocked_savepoint_timeout = ‘<seconds>’

(This is not a default parameter; you have to add it manually.)
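The parameter can also be added via SQL instead of editing indexserver.ini by hand. A sketch using a 300-second threshold, which is an example value, not a recommendation:

```sql
-- Trigger a runtime dump when entering the critical phase is blocked
-- for more than 300 seconds (threshold chosen as an example)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('persistence', 'runtimedump_for_blocked_savepoint_timeout') = '300'
  WITH RECONFIGURE;
```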

Long critical phase

Delays during the critical phase are often caused by problems in the disk I/O area.

Analyzing the runtime dumps

All the dumps are created in the trace directory; a quick way to get there is the alias cdtrace. Look for the file indexserver_<hostname>.30003.rtedump.<timestamp>.savepoint_blocked.trc, which is written when the parameter runtimedump_for_blocked_savepoint_timeout triggers.

In the dump we can find the savepoint thread: its call stack contains “DataAccess::SavepointLock::lockExclusive”, i.e. the savepoint needs to acquire the lock exclusively.

Other threads (e.g. SQL threads) waiting for the lock show “DataAccess::SavepointSPI::lockSavepoint” in their call stacks.

The owners of the shared savepoint lock are listed in the runtime dump section [SAVEPOINT_SHAREDLOCK_OWNERS].

Most of the time the savepoint hangs because the exclusive lock is occupied by another thread, i.e. the savepoint lock is held in shared mode; this can be resolved with the help of SAP Note 2100009.

When you check the owners of the shared savepoint locks, you get the thread that holds the lock. Once you have the thread ID of the shared-lock owner, you can search for that thread ID and try to find its parent thread. When you have found the parent, resolve the issue with that thread first; the parent process will then eventually release the lock.
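To follow the chain from the shared-lock owner to its parent, the thread overview view can help. A sketch against M_SERVICE_THREADS, where 4711 is a hypothetical thread ID taken from the [SAVEPOINT_SHAREDLOCK_OWNERS] section, assuming your revision exposes the CALLER/CALLING columns:

```sql
-- Inspect the shared-lock owner: what it is doing and who it is called by
SELECT HOST, PORT, THREAD_ID, THREAD_TYPE, THREAD_STATE,
       THREAD_METHOD, THREAD_DETAIL, CALLER, CALLING
  FROM M_SERVICE_THREADS
 WHERE THREAD_ID = 4711;
```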

Runtime dump section [STATISTICS], M_SAVEPOINTS

We check two values here:

CRITICAL_PHASE_WAIT_TIME: a large value here means it took long to acquire the ConsistentChangeLock, which points to an issue with the savepoint's exclusive lock, i.e. lock contention.

CRITICAL_PHASE_DURATION: a large value here means there is an issue with the I/O.
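Both indicators can be checked together directly in M_SAVEPOINTS, outside of a runtime dump. A sketch that surfaces the worst offenders so you can see at a glance whether lock contention or I/O dominates:

```sql
-- Savepoints with the longest lock wait vs. critical-phase duration;
-- a high WAIT_TIME points to lock contention, a high DURATION to slow I/O
SELECT TOP 10
       HOST, START_TIME,
       CRITICAL_PHASE_WAIT_TIME,
       CRITICAL_PHASE_DURATION
  FROM M_SAVEPOINTS
 ORDER BY CRITICAL_PHASE_WAIT_TIME DESC;
```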