
Analyzing Long Running Transactions in SAP: For Freshers

Identify the Long Running Transactions: You can use transaction code ST03 or ST03N to evaluate the overall system performance. These transactions provide an analysis of response times, giving you an overview of the total workload and the performance of the system. STAD allows you to analyze individual transactions, where you can identify the ones that are taking an extended amount of time to execute.

Analyze the Transaction: After identifying the long running transactions, the next step is to analyze them. Use ST12 transaction to get a detailed trace of individual transactions or programs. It provides a detailed analysis of where exactly in the transaction the bottleneck is happening.

Check System Performance: Transaction codes SM50 and SM66 provide an overview of the system's work processes. You can see if any work process is stuck or running for a long time, which may be due to a long running transaction.

Database Performance: The database performance can be checked using transaction code ST04. This gives you an overview of the database's performance, including buffer hit ratio, lock waits, SQL statement analysis, etc. Poor database performance can lead to longer transaction times.

Lock Entries: Long running transactions can sometimes cause table locks, which can affect other transactions. You can check for lock entries using transaction code SM12. If there are old or stuck lock entries, it might be necessary to clear them.

Update Process: The update queues can be monitored using transaction code SM13. If there are many unprocessed update requests, it could indicate a problem. You might need to analyze why updates are not getting processed and may need to restart the update work processes if required.

Work with ABAP Developers or Functional Team: If a custom program or transaction is causing the issue, you may need to work with the ABAP development team or the relevant functional team to optimize the code or transaction. They can use tools such as SQL Trace, Runtime Analysis or the ABAP Debugger to identify inefficient code or database accesses.

Consider System Resources: Check whether the system has sufficient resources such as CPU and memory to handle the load. You can use transaction code ST06 for a basic overview of the system's resource usage. If the system resources are constantly at high utilization, you might need to consider a system upgrade or load balancing.

Archiving Data: If the system has a lot of old data, it might be slowing down transactions. Data archiving can be set up to archive old data, thus improving system performance. You can use transaction code SARA to set up archiving.

Implement Proper Indexing: Check if the database tables used by the transaction are properly indexed. Indexes improve the performance of database accesses. However, creating too many indexes can slow down database write operations. Therefore, this needs to be done in cooperation with your DBA and after carefully analyzing the database accesses of your transaction. 


As a DBA, your scope increases and you need to check the following as well.


SQL Optimization: Investigate the SQL queries running for a long time using SQL trace and other database-specific tools. You may need to optimize these queries for better performance.

Database Statistics: Ensure that database statistics are up-to-date. Accurate statistics are essential for the database optimizer to decide the best execution plan for SQL queries.

Database Parameters: Review and tune database parameters. Each database system has its own set of parameters that can be adjusted for optimal performance.

Database Indexes: Check the usage of indexes in the database. Inefficient or unused indexes can be dropped. New indexes can be created where required, after careful analysis.

Database Blocks and Deadlocks: Monitor the database for any blocks or deadlocks. These can significantly slow down transactions and should be resolved promptly.

Table Partitioning: If certain database tables have grown significantly, consider table partitioning. Partitioning can help improve performance by reducing index size and making it easier for the database to manage data.

Storage Systems: Check the performance of the underlying storage systems. Slow disk I/O can significantly impact database performance.

Network Latency: In a distributed environment, network latency can impact performance. You may need to work with your network team to investigate any network issues.

Database Logs: Regularly review database logs for any signs of problems. This can include things like repeated errors, warnings about deprecated features, or messages about resource shortages.

Backup and Recovery: Ensure that backup and recovery procedures are in place and working as expected. While this may not directly improve performance, it's essential for dealing with any potential data loss situations.

Database Version and Patches: Ensure that the database software is up-to-date with the latest patches. Newer versions often come with performance improvements and bug fixes.


Here are a few commonly used databases and database-specific steps that can be checked further:


SAP HANA:

  • SQL Optimization: Use the PLAN_VISUALIZATION or EXPLAIN PLAN for SQL queries to understand their execution and optimize accordingly.
  • Database Statistics: HANA updates its statistics automatically. To update them manually, use the UPDATE STATISTICS command. --> Before running any such commands, be 100% sure of what you are doing and consult senior consultants.
  • Database Parameters: Review parameters in the global.ini and indexserver.ini files.
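The parameters discussed below can also be reviewed and changed with SQL. A minimal sketch, assuming log_mode sits in the persistence section of global.ini (verify the section, parameter, and value on your own release, and consult senior consultants before changing anything):

        ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
        SET ('persistence', 'log_mode') = 'normal' WITH RECONFIGURE;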
        
  • log_mode:


Tradeoff: Durability vs. Performance.

Synchronous logging ensures transaction durability but may impact the performance of long-running transactions due to the additional overhead of writing logs to disk synchronously. Asynchronous logging improves performance but may introduce the risk of data loss in case of a system failure.


  • log_auto_truncate:


Tradeoff: Data Integrity vs. Performance.

Enabling automatic log truncation improves performance by reducing the time taken for log cleanup tasks. However, it may impact data integrity as it removes logs that are required for point-in-time recovery.

  • logsegment_min_size and logsegment_size:


Tradeoff: Log Switch Frequency vs. Performance.

Setting a smaller logsegment_min_size and logsegment_size may increase the frequency of log switches, impacting long-running transactions. On the other hand, larger log segments reduce the number of switches but may lead to higher memory consumption.

  • timeout:


Tradeoff: Transaction Completeness vs. Resource Utilization.

A shorter timeout helps prevent long-running transactions from tying up system resources for an extended period. However, it may terminate transactions prematurely, leading to incomplete results or data inconsistencies.

  • statement_timeout:


Tradeoff: Query Execution Completeness vs. Resource Utilization.

Setting a shorter statement_timeout ensures that individual SQL statements complete within a defined time frame. However, complex queries or long-running transactions may be terminated before completion, potentially impacting data consistency or result accuracy.

  • memory_allocation_limit:


Tradeoff: Resource Utilization vs. Transaction Execution.

Setting a low memory_allocation_limit limits the amount of memory a transaction can use, preventing excessive resource consumption. However, it may impact the execution of memory-intensive or complex long-running transactions, leading to performance degradation or errors.

  • max_parallel_degree:


Tradeoff: Resource Contention vs. Query Performance.

Higher max_parallel_degree values allow more parallelism in query execution, potentially improving performance. However, it may introduce resource contention, especially if the system lacks sufficient resources or if multiple parallel queries compete for the same resources.

  • max_runtime:


Tradeoff: Query Execution Completeness vs. Resource Utilization.

Setting a shorter max_runtime ensures that individual SQL statements complete within a defined time limit. However, it may terminate long-running queries prematurely, potentially impacting data consistency or result completeness.


  • Database Indexes: In HANA, primary keys automatically create an index. Analyze whether other fields need indexing based on the PLAN_VISUALIZATION of your SQL queries.
  • Table Partitioning: Use the ALTER TABLE command to partition large tables. --> Before running any such commands, be 100% sure of what you are doing and consult senior consultants.
  • Monitoring: Monitor system performance with SAP HANA Studio or Cockpit. Use M_* views for specific monitoring tasks.
         Overall System Health: M_SYSTEM_OVERVIEW - Gives a general overview of the system status including CPU utilization, memory usage, disk usage, system uptime, etc.
         Active Sessions: M_CONNECTIONS - Provides information about all currently active connections to the SAP HANA database.
         CPU Usage: M_HOST_RESOURCE_UTILIZATION - Displays CPU usage statistics.
         Memory Usage: M_MEMORY - Provides detailed information about memory usage in the system. For overall memory usage, you can use M_TOTAL_MEMORY_USED.
         Disk Usage: M_VOLUME_IO_TOTAL_STATISTICS - Gives you an overview of disk usage.
         SQL Performance: M_SQL_PLAN_CACHE - Provides details about SQL statements that are stored in the plan cache. You can use this view to analyze the performance of SQL queries.
         Locks: M_LOCKS - Provides information about locks in the system.
         Alerts: M_ALERTS - Displays information about alerts generated by the system.
         Services: M_SERVICES - Provides information about all services running on all hosts of the system.
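         As an illustrative example of using these views (column names as documented for M_SQL_PLAN_CACHE; verify them on your HANA revision), the following query lists the statements with the highest cumulative runtime:

                SELECT TOP 10 STATEMENT_STRING, EXECUTION_COUNT, TOTAL_EXECUTION_TIME, AVG_EXECUTION_TIME
                FROM M_SQL_PLAN_CACHE
                ORDER BY TOTAL_EXECUTION_TIME DESC;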

  • Long running transactions:

               Execute the following SQL statement to identify long-running transactions (replace <desired_threshold> with your limit in milliseconds):

                 SELECT SESSION_ID, SQL_PLAN_ID, ELAPSED_TIME_MS
                 FROM M_TRANSACTIONS
                 WHERE STATE = 'ACTIVE' AND ELAPSED_TIME_MS > <desired_threshold>;

               Analyze the SQL plan for each such transaction:

                 EXPLAIN PLAN FOR SQL PLAN CACHE ENTRY <sql_plan_id>;
                 SELECT * FROM EXPLAIN_PLAN_TABLE;


DB2:

  • SQL Optimization: Use the db2expln tool or EXPLAIN PLAN for SQL query optimization.

            DB2EXPLN is a command-line tool provided by IBM DB2 that allows you to analyze and understand the access plans generated by the query optimizer for SQL statements.

                    EXPLAIN PLAN FOR <your_sql_statement>;
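            As a hedged sketch (the database name and SQL statement are placeholders; check the option spelling on your DB2 version), db2expln can be invoked from the command line as:

                     db2expln -d <database_name> -statement "SELECT * FROM schema_name.table_name" -terminal

            Note that EXPLAIN PLAN FOR requires the explain tables to exist; they can be created with the EXPLAIN.DDL script shipped under sqllib/misc.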

  • Database Statistics: Use RUNSTATS command to update database statistics.

            RUNSTATS is a DB2 utility that gathers statistical information about database objects, such as tables and indexes.

                   RUNSTATS ON TABLE schema_name.table_name

                   RUNSTATS ON TABLE schema_name.table_name FOR INDEXES ALL
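            The two calls above can also be combined. A hedged sketch that refreshes table, distribution, and index statistics in a single run (the table name is a placeholder):

                   RUNSTATS ON TABLE schema_name.table_name WITH DISTRIBUTION AND DETAILED INDEXES ALL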

  • Database Parameters: Review and tune database parameters using UPDATE DB CFG or UPDATE DBM CFG commands.
    • UPDATE DB CFG FOR <database_name> USING <parameter_name> <new_value>

      •  MAXAPPLS: Specifies the maximum number of applications (connections) allowed to connect to the database simultaneously. A low value may result in contention and blocking during high load, potentially affecting long-running transactions.
      • LOCKTIMEOUT: Determines the maximum time (in seconds) that a transaction will wait for a lock before timing out. A shorter timeout may cause transactions to be terminated prematurely.
      • LOGFILSIZ: Defines the size of each active log file (in 4 KB pages). If the log files are too small, it can lead to frequent log file switches, which can impact transaction performance.

    • UPDATE DBM CFG USING <parameter_name> <new_value>: the following parameters should be checked on priority, as they can directly or indirectly affect long running transactions.

      • MAXAGENTS: Determines the maximum number of agents (database processes) that can be active simultaneously in the DB2 instance. A low value may result in resource contention and potentially impact long-running transactions.
      • HEALTH_MON: Enables or disables the health monitor, which monitors the health of the instance and can detect and terminate long-running or runaway transactions.
      • SHEAPTHRES: Specifies the threshold for sort heap allocation. A low value may cause frequent disk I/O for sorts, impacting transaction performance.
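      Before changing any of these, it helps to record the current values first. A minimal sketch using the standard DB2 CLP commands (the database name is a placeholder):

            db2 "GET DB CFG FOR <database_name>"
            db2 "GET DBM CFG"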

  • Database Indexes: Use the CREATE INDEX command to create indexes on frequently accessed columns. --> This will generally require a wider discussion, and all stakeholders need to be informed.
        Read more about it in a separate blog post.

  • Table Partitioning: Use CREATE TABLE command with PARTITION BY clause to partition large tables.
        Read more about it in a separate blog post.

  • Monitoring: Use the db2pd, LIST APPLICATIONS, or GET SNAPSHOT commands to monitor the database.
            The db2top command provides a real-time interactive interface to monitor various aspects of DB2, including active transactions.
            
            The db2pd command allows you to gather detailed diagnostic information about a running DB2 instance.

                           db2pd -transactions

            MON_GET_ACTIVITY:

            SELECT APPLICATION_HANDLE, AGENT_ID, AGENT_TYPE, SECONDS_ACTIVE
            FROM TABLE(MON_GET_ACTIVITY(NULL, -1)) AS T
            WHERE AGENT_TYPE = 'UOW'
            ORDER BY SECONDS_ACTIVE DESC;

            SELECT AGENT_ID, AGENT_TYPE, SECONDS_ACTIVE
            FROM SYSIBMADM.SNAPAPPL_INFO    
            WHERE AGENT_TYPE = 'UOW'
            ORDER BY SECONDS_ACTIVE DESC;
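            For the other two commands mentioned above, a hedged sketch (the agent ID is taken from the LIST APPLICATIONS output, and snapshot monitor switches must be enabled to see full detail):

                           db2 LIST APPLICATIONS SHOW DETAIL
                           db2 "GET SNAPSHOT FOR APPLICATION AGENTID <agent_id>"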


                

MSSQL:


SQL Optimization using SQL Server Profiler or Execution Plans:


  • SQL Server Profiler: This is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze later.

  • Execution Plans: In SQL Server Management Studio, after writing your SQL query in the query editor, you can click on the "Display Estimated Execution Plan" icon or press Ctrl + L to see the estimated execution plan. For the actual execution plan, you can click on the "Include Actual Execution Plan" icon or press Ctrl + M.


  • Database Statistics using the UPDATE STATISTICS command:


UPDATE STATISTICS table_name;

This command updates the statistics of the specified table or indexed view.
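A hedged sketch with the common variants (the table name is a placeholder; FULLSCAN is slower but more accurate than the default sampled scan):

UPDATE STATISTICS dbo.table_name WITH FULLSCAN;

EXEC sp_updatestats;  -- refreshes statistics for all tables in the current database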


Review and tune database parameters using SQL Server Management Studio (SSMS):

You can use SSMS to access, configure, manage, administer, and develop all components of SQL Server. It includes both script editors and graphical tools. You can review parameters such as memory allocation, max degree of parallelism, etc.
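For example, the max degree of parallelism mentioned above can also be reviewed and changed with sp_configure. A minimal sketch (the value 4 is purely illustrative and should be agreed with your DBA based on CPU count and workload):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;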


Database Indexes using the CREATE INDEX command:


CREATE INDEX index_name ON table_name (column1, column2, ...);


Replace index_name with the name you want to give the index, table_name with the name of the table on which you want to create the index, and column1, column2, etc. with the columns you want to include in the index.
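As a small extension of the syntax above, a nonclustered index can also carry additional columns in its leaf level via INCLUDE, which often helps long-running reporting queries. The table and column names below are purely hypothetical:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalAmount);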


  • Table Partitioning using CREATE PARTITION SCHEME and CREATE PARTITION FUNCTION commands:


CREATE PARTITION FUNCTION partition_function_name (input_parameter_type) AS RANGE LEFT FOR VALUES (value1, value2, ...);


CREATE PARTITION SCHEME partition_scheme_name

AS PARTITION partition_function_name

TO (filegroup1, filegroup2, ...);

Replace partition_function_name with the name for the partition function, input_parameter_type with the data type of the column on which you want to partition the table, value1, value2, etc. with the boundary values for the partitions, partition_scheme_name with the name for the partition scheme, and filegroup1, filegroup2, etc. with the filegroups to hold the partitions.
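A concrete, hedged sketch with illustrative names and yearly date boundaries (ALL TO ([PRIMARY]) simply places every partition on the primary filegroup; in practice you would usually spread partitions over dedicated filegroups):

CREATE PARTITION FUNCTION pf_OrderDate (date)
AS RANGE LEFT FOR VALUES ('2023-12-31', '2024-12-31');

CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderId    int  NOT NULL,
    OrderDate  date NOT NULL,
    CustomerId int  NOT NULL
) ON ps_OrderDate (OrderDate);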


  • Monitoring using SQL Server Management Studio, Dynamic Management Views (DMVs), or Performance Monitor:


SQL Server Management Studio: You can use Activity Monitor in SSMS to monitor current activity on the database.


Dynamic Management Views: DMVs return server state information that can be used to monitor the health of a server instance, diagnose problems, and tune performance. For example, sys.dm_exec_requests can be used to see what requests are currently executing.
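For instance, a hedged sketch that lists requests running longer than one minute together with their SQL text (the threshold is illustrative; total_elapsed_time is in milliseconds):

SELECT r.session_id, r.status, r.command, r.total_elapsed_time, r.blocking_session_id, t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 60000
ORDER BY r.total_elapsed_time DESC;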


Performance Monitor: This is a Microsoft Management Console (MMC) snap-in that provides tools for analyzing system performance. You can monitor SQL Server counters for a comprehensive view of SQL Server performance.

Please replace the placeholders (table_name, column1, index_name, etc.) with the actual values based on your database schema.


