Tracing latches
Latch waits are well instrumented in the Oracle database. We can see how long a process waited for a latch, and we can identify the latch holder. But it’s difficult to observe a problem from the blocker’s perspective: we don’t know how long the blocker held the latch, and we lack the broader context.
I closed this gap with the DTrace script trace_latches.d. The script traces calls to the relevant Oracle C functions. In particular, latches are acquired and released by calling kslgetl and kslfre, respectively [1]. The latch address is passed as the first argument. DTrace prints the latch address and the call stack whenever a latch is requested or released.
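The core idea is simple: remember the timestamp of each kslgetl per latch address and subtract it at the matching kslfre. The following Python sketch illustrates only this pairing logic – it is not the actual trace_latches.d code, and the event tuples and timestamps are hypothetical:

```python
# Illustrative pairing of kslgetl/kslfre events to derive latch hold times.
# Not the real trace_latches.d (which is a D script); timestamps are made up.

events = [
    ("kslgetl", "0X14EE2C028", 1_000_000),   # latch acquired (ns)
    ("kslfre",  "0X14EE2C028", 1_285_000),   # latch released (ns)
]

held_since = {}   # latch address -> acquisition timestamp
hold_times = []   # (address, elapsed microseconds)

for func, addr, ts in events:
    if func == "kslgetl":
        held_since[addr] = ts
    elif func == "kslfre" and addr in held_since:
        hold_times.append((addr, (ts - held_since.pop(addr)) // 1000))

print(hold_times)  # [('0X14EE2C028', 285)]
```

The same bookkeeping in D uses a thread-local variable set on the kslgetl probe and read on the kslfre probe.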
For instance, I traced a process that executed a simple insert followed by a commit:
dtrace -s trace_latches.d -p 8451 > latches.trc
@whoami
SADDR SPID SID SERIAL# PID
---------------- ------------------------ ---------- ---------- ----------
PROCESS
------------------------
00000000C859B3A8 8451 5214 50407 13
7820
insert into t values (1);
commit ;
Below is an excerpt from the trace file:
kslgetl: 0X14EE2C028 0 0x0 3925
oracle`kslgetl
oracle`kcrfw_redo_gen_ext+0x39f
oracle`kcb_commit_main+0xc11
oracle`kcb_commit+0x107
oracle`ktiCommitCleanout+0x257
oracle`kticmt+0x64c
oracle`ktucmt+0x21f
oracle`ktcCommitTxn_new+0x402
oracle`ktcCommitTxn+0x5b
oracle`ktdcmt+0x86
oracle`k2lcom+0x99
oracle`k2send+0x514
oracle`xctctl+0x64
oracle`xctCommitTxn+0x305
oracle`kksExecuteCommand+0x4fd
oracle`opiexe+0x339a
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
-----------------
kslfre: 0X14EE2C028 0 0x0 5135376848
oracle`kslfre
oracle`kcrfw_redo_gen_ext+0x1094
oracle`kcb_commit_main+0xc11
oracle`kcb_commit+0x107
oracle`ktiCommitCleanout+0x257
oracle`kticmt+0x64c
oracle`ktucmt+0x21f
oracle`ktcCommitTxn_new+0x402
oracle`ktcCommitTxn+0x5b
oracle`ktdcmt+0x86
oracle`k2lcom+0x99
oracle`k2send+0x514
oracle`xctctl+0x64
oracle`xctCommitTxn+0x305
oracle`kksExecuteCommand+0x4fd
oracle`opiexe+0x339a
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
latch elapsed[us]: 285, on cpu[us]: 280
You have to query the database for the latch name:
select name from v$latch_children where addr = lpad('14EE2C028',16,'0');
NAME
----------------------------------------------------------------
redo copy
This is tedious, as a process typically allocates many latches. I wrote a Perl program to eliminate that toil. It enriches the trace with the latch name.
First, create the mapping file:
sqlplus -s /nolog << EOD
connect / as sysdba
set pagesize 0
set linesize 200
spool latches.map
select '0X'||ltrim(addr,'0'), name from v\$latch
union
select '0X'||ltrim(addr,'0'), name from v\$latch_children ;
EOD
The mapping file contains the address of each latch. Beware that those addresses change after a database restart.
Then, postprocess the captured trace:
process_latches_trace.pl latches.trc latches.map
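The enrichment step can be sketched in Python (a simplified illustration; the actual parsing in process_latches_trace.pl may differ): for each kslgetl/kslfre line, look up the latch address in the mapping and splice the latch name in after the function name.

```python
# Simplified sketch of the enrichment done by process_latches_trace.pl.
# latch_map would be parsed from latches.map; this entry is an example.

latch_map = {"0X14EE2C028": "redo copy"}

def enrich(line: str, latch_map: dict) -> str:
    """Insert the latch name after kslgetl:/kslfre: if the address is known."""
    parts = line.split()
    if (len(parts) > 1
            and parts[0] in ("kslgetl:", "kslfre:")
            and parts[1] in latch_map):
        parts.insert(1, latch_map[parts[1]])
        return " ".join(parts)
    return line  # stack frames and other lines pass through unchanged

print(enrich("kslgetl: 0X14EE2C028 0 0x0 3925", latch_map))
# kslgetl: redo copy 0X14EE2C028 0 0x0 3925
```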
Now we can see the latch name for each latch operation:
kslgetl: redo copy 0X14EE2C028 0 0x0 3925
oracle`kslgetl
oracle`kcrfw_redo_gen_ext+0x39f
oracle`kcb_commit_main+0xc11
oracle`kcb_commit+0x107
oracle`ktiCommitCleanout+0x257
oracle`kticmt+0x64c
oracle`ktucmt+0x21f
oracle`ktcCommitTxn_new+0x402
oracle`ktcCommitTxn+0x5b
oracle`ktdcmt+0x86
oracle`k2lcom+0x99
oracle`k2send+0x514
oracle`xctctl+0x64
oracle`xctCommitTxn+0x305
oracle`kksExecuteCommand+0x4fd
oracle`opiexe+0x339a
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
-----------------
kslfre: redo copy 0X14EE2C028 0 0x0 5135376848
oracle`kslfre
oracle`kcrfw_redo_gen_ext+0x1094
oracle`kcb_commit_main+0xc11
oracle`kcb_commit+0x107
oracle`ktiCommitCleanout+0x257
oracle`kticmt+0x64c
oracle`ktucmt+0x21f
oracle`ktcCommitTxn_new+0x402
oracle`ktcCommitTxn+0x5b
oracle`ktdcmt+0x86
oracle`k2lcom+0x99
oracle`k2send+0x514
oracle`xctctl+0x64
oracle`xctCommitTxn+0x305
oracle`kksExecuteCommand+0x4fd
oracle`opiexe+0x339a
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
latch elapsed[us]: 285, on cpu[us]: 280
Redo copy latch
Logon trace
I tested the described methodology on a latch contention problem discussed on Twitter. The root blockers were holding redo copy latches during logon while executing “update user$ spare6” – the last logon time update. There were around 50 logons per second.
With trace_latches.d we can measure how long a process holds a latch. But how do we get the process ID of a new process? We can’t query the PID, because the entries in v$session and v$process haven’t been created yet. Instead, we have to pause the dedicated process immediately after forking. I scripted the solution: stop_forked.d stops the execution in the authentication function kpoauth and prints the PID. Then you have all the time in the world to run trace_latches.d.
dtrace -ws stop_forked.d '"DB1"'
DB: DB1, pid: 4254
dtrace -s trace_latches.d -p 4254 > latches.trc
process_latches_trace.pl latches.trc latches.map
A redo copy latch was indeed held for almost 6ms:
...
kslfre: redo copy 0X14EE2FF70 0 0x0 12905805352
oracle`kslfre
oracle`kcrfw_redo_gen_ext+0x1094
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdu_array_flush_retry+0xc6d
oracle`kdu_array_flush1+0x50
oracle`qerupFetch+0x45c
oracle`updaul+0x678
oracle`updThreePhaseExe+0x14e
oracle`updexe+0x1ec
oracle`opiexe+0x30fc
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
latch elapsed[us]: 5976, on cpu[us]: 5953
...
That’s extremely long for a latch – the elapsed time is normally measured in microseconds. The stack shows that the latch was acquired by an update. But how do we know that it is the originally reported update? We can set a conditional breakpoint when the latch is acquired and look at the active sessions. The following query generates the breakpoint command that we can copy into gdb.
set define off
select
'b kslgetl if $rdi >= 0X'||min(addr)||' && $rdi <= 0X' || max(addr)
gdb_breakpoint
from v$latch_children where name='redo copy' ;
GDB_BREAKPOINT
---------------------------------------------------------------------
b kslgetl if $rdi >= 0X000000014EE2B530 && $rdi <= 0X000000014EE37ED8
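The generated condition simply tests whether the first function argument – held in the $rdi register under the x86-64 System V calling convention, i.e. the latch address passed to kslgetl – falls inside the address range of the redo copy child latches. A Python sketch of that check:

```python
# Sketch of the range test the generated gdb condition performs.
# lo/hi are the min/max child latch addresses from v$latch_children.

lo, hi = 0x14EE2B530, 0x14EE37ED8

def hits_breakpoint(rdi: int) -> bool:
    """True if the latch address in $rdi belongs to a redo copy child latch."""
    return lo <= rdi <= hi

print(hits_breakpoint(0x14EE2C028))  # True: a redo copy child latch
print(hits_breakpoint(0x12345678))   # False: some other address
```

Remember that latch addresses change after a database restart, so the range has to be regenerated.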
@ashtop sql_id "program like 'sql%'" &5min
Total Distinct
Seconds AAS %This SQL_ID FIRST_SEEN LAST_SEEN Execs Seen
--------- ------- ------- ------------- ------------------- ------------------- ----------
72 .2 96% | 9zg9qd9bm4spu 2020-06-01 18:27:25 2020-06-01 18:28:37 1
@awr_sqlid 9zg9qd9bm4spu
DBID SQL_ID SQL_TEXT COMMAND_TYPE CON_DBID CON_ID
---------- ------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------ ---------- ----------
510006313 9zg9qd9bm4spu update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0000-00-00', to_date(NULL), :2) where user#=:1 6 510006313 0
As you can see, it’s the same statement.
Private strands
Frits Hoogland noticed the function kcrfw_redo_gen_ext on the stack and concluded that private strands weren’t used. Since private strands aren’t used, a redo copy latch has to be acquired before copying data into the redo buffer. Interestingly, private strands are used if we run the same statement in sqlplus. In other words, it is only during logon that private strands aren’t used.
Inserted on October 12, 2020: Block changes are deferred until commit when private strands are used. The Oracle C function kcrfw_redo_gen_ext is then called during commit to perform the block changes. A kcrfw_redo_gen_ext call during DML therefore indicates that private strands were bypassed. [3]
kcbklbc
Almost the entire latch hold time is spent in kcbklbc. We can prove this by extending trace_latches.d as follows:
pid$target:oracle:kcbklbc:entry
{
self->start_kcbklbc = timestamp ;
}
pid$target:oracle:kcbklbc:return
/ self->start_kcbklbc /
{
printf("kcbklbc[us]: %d\n", (timestamp - self->start_kcbklbc)/1000 ) ;
ustack();
self->start_kcbklbc = 0 ;
}
The extension above measures the time spent in kcbklbc:
kcbklbc[us]: 5995
oracle`kcbklbc+0x3a1
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdu_array_flush_retry+0xc6d
oracle`kdu_array_flush1+0x50
oracle`qerupFetch+0x45c
oracle`updaul+0x678
oracle`updThreePhaseExe+0x14e
oracle`updexe+0x1ec
oracle`opiexe+0x30fc
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
oracle`kpuexec+0x1e9a
kslfre: redo copy 0X14EE2E8A8 0 0x0 12899699752
oracle`kslfre
oracle`kcrfw_redo_gen_ext+0x1094
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdu_array_flush_retry+0xc6d
oracle`kdu_array_flush1+0x50
oracle`qerupFetch+0x45c
oracle`updaul+0x678
oracle`updThreePhaseExe+0x14e
oracle`updexe+0x1ec
oracle`opiexe+0x30fc
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
latch elapsed[us]: 6313, on cpu[us]: 6280
The kcbklbc annotation is “kernel cache buffers checkpoint queue manipulation link buffer into ckpt queue” [2].
The function is called whenever a redo copy latch gets allocated. In most cases it runs quickly. For example, it took only 61 microseconds for a commit:
kcbklbc[us]: 61
oracle`kcbklbc+0x3a1
oracle`kcb_commit_main+0xc11
oracle`kcb_commit+0x107
a.out`ktiCommitCleanout+0x257
a.out`kticmt+0x64c
a.out`ktucmt+0x21f
a.out`ktcCommitTxn_new+0x402
a.out`ktcCommitTxn+0x5b
a.out`ktdcmt+0x86
a.out`k2lcom+0x99
a.out`k2send+0x514
a.out`xctctl+0x64
a.out`xctCommitTxn+0x305
a.out`kksExecuteCommand+0x4fd
oracle`opiexe+0x339a
a.out`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
oracle`opiino+0x4a8
kslfre: redo copy 0X14EE2DC00 0 0x0 5135376848
oracle`kslfre
oracle`kcrfw_redo_gen_ext+0x1094
oracle`kcb_commit_main+0xc11
oracle`kcb_commit+0x107
a.out`ktiCommitCleanout+0x257
a.out`kticmt+0x64c
a.out`ktucmt+0x21f
a.out`ktcCommitTxn_new+0x402
a.out`ktcCommitTxn+0x5b
a.out`ktdcmt+0x86
a.out`k2lcom+0x99
a.out`k2send+0x514
a.out`xctctl+0x64
a.out`xctCommitTxn+0x305
a.out`kksExecuteCommand+0x4fd
oracle`opiexe+0x339a
a.out`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
latch elapsed[us]: 192, on cpu[us]: 161
The logon update seems like a special case: the execution branches into a code path that takes much more time. Interestingly, the execution doesn’t take nearly that long after a database restart:
kcbklbc[us]: 14
oracle`kcbklbc+0x3a1
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdu_array_flush_retry+0xc6d
oracle`kdu_array_flush1+0x50
oracle`qerupFetch+0x45c
oracle`updaul+0x678
oracle`updThreePhaseExe+0x14e
oracle`updexe+0x1ec
oracle`opiexe+0x30fc
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
oracle`kpuexec+0x1e9a
But after a couple of minutes the code for special handling gets executed again. I still haven’t figured out what that special handling is or what triggers it.
Summary
I wrote a DTrace script that measures how long each latch is held. The script also provides context by printing the stack when a latch is acquired or released. I’m not aware of a database trace that shows similar information.
update user$ during logon doesn’t use private strands. Consequently, it acquires a redo copy latch and can hold it for several milliseconds. That’s an extremely long time to occupy a latch. Most of that time the process is busy manipulating the checkpoint queues. This can cause serious wait cascades during logon storms. The best protection is proper connection pooling.
Updates
June 8th, 2020
Recursive DML
Jonathan Lewis pointed out that a DML statement executed under SYS on behalf of another user, as part of internal code, generally bypasses private strands. It’s easy to show that the insert into aud$ also copies the data directly into the public redo buffer:
oracle`kcrfw_redo_gen_ext
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdtchg+0x477
oracle`kdtwrp+0xac3
oracle`kdtInsRow+0x389
oracle`qerltcNoKdtBufferedInsRowCBK+0x16d
oracle`qerltcSingleRowLoad+0x137
oracle`qerltcFetch+0x1dc
oracle`insexe+0x33e
oracle`opiexe+0x1d8d
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
kcbklbc
kcbklbc runs long only when executed during the update of user$.
kcbklbc[us]: 44
oracle`kcbklbc+0x3a1
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdtchg+0x477
oracle`kdtwrp+0xac3
oracle`kdtInsRow+0x389
oracle`qerltcNoKdtBufferedInsRowCBK+0x16d
oracle`qerltcSingleRowLoad+0x137
oracle`qerltcFetch+0x1dc
oracle`insexe+0x33e
oracle`opiexe+0x1d8d
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
kcbklbc[us]: 2
oracle`kcbklbc+0x3a1
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktucmt+0xcfe
oracle`ktcCommitTxn_new+0x402
oracle`ktcCommitTxn+0x5b
oracle`audend+0x79
oracle`auddft_internal+0xf6f
oracle`audStatement+0x951
oracle`kpolnb+0x3ef0
oracle`kpoauth+0x419
oracle`opiodr+0x439
oracle`ttcpip+0x5f3
oracle`opitsk+0x9ec
oracle`opiino+0x4a8
oracle`opiodr+0x439
oracle`opidrv+0x33f
oracle`sou2o+0x97
oracle`opimai_real+0x121
oracle`ssthrdmain+0x26b
kcbklbc[us]: 5011
oracle`kcbklbc+0x3a1
oracle`kcbchg1_main+0x1b7f
oracle`kcbchg1+0xe2
oracle`ktuchg2+0x911
oracle`ktbchg2+0x113
oracle`kdu_array_flush_retry+0xc6d
oracle`kdu_array_flush1+0x50
oracle`qerupFetch+0x45c
oracle`updaul+0x678
oracle`updThreePhaseExe+0x14e
oracle`updexe+0x1ec
oracle`opiexe+0x30fc
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0x1089
oracle`kpurcsc+0x6e
oracle`kpuexec+0x1e9a
However, with fast_start_mttr_target=0, kcbklbc always runs fast.
October 12th, 2020
kcrfw_redo_gen_ext
Thanks to Frits Hoogland for the additional explanation on kcrfw_redo_gen_ext.
MTTR Advisor
I investigated the relevant MTTR advisor internals in the subsequent blog post.