The Oracle database is designed to perform well under high concurrent load. With Flashback Data Archive (FBDA), though, you might hit some limitations. In particular, the feature can delay the execution of DDL commands.
I’ll demonstrate this on two empty tables with FBDA configured (tested on 19.10.0.0.210119):
create flashback archive default fba tablespace users retention 1 year ;
create table t1 ( n number ) ;
create table t2 ( n number ) ;
alter table t1 flashback archive ;
alter table t2 flashback archive ;
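As a quick sanity check, the documented view DBA_FLASHBACK_ARCHIVE_TABLES shows which tables are tracked and by which flashback archive:
select table_name, flashback_archive_name, status
from dba_flashback_archive_tables ;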
WARNING: In this demo, I’ll be using a debugger to change memory locations and stop program execution in order to figure out the internal workings of the feature. Such manipulations are unsafe and unsupported, and can corrupt the database. Therefore, never do them outside your private lab environment.
Oracle FBDA C functions
The script fbda.d traces the execution of the FBDA background process:
#pragma D option quiet

/* Timestamp every entry into and return from the FBDA main work function */
pid$target:oracle:ktfaundoar_main:entry,
pid$target:oracle:ktfaundoar_main:return
{
printf("%Y %s %s\n", walltimestamp, probefunc, probename );
}

/* Print a blank line after each invocation to separate them visually */
pid$target:oracle:ktfaundoar_main:return
{
printf("\n");
}

/* Capture the call stack whenever FBDA creates new partitions */
pid$target:oracle:ktfa_partition_fba_tables:entry
{
printf("%Y \n", walltimestamp );
ustack() ;
}
In particular, it focuses on the following two functions: ktfaundoar_main and ktfa_partition_fba_tables. ktfaundoar_main calls other functions when there’s work to do. Among others, it periodically calls ktfa_partition_fba_tables, which creates new partitions on FBDA tables as they grow.
ps -ef | grep fbda | grep DB
1000 7584 13824 0 16:10:15 ? 0:00 ora_fbda_DB
dtrace -s fbda.d -p 7584
2021 Jul 8 18:34:27 ktfaundoar_main entry
2021 Jul 8 18:34:27 ktfaundoar_main return
2021 Jul 8 18:39:27 ktfaundoar_main entry
2021 Jul 8 18:39:27 ktfaundoar_main return
2021 Jul 8 18:44:27 ktfaundoar_main entry
2021 Jul 8 18:44:27 ktfaundoar_main return
As we can see, the FBDA process calls ktfaundoar_main every 5 minutes.
Interprocess communication
Between two consecutive calls, the FBDA process is waiting on a message:
pstack 7584
7584: ora_fbda_DB
00007ff75427741a portfs (5, 4, 7fffbfff8200, 3, 0, 7fffbfff8220)
00000000062a650d sskgpwwait () + dd
00000000062a60f3 skgpwwait () + c3
00000000064a984d ksliwat () + 9ad
00000000064a8a3f kslwaitctx () + af
000000000a7f618d ktfaundoar () + 86d
00000000073e2473 ksbrdp () + 543
0000000006fe0399 opirip () + 2b9
00000000061f0cd0 opidrv () + 160
00000000061f0ae7 sou2o () + 97
00000000061f08b4 opimai_real () + a4
00000000061f040b ssthrdmain () + 26b
00000000061f0179 main () + a9
0000000007868c34 ???????? ()
This waiting is instrumented by the “fbar timer” wait event:
column program format a25
column event format a10
SQL> select program,event,seconds_in_wait from v$session where program like '%FBDA%' ;
PROGRAM EVENT SECONDS_IN_WAIT
------------------------- ---------- ---------------
oracle@svdbp01i (FBDA) fbar timer 112
The “fbar timer” wait event is an idle wait, i.e. not contributing to the total database time. Simply put, the FBDA process isn’t doing any work during this time – it’s just waiting on a message without blocking anyone.
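We can confirm this classification by querying V$EVENT_NAME, which reports the wait class Idle for this event:
select name, wait_class from v$event_name where name = 'fbar timer' ;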
A process performing DDL on a table with FBDA configured will ask the FBDA process to do some work on its behalf, for example:
truncate table t1 ;
The FBDA process immediately exits the waiting state to perform the work – we can see several consecutive calls to ktfaundoar_main:
2021 Jul 8 17:27:45 ktfaundoar_main entry
2021 Jul 8 17:27:46 ktfaundoar_main return
2021 Jul 8 17:27:47 ktfaundoar_main entry
2021 Jul 8 17:27:47 ktfaundoar_main return
2021 Jul 8 17:27:48 ktfaundoar_main entry
2021 Jul 8 17:27:48 ktfaundoar_main return
2021 Jul 8 17:27:49 ktfaundoar_main entry
2021 Jul 8 17:27:49 ktfaundoar_main return
2021 Jul 8 17:27:50 ktfaundoar_main entry
2021 Jul 8 17:27:50 ktfaundoar_main return
2021 Jul 8 17:27:51 ktfaundoar_main entry
2021 Jul 8 17:27:51 ktfaundoar_main return
In addition, I configured SQL trace on FBDA:
select sid,serial# from v$session where program like '%FBDA%' ;
SID SERIAL#
---------- ----------
289 53053
exec dbms_system.set_ev(si=>289, se=>53053, ev=>10046, le=>8, nm=>'');
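The trace can be switched off later by setting the event level back to 0 with the same call:
exec dbms_system.set_ev(si=>289, se=>53053, ev=>10046, le=>0, nm=>'');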
The FBDA process executed some SQL statements, like the following one:
update SYS_FBA_BARRIERSCN set BARRIERSCN=:1, ACTIVESCN=:2, STATUS=:3 where INST_ID=:4
I used the following update.d script to find the Oracle C functions underpinning the update:
#pragma D option quiet

/* Print the user stack whenever an Oracle function with "upd" in its name is entered */
pid$target:oracle:*upd*:entry
{
ustack(200) ;
}
dtrace -s update.d -p 7584
oracle`kokbntupd
oracle`kauupd+0x348
oracle`updrow+0x7e7
oracle`qerupUpdRow+0x369
oracle`qerupRopRowsets+0x111
oracle`qerstRowP+0x587
oracle`kdstf110110100001000km+0xe91
oracle`kdsttgr+0x757
oracle`qertbFetch+0x542
oracle`qerstFetch+0x542
oracle`qerupFetch+0x3e6
oracle`qerstFetch+0x5c0
oracle`updaul+0x676
oracle`updThreePhaseExe+0x14a
oracle`updexe+0x1ec
oracle`opiexe+0x3182
oracle`kpoal8+0xb5a
oracle`opiodr+0x439
oracle`kpoodrc+0x26
oracle`rpiswu2+0x2d6
oracle`kpoodr+0x2bb
oracle`upirtrc+0xf7b
oracle`kpurcsc+0x6e
oracle`kpuexec+0x1e8e
oracle`OCIStmtExecute+0x29
oracle`ktfaubscn+0x506
oracle`ktfaarchiver1+0x10b0
oracle`ktfaarchiver+0x529
oracle`ktfaundoar_main+0x2c8
oracle`ktfaundoar+0x8c5
oracle`ksbrdp+0x543
oracle`opirip+0x2b9
oracle`opidrv+0x160
oracle`sou2o+0x97
oracle`opimai_real+0xa4
oracle`ssthrdmain+0x26b
oracle`main+0xa9
Why is that interesting?
Because if we set a breakpoint on any of those functions, we can simulate a performance problem in the FBDA process and observe how the sessions behave during that time.
Serialization
After attaching the debugger to the FBDA process and setting a breakpoint on the updrow function, I’ll truncate T1:
(gdb) attach 7584
(gdb) b updrow
truncate table t1 ;
The FBDA process hit the breakpoint, and the truncate is hanging for as long as the FBDA process execution is paused.
In this state, I’ll start a truncate on another table, which will also hang:
truncate table t2 ;
Instrumentation gap
Now we can observe the sessions in V$SESSION:
column program format a25
column event format a10
select program, state, event, seconds_in_wait, se.action
from v$session se
where sid != userenv('SID') and
( program like 'sqlplus%' or program like '%FBDA%' ) ;
PROGRAM STATE EVENT SECONDS_IN_WAIT
------------------------- ------------------- ---------- ---------------
ACTION
----------------------------------------------------------------
sqlplus@svdbp01i (TNS V1- WAITING fbar timer 57
V3)
sqlplus@svdbp01i (TNS V1- WAITING fbar timer 451
V3)
oracle@svdbp01i (FBDA) WAITED KNOWN TIME fbar timer 483
Both sessions executing truncate are waiting on the “fbar timer” wait event, i.e. they are waiting on a message from the FBDA process. The FBDA process will send this message after it completes its part.
But notice the ambiguity of the “fbar timer” wait event. On the one hand, the FBDA process is waiting on “fbar timer” when it doesn’t have anything else to do, so it’s an idle event. On the other hand, the user session is in the same wait while waiting on the FBDA process to complete. Since the DDL statement doesn’t complete before the user process receives the message from the FBDA process, the event shouldn’t be classified as “idle” in this particular case.
Consequently, ASH doesn’t capture the user session while it waits on the message from FBDA:
@ashtop session_id,program2,event2 1=1 &5min
Total Distinct
Seconds AAS %This SESSION_ID PROGRAM2 EVENT2 FIRST_SEEN LAST_SEEN Execs Seen
--------- ------- ------- ---------- ---------------------------------------- ------------------------------------------ ------------------- ------------------- ----------
300 1.0 97% | 289 (FBDA) ON CPU 2021-07-08 18:23:44 2021-07-08 18:28:43 1
5 .0 2% | 53 (DBRM) ON CPU 2021-07-08 18:24:16 2021-07-08 18:28:41 1
In other words, the DBA won’t be able to analyze slow truncates retrospectively.
I covered this instrumentation gap in a previous blog post and also provided workarounds.
So far, I’ve demonstrated how a performance problem in the FBDA process can slow down DDL operations. I’ve also shown the blind spot in the Oracle database instrumentation.
FBDA maintenance
With the debugger, I simulated a performance problem on an update statement, which is less likely to happen in the real world. But during the maintenance window, the FBDA process executes much heavier statements, such as split partition on large FBDA tables:
select floor(sum(BYTES/1024/1024)) from DBA_SEGMENTS where SEGMENT_NAME='SYS_FBA_HIST_4925624' and OWNER='OWNER' and PARTITION_NAME='HIGH_PART'
select sum(extents) from DBA_SEGMENTS where SEGMENT_NAME='SYS_FBA_HIST_4925624' and OWNER='OWNER' and PARTITION_NAME='HIGH_PART'
alter table "OWNER".SYS_FBA_HIST_4925624 split partition high_part at (74291633007) into (partition part_74291633007, partition high_part)
This is likely to cause a concurrency problem with DDL statements coming from user processes.
This maintenance code runs once per day at 2 AM. That schedule is inconvenient for experimenting, unless you’re a night owl.
Unfortunately, there isn’t a known way to shorten the interval or adjust the starting time. But I figured out a hack (for lab environments only).
The interval is stored in the memory chunk marked as “ktfa dynamic SG”:
select KSMCHPTR, KSMCHSIZ
from x$ksmsp
where ksmchcom = 'ktfa dynamic SG' ;
KSMCHPTR KSMCHSIZ
---------------- ----------
000000009091FDB8 648
With the debugger, we can search for the value 86400 (86400 seconds = 1 day) in this chunk:
(gdb) x/648d 0x000000009091FDB8
...
0x90920018: 86400 0 0 0
In our case, the interval value is stored at the location 0x90920018:
(gdb) x/d 0x90920018
0x90920018: 86400
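Instead of eyeballing the memory dump, gdb’s find command can locate the value directly; the start address and length come from the x$ksmsp query above:
(gdb) find /w 0x9091FDB8, +648, 86400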
After storing a low (unsigned integer) value at this location, ktfaundoar_main will execute the maintenance on each run, i.e. every 5 minutes, instead of once per day:
(gdb) set *0x90920018=1
(gdb) x/d 0x90920018
0x90920018: 1
2021 Jul 8 18:54:27 ktfaundoar_main entry
2021 Jul 8 18:54:27
oracle`ktfa_partition_fba_tables
oracle`ktfa_traverse_tracked_tables+0x361
oracle`ktfa_do_scheduled_tasks_pdb+0xbd
oracle`ktfa_do_scheduled_tasks+0x24
oracle`ktfaundoar_main+0xa6d
oracle`ktfaundoar+0x8c5
oracle`ksbrdp+0x543
oracle`opirip+0x2b9
oracle`opidrv+0x160
oracle`sou2o+0x97
oracle`opimai_real+0xa4
oracle`ssthrdmain+0x26b
oracle`main+0xa9
a.out`0x7868c34
Now we can set a breakpoint somewhere in the maintenance procedures to emulate a performance problem:
(gdb) b ktfa_partition_fba_tables
Thread 2 hit Breakpoint 1, 0x000000000a87f520 in ktfa_partition_fba_tables ()
After hitting the first breakpoint, we’ll set another breakpoint on the fetch function:
(gdb) b *qertbFetch
Note: breakpoint 2 (disabled) also set at pc 0x67e9980.
Breakpoint 3 at 0x67e9980
(gdb) c
Continuing.
Thread 2 hit Breakpoint 3, 0x00000000067e9980 in qertbFetch ()
Now we can see what the FBDA process is currently executing:
column program format a25
column event format a10
select program, state, event, seconds_in_wait, sql_text
from v$session se, v$sqlarea sq
where sid != userenv('SID') and
program like '%FBDA%' and sq.address = se.sql_address ;
PROGRAM STATE EVENT SECONDS_IN_WAIT
------------------------- ------------------- ---------- ---------------
SQL_TEXT
--------------------------------------------------------------------------------
oracle@svdbp01i (FBDA) WAITED SHORT TIME fbar timer 232
select OWNERNAME, OBJNAME from SYS_FBA_TRACKEDTABLES where OBJ#=52340
It’s currently processing the T1 table:
select object_name from dba_objects where object_id=52340 ;
OBJECT_NAME
--------------------------------------------------------------------------------
T1
Next, I’ll truncate a completely different table, one currently not being processed by the FBDA process. The truncate of T2 will wait on “fbar timer”, even though the FBDA process is currently working on the T1 table:
truncate table t2 ;
set pagesize 100
column program format a25
column event format a10
select program, state, event, seconds_in_wait, se.action
from v$session se
where sid != userenv('SID') and
( program like 'sqlplus%' or program like '%FBDA%' ) ;
PROGRAM STATE EVENT SECONDS_IN_WAIT
------------------------- ------------------- ---------- ---------------
ACTION
----------------------------------------------------------------
oracle@svdbp01i (FBDA) WAITED SHORT TIME fbar timer 379
sqlplus@svdbp01i (TNS V1- WAITING fbar timer 65
V3)
The truncate completed immediately after the FBDA process continued its execution:
(gdb) detach
Detaching from program: /u00/oracle/orabase/product/19.10.0.0.210119_b/bin/oracle, process 7584
[Inferior 1 (process 7584) detached]
Table truncated.
Conclusion
The FBDA background process is in the critical path of every DDL statement executed on a table with configured FBDA. Simply put, FBDA has to execute something on behalf of the user session before the DDL statement completes. The user process sends a message to the FBDA process when the action is required and then waits on the message from the FBDA process, which will arrive only after FBDA completes its part.
While busy serving the current request, the FBDA process cannot accept requests from any other process. In other words, it’s a serialization point for DDLs.
It doesn’t even take two user sessions to suffer from this serialization. It can also happen while the FBDA process is busy with maintenance work on FBDA tables, like splitting partitions.
The serialization problem is not visible in retrospect, because it isn’t properly instrumented. So if you aren’t aware of these undocumented mechanisms, you’ll be clueless when users start complaining about slow truncates, because you won’t find any ASH samples for them.
What exactly is the problem with the instrumentation?
Both the FBDA background process and the user process use wait event “fbar timer” when waiting on each other. While in the case of the FBDA process, this is an idle event, i.e. not contributing to the total database time, in the case of the user process, it should definitely count as the database time, because the statement doesn’t complete until the process receives the message from the FBDA process.
Since in both cases “fbar timer” is treated as an idle event, the sessions waiting on it will neither be captured in ASH nor contribute to the total database time.
You need to build your own instrumentation to overcome this problem.
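A minimal sketch of such do-it-yourself instrumentation: poll V$SESSION from a scheduler job or an external script and log every foreground session stuck on “fbar timer” (the polling frequency and the logging target are up to you):
select sysdate sample_time, sid, program, seconds_in_wait
from v$session
where event = 'fbar timer'
and program not like '%FBDA%' ;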
Hi Nenad,
So how do you recommend overcoming this problem?
The only way we found is to disable FBDA before the DDL and enable it after the DDL completes, but this is not an optimal solution.
Hi Evgenia,
There’s no perfect solution, I’m afraid. I’d move DDLs out of the critical path and execute them at some non-critical time, if that’s possible.
In my case, developers configured FBDA on some volatile tables and were executing truncates before filling them again. After seeing the performance impact, they just switched FBDA off.
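For reference, that switch-off/switch-on sequence looks like this; beware that disabling the flashback archive on a table irrevocably discards the history already collected for it:
alter table t1 no flashback archive ;
truncate table t1 ;
alter table t1 flashback archive fba ;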
In addition to mis-classifying the event as “idle”, the object being waited on is not recorded either. So we cannot diagnose the cause of multi-hour “fbar timer” waits…
Actually, “fbar timer” is an ambivalent wait event. An idle FBDA background process waits in “fbar timer”. But for a foreground process it indicates a concurrency problem and, therefore, shouldn’t be classified as “idle”. It would be better to introduce a separate non-idle wait event for the latter case and enrich it with the information about the blocker. Otherwise, we have a serious instrumentation gap.