Incremental statistics accelerate the gathering of global statistics. Initially, the database creates a synopsis for each partition, which is an expensive operation. But it's a good investment: the database will reuse this information many times to derive global statistics without having to read all the partitions.
There’s a non-obvious pitfall related to histograms – new histograms invalidate synopses. I’ll describe the solution I proposed to our data warehouse application team for our specific case.
The old way
I'll create a table consisting of two partitions:
drop table t ;
create table t ( id1 integer, id2 integer ) partition by range (id1)
(
partition t_id_1 values less than (2) ,
partition t_id_2 values less than (3)
);
I’ll fill one partition and gather statistics:
insert into t select 1, rownum from dual connect by level < 1e6 ;
commit ;
exec dbms_stats.gather_table_stats(null, 'T');
No partitions are stale:
column partition_name format a6
column stale_stats format a5
select partition_name, stale_stats, last_analyzed
from user_tab_statistics
where table_name = 'T' ;
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:51:52
T_ID_1 NO 2020-11-16 19:51:52
T_ID_2 NO 2020-11-16 19:51:52
This changes after inserting a single row into the empty partition:
insert into t values(2,1);
commit;
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:51:52
T_ID_2 YES 2020-11-16 19:51:52
T_ID_1 NO 2020-11-16 19:51:52
Pay attention to LAST_ANALYZED after gathering statistics:
exec dbms_stats.gather_table_stats(null, 'T');
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:52:43
T_ID_1 NO 2020-11-16 19:52:43
T_ID_2 NO 2020-11-16 19:52:43
Oracle recalculated the statistics of the other partition too, even though nothing had changed there. This was necessary because there is no way to derive the new number of distinct values (NDV) for the whole table by looking at just one partition. This was the old, inefficient way of gathering global statistics.
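To see why, compare the column-level NDVs per partition with the global values. This is just a quick sketch against the dictionary views, using the demo table from above:

select 'GLOBAL' scope, column_name, num_distinct
from user_tab_col_statistics
where table_name = 'T'
union all
select partition_name, column_name, num_distinct
from user_part_col_statistics
where table_name = 'T'
order by 1, 2;

The per-partition NDVs of id2 overlap (the value 1 occurs in both partitions), so they cannot simply be added up to get the global NDV. Without synopses, Oracle has no choice but to rescan the partitions.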
Incremental statistics
With incremental statistics, Oracle initially creates a synopsis for each partition [1]. It uses this information for calculating global statistics without having to read every partition.
This feature has to be explicitly enabled:
exec dbms_stats.set_table_prefs(null, 'T', 'INCREMENTAL', 'TRUE');
select dbms_stats.get_prefs(pname => 'INCREMENTAL', tabname => 'T') from dual;
TRUE
exec dbms_stats.gather_table_stats(null, 'T');
Now I’ll insert a row into a small partition.
insert into t values (2,1);
commit ;
It became stale again:
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:53:16
T_ID_2 YES 2020-11-16 19:53:16
T_ID_1 NO 2020-11-16 19:53:16
We can see that only the stale partition was re-analyzed:
exec dbms_stats.gather_table_stats(null, 'T');
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:54:05
T_ID_1 NO 2020-11-16 19:53:16
T_ID_2 NO 2020-11-16 19:54:05
Then it used the synopsis of the other partition to calculate global statistics. Just great!
Histograms
My table doesn’t have any histograms so far:
column column_name format a3
column histogram format a6
select column_name, last_analyzed, histogram
from user_tab_cols
where table_name = 'T' ;
COL LAST_ANALYZED HIST
--- ------------------- ----
ID1 2020-11-16 19:54:05 NONE
ID2 2020-11-16 19:54:05 NONE
I'll execute a query with a filter on id2 to make Oracle create a histogram the next time it gathers statistics:
select * from t where id2 = 1 ;
exec dbms_stats.gather_table_stats(null, 'T');
COL LAST_ANALYZED HISTOG
--- ------------------- ------
ID1 2020-11-16 19:57:03 NONE
ID2 2020-11-16 19:57:03 HYBRID
In doing so, it had to recreate all the synopses so that they contain the additional information from the histograms.
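The synopsis metadata itself can be inspected, although only via an undocumented internal table, so this query is an assumption: it must be run by a privileged user, and the table's structure and column names can differ between releases.

select h.group#, h.intcol#, h.analyzetime
from sys.wri$_optstat_synopsis_head$ h
join dba_objects o on o.object_id = h.bo#
where o.object_name = 'T' and o.object_type = 'TABLE';

After the histogram appears, the ANALYZETIME of the synopses changes, confirming that they were rebuilt.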
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:57:03
T_ID_1 NO 2020-11-16 19:57:02
T_ID_2 NO 2020-11-16 19:57:02
This behavior is understandable, but not immediately obvious.
Depending on how the database is used, Oracle might create additional histograms to support your queries, which in turn invalidate the existing synopses. The duration of the next statistics job can skyrocket in such a case.
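Whether this is happening can be checked in the documented view DBA_OPTSTAT_OPERATIONS, which records the start and end time of each gather operation. A sketch (FETCH FIRST needs 12c or later):

select operation, target, start_time, end_time,
       round((cast(end_time as date) - cast(start_time as date)) * 24 * 60, 1) minutes
from dba_optstat_operations
where operation like 'gather%stats'
order by start_time desc
fetch first 10 rows only;

A sudden jump in the duration of gather_table_stats on an incremental table is a hint that the synopses were rebuilt.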
Use case
Our use case is as follows: A data warehouse loads data during the night and gathers stale statistics several times along the way. The processing waits for the statistics job to finish, so that the execution plans are generated with up-to-date statistics.
With incremental statistics, the elapsed times of the statistics job are fine. But if the synopses have to be recreated, the duration becomes unacceptable. Since users run ad hoc queries that give rise to new histograms, it's impossible to predict how long the statistics job is going to run.
Most tables consist of many monthly partitions, but the nightly batch jobs mostly process data from the current partition only. I analyzed the workload and came to the conclusion that the current partition's statistics matter most for the performance of the night jobs. The current partitions mainly become stale during the first week, because the percentage of inserted rows is largest at the beginning of the month. For this reason, it's essential to get the current partition statistics right.
So I proposed that the application handle statistics as follows:
- Lock the statistics of the large partitioned tables
- The night job should gather only the current partition statistics of these tables if they become stale after the load.
- Stale global statistics of these tables should be gathered outside of the night processing (nobody has to wait for that job to finish, so its duration isn't critical).
The demo follows.
Lock the table:
exec dbms_stats.lock_table_stats(null, 'T');
Cause stale partition statistics:
insert into t values (2,1);
commit ;
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:57:03
T_ID_2 YES 2020-11-16 19:57:02
T_ID_1 NO 2020-11-16 19:57:02
Gathering stale schema statistics bypasses the locked table:
exec dbms_stats.gather_schema_stats( null, options => 'GATHER STALE')
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:57:03
T_ID_2 YES 2020-11-16 19:57:02
T_ID_1 NO 2020-11-16 19:57:02
Gather statistics of the current partition:
begin
dbms_stats.gather_table_stats(
null, 'T', partname => 'T_ID_2', granularity => 'PARTITION', force => true
);
end;
/
PARTIT STALE LAST_ANALYZED
------ ----- -------------------
NO 2020-11-16 19:57:03
T_ID_1 NO 2020-11-16 19:57:02
T_ID_2 NO 2020-11-16 20:10:37
As you can see, only the statistics of the current partition were gathered. The global statistics were updated based on the synopses.
The disadvantage to keep an eye on: global statistics will only be updated, and new histograms only be created, when the global statistics themselves become stale and the separate job gathers them.
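The separate, non-time-critical job for global statistics could then be as simple as this sketch: force => true is needed because the table statistics are locked, and granularity => 'GLOBAL' restricts the work to the global level, which is derived from the synopses.

begin
  dbms_stats.gather_table_stats(
    null, 'T', granularity => 'GLOBAL', force => true
  );
end;
/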
Diagnostics
How can you recognize this problem on a live database?
The following query shows the number of updated partition statistics per day:
select trunc(stats_update_time),count(*)
from dba_tab_stats_history where table_name like 'table'
group by trunc(stats_update_time)
order by trunc(stats_update_time) desc ;
TRUNC(STATS_UPDATE_ COUNT(*)
------------------- ----------
2020-11-15 00:00:00 1
2020-11-14 00:00:00 152
2020-11-13 00:00:00 1
2020-11-12 00:00:00 1
2020-11-11 00:00:00 1
2020-11-10 00:00:00 1
2020-11-08 00:00:00 162
2020-11-07 00:00:00 1
The synopses were recalculated on November 14.
The diff table stats report shows that a histogram on a column group (extended statistics) was created:
SELECT *
FROM table(dbms_stats.diff_table_stats_in_history(
ownname => 'USER',
tabname => 'TABLE',
time1 => localtimestamp,
time2 => localtimestamp-to_dsinterval('5 00:00:15'),
pctthreshold => 0
));
...
SYS_STS#NLAN$17 A 1812882 .000001 YES 0 12 C72E1 CA132 5529
B 1812882 .000000551 NO 0 12 C72E1 CA132 1812882
Summary
In summary, incremental statistics are a great way to gather statistics more efficiently on large partitioned tables. However, certain usage patterns are conducive to the creation of new histograms, which invalidates the existing synopses. As a consequence, they have to be recalculated, which is time consuming. To prevent this, you might consider locking the statistics of such a table and setting up separate jobs for gathering partition and global statistics.
References
[1] Nigel Bayliss, Efficient Statistics Maintenance for Partitioned Tables Using Incremental Statistics – Part 1. December 16, 2016.