We occasionally got the following error message when attempting to start the databases after a hardware change or during a database upgrade:
ORA-00821: Specified value of sga_target 4096M is too small, needs to be at least 4352M
We immediately resolved the issue by increasing sga_target, but couldn’t explain the root cause.
Obviously, the hardware configuration has an impact on the minimum sga_target calculation, and the limit usually changes with a new database release.
Unfortunately, this calculation isn’t documented. And yet it would be extremely useful to anticipate the new limit and avoid unpleasant surprises.
I asked for information on Oracle-l.
Jonathan Lewis said that the calculation depends on the database parameter cpu_count. Sayan Malakshinov mentioned the processes parameter.
I set up an experiment to measure how the minimum sga_target depends on both values.
Experiment
After backing up the spfile, I repeated the following process with different parameter values:
- set sga_target to 100M
- adjust the parameter of interest
- stop the database
- try to start the database (it will fail)
- get the new limit from the error message
- restore the correct spfile
I semi-automated the testing with the code snippets below.
Back up the spfile (only once):
cp spfileDB.ora spfileDB.ora.bak
Change cpu_count (or any other parameter) with every execution:
alter system set cpu_count=192 scope=spfile ;
alter system set sga_target=100M scope=spfile ;
shutdown abort
startup
! cp spfileDB.ora.bak spfileDB.ora
shutdown abort
startup
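The only manual step left is reading the new limit off the screen. If needed, it can also be scraped from the ORA-00821 text in the startup output; the Perl sketch below only assumes the message format shown in this post, with $line standing for one line of that output:
use strict ;
use warnings ;

# One ORA-00821 line as it appears in the startup output.
my $line = 'ORA-00821: Specified value of sga_target 100M is too small, '
         . 'needs to be at least 2576M' ;

# Extract the new minimum (in MB) from the error text.
if ( $line =~ /needs to be at least (\d+)M/ ) {
    print "new minimum sga_target: $1M\n" ;   # prints: new minimum sga_target: 2576M
}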
Model
After reducing cpu_count from 120 to 60, the minimum sga_target approximately halved.
In contrast, the processes parameter had a negligible impact on the requirement.
Since the dependency on cpu_count is linear, and cpu_count is the main contributor, we can approximate all other dependencies with a constant C:
min_sga_target = k * cpu_count + C
After completing experiments with two different cpu_count values, we can find k (the slope) and C (the y-intercept) by solving a system of two linear equations, for example:
cpu_count = 120, min_sga_target = 2576M
cpu_count = 60, min_sga_target = 1348M
k = 20.47M , C = 120M
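The same calculation in Perl, a minimal sketch that uses only the two measurements above:
use strict ;
use warnings ;

# Two measurements (cpu_count, min_sga_target in MB) taken from the ORA-00821 messages.
my ( $x1, $y1 ) = ( 120, 2576 ) ;
my ( $x2, $y2 ) = (  60, 1348 ) ;

my $k = ( $y1 - $y2 ) / ( $x1 - $x2 ) ;   # slope
my $C = $y1 - $k * $x1 ;                  # y intercept

printf "k = %.2fM, C = %.0fM\n", $k, $C ;                       # k = 20.47M, C = 120M
printf "prediction for cpu_count=30: %.0fM\n", $k * 30 + $C ;   # 734M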
We can verify the model by picking a third cpu_count value, calculating the prediction, and comparing it with reality:
cpu_count = 30, predicted min_sga_target = 734M
ORA-00821: Specified value of sga_target 100M is too small, needs to be at least 764M
As we can see, the prediction (734M) comes close to the actual limit (764M), so the calculation is reliable enough to anticipate the new minimum sga_target after adding more CPUs.
Bounds
The extrapolation works for the following cpu_count values:
1 << cpu_count <= #cpu
In other words, cpu_count has to be significantly greater than 1 and less than or equal to the number of CPU threads on the server. Simply put, we’re looking at large servers.
Lower bound
For a low cpu_count the dependency becomes non-linear, as there are some minimum requirements on sga_target. For example, for cpu_count=4:
ORA-00821: Specified value of sga_target 100M is too small, needs to be at least 296M
But that’s irrelevant in practice, because the theoretical minimum sga_target for such a low cpu_count is so small that no real-world database would use it anyway.
Upper bound
The minimum sga_target stops growing once cpu_count is raised above the total number of CPUs on the server.
For example, for cpu_count=192:
ORA-00821: Specified value of sga_target 100M is too small, needs to be at least 2576M
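Putting both bounds together, a prediction helper could look like the sketch below. The function name, the cutoff of 8 CPUs for "too small", and the assumption that the server has 120 CPU threads are my own choices for illustration, not something measured in the experiments:
use strict ;
use warnings ;

# Linear 19c model (values in MB) with the bounds applied.
sub predict_min_sga_target_mb {
    my ( $cpu_count, $cpu_total ) = @_ ;
    my ( $k, $C ) = ( 20.47, 120 ) ;

    # Upper bound: the minimum stops growing above the number of CPU threads.
    $cpu_count = $cpu_total if $cpu_count > $cpu_total ;

    # Lower bound: the linear model underestimates the real limit for very
    # small cpu_count values (e.g. 4), so don't trust the result there.
    warn "cpu_count=$cpu_count is too small for the linear model\n" if $cpu_count < 8 ;

    return $k * $cpu_count + $C ;
}

# cpu_count=192 on a server with 120 CPU threads: capped at 120, roughly 2576M.
printf "%.0fM\n", predict_min_sga_target_mb( 192, 120 ) ;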
Risks
Hardware and OS changes
Adding more CPUs to the server increases the minimum sga_target requirement, which can prevent the restart of databases whose sga_target suddenly falls below the new limit.
Notice that the database sees hyperthreads as CPUs, so adding virtual CPUs or turning on hyperthreading can also lead to the problem.
And if your test server has fewer CPUs than production, such problems might only appear in production.
Oracle upgrades
Here are the slope and y-intercept values for 19.10.0.0.210119 in MB: k = 20.46, C = 120
Let’s compare them with 12.2.0.1.180116: k = 8.47, C = 124
The slope is roughly 2.4 times higher in 19c than in 12c, so on a large server the minimum sga_target more than doubles: with cpu_count = 120, for example, the model gives about 2575M for 19c versus about 1140M for 12c.
Monitoring
After becoming aware of the risk, we started monitoring sga_target with the following Perl code:
sub get_min_sga_target {
    my ($self) = @_ ;
    my $cpu_count = $self->get_spfile_parameter('cpu_count') ;

    # Linear model with rounded-up constants: 21 MB per CPU (k) and a 1 GB
    # offset (C), above the measured 20.47 MB and 120 MB to stay on the safe side.
    return $cpu_count * 21 * 1024 ** 2 + 1024 ** 3 ;
}
use Number::Format qw( format_bytes ) ;

sub exec_check {
    my $db_obj = shift @_ ;

    # Compare the configured sga_target with the estimated minimum.
    my $sga_target     = $db_obj->get_spfile_parameter('sga_target') ;
    my $min_sga_target = $db_obj->get_min_sga_target() ;
    my $return_code    = ( $sga_target < $min_sga_target ) ? 1 : 0 ;

    # Report both values in gigabytes.
    $sga_target     = format_bytes( $sga_target,     unit => 'G' ) ;
    $min_sga_target = format_bytes( $min_sga_target, unit => 'G' ) ;
    my $message
        = "sga_target: $sga_target, minimum: $min_sga_target" ;

    return { return_code => $return_code, message => $message } ;
}
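The check expects a database object with a get_spfile_parameter method; for a quick smoke test we can feed it a mock. MockDb below is purely illustrative and not part of our monitoring code: it hard-codes the spfile values, reuses the same formula as get_min_sga_target, and assumes the exec_check sub and its Number::Format import above are in the same file:
# Hypothetical smoke test with a mock database object (MockDb is made up).
package MockDb ;
sub new {
    my ( $class, %args ) = @_ ;
    return bless { %args }, $class ;
}
sub get_spfile_parameter { my ( $self, $name ) = @_ ; return $self->{$name} }
sub get_min_sga_target {
    my ($self) = @_ ;
    return $self->get_spfile_parameter('cpu_count') * 21 * 1024 ** 2 + 1024 ** 3 ;
}

package main ;
# 120 CPUs with only a 2G sga_target: well below the ~3.5G minimum,
# so the check should return 1 (alarm).
my $db     = MockDb->new( cpu_count => 120, sga_target => 2 * 1024 ** 3 ) ;
my $result = exec_check( $db ) ;
print "$result->{message} => return code $result->{return_code}\n" ;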
Conclusion
In conclusion, the minimum sga_target largely depends on cpu_count. Consequently, a database start might fail after adding CPUs, including virtual CPUs and hyperthreads.
The limit also changes with every database release, so an upgrade can fail for databases with a smaller SGA. Moreover, a database upgrade that you tested on a server with far fewer CPUs might fail in production.
I provided the methodology to anticipate problems and monitor critical conditions.
Comments
Great article and investigation about the SGA. Congrats!
Thank you.
Thank you, Nenad, this really helped me.
I need to make a script that starts a dummy instance for a couple of minutes to convert pluggables, and this approximate solution gives me enough support to continue (of course, I shall set the parameters k and C to higher values, to be on the safe side 🙂).
Slobodan, I’m glad to hear that the information was useful. Thanks for the positive feedback!