author	Tom Lane <tgl@sss.pgh.pa.us>	2011-11-19 00:35:29 -0500
committer	Tom Lane <tgl@sss.pgh.pa.us>	2011-11-19 00:36:59 -0500
commit	fdaff0ba1e7d7c38484f1c6b426230d42fbc63e6 (patch)
tree	c69eb555b8245619ad0901b3ec102cd937f6e0e2 /src
parent	692ca693b90667fbbf25b8e8fc99c120543df114 (diff)
Avoid floating-point underflow while tracking buffer allocation rate.
When the system is idle for a while after activity, the "smoothed_alloc" state variable in BgBufferSync converges slowly to zero. With standard IEEE float arithmetic this results in several iterations with denormalized values, which causes kernel traps and annoying log messages on some poorly-designed platforms. There's no real need to track such small values of smoothed_alloc, so we can prevent the kernel traps by forcing it to zero as soon as it's too small to be interesting for our purposes.

This issue is purely cosmetic, since the iterations don't happen fast enough for the kernel traps to pose any meaningful performance problem, but still it seems worth shutting up the log messages.

The kernel log messages were previously reported by a number of people, but kudos to Greg Matthews for tracking down exactly where they were coming from.
Diffstat (limited to 'src')
-rw-r--r--	src/backend/storage/buffer/bufmgr.c	13
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index 6f7a436fb3c..39e3cb6e267 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -1286,7 +1286,18 @@ BgBufferSync(void)
smoothing_samples;
/* Scale the estimate by a GUC to allow more aggressive tuning. */
- upcoming_alloc_est = smoothed_alloc * bgwriter_lru_multiplier;
+ upcoming_alloc_est = (int) (smoothed_alloc * bgwriter_lru_multiplier);
+
+ /*
+ * If recent_alloc remains at zero for many cycles, smoothed_alloc will
+ * eventually underflow to zero, and the underflows produce annoying
+ * kernel warnings on some platforms. Once upcoming_alloc_est has gone
+ * to zero, there's no point in tracking smaller and smaller values of
+ * smoothed_alloc, so just reset it to exactly zero to avoid this
+ * syndrome. It will pop back up as soon as recent_alloc increases.
+ */
+ if (upcoming_alloc_est == 0)
+ smoothed_alloc = 0;
/*
* Even in cases where there's been little or no buffer allocation