Diffstat (limited to 'doc/src'):
 doc/src/sgml/config.sgml | 36 ++++++++++++++++++------------------
 1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index 0a2a8b49fdb..06d1e4403b5 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -5924,24 +5924,24 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
</para>
<para>
- Random access to mechanical disk storage is normally much more expensive
- than four times sequential access. However, a lower default is used
- (4.0) because the majority of random accesses to disk, such as indexed
- reads, are assumed to be in cache. The default value can be thought of
- as modeling random access as 40 times slower than sequential, while
- expecting 90% of random reads to be cached.
- </para>
-
- <para>
- If you believe a 90% cache rate is an incorrect assumption
- for your workload, you can increase random_page_cost to better
- reflect the true cost of random storage reads. Correspondingly,
- if your data is likely to be completely in cache, such as when
- the database is smaller than the total server memory, decreasing
- random_page_cost can be appropriate. Storage that has a low random
- read cost relative to sequential, e.g., solid-state drives, might
- also be better modeled with a lower value for random_page_cost,
- e.g., <literal>1.1</literal>.
+ Random access to durable storage is normally much more expensive
+ than four times sequential access. However, a lower default is
+ used (4.0) because the majority of random accesses to storage,
+ such as indexed reads, are assumed to be in cache. Also, the
+ latency of network-attached storage tends to reduce the relative
+ overhead of random access.
+ </para>
+
+ <para>
+ If you believe caching is less frequent than the default
+ value reflects, and network latency is minimal, you can increase
+ random_page_cost to better reflect the true cost of random storage
+ reads. Storage that has a higher random read cost relative to
+ sequential, like magnetic disks, might also be better modeled with
+ a higher value for random_page_cost. Correspondingly, if your data
+ is likely to be completely in cache, such as when the database
+ is smaller than the total server memory, or network latency is
+ high, decreasing random_page_cost might be appropriate.
</para>
<tip>
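
The tuning guidance in the new paragraphs above maps onto ordinary
PostgreSQL configuration commands. A minimal sketch, assuming superuser
access; the values shown (1.1 for SSD-backed or fully cached databases,
a higher figure for uncached magnetic disks) are illustrative starting
points, not recommendations made by this patch, and some_table is a
hypothetical table:

    -- Inspect the current planner cost settings.
    SHOW random_page_cost;
    SHOW seq_page_cost;

    -- Storage with cheap random reads (SSDs, or a database small enough
    -- to stay entirely in cache) can justify a value near seq_page_cost:
    ALTER SYSTEM SET random_page_cost = 1.1;
    SELECT pg_reload_conf();

    -- Uncached magnetic disks may warrant a value above the 4.0 default;
    -- 6.0 here is only an illustrative figure:
    -- ALTER SYSTEM SET random_page_cost = 6.0;

    -- A candidate value can also be tried in one session, and its effect
    -- on plan choice checked, before changing it cluster-wide:
    SET random_page_cost = 1.1;
    EXPLAIN SELECT * FROM some_table WHERE id = 42;  -- hypothetical table

Because random_page_cost only steers the planner's choice between
sequential and index scans, comparing EXPLAIN output before and after a
session-level SET is a low-risk way to evaluate a new value.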