Avoid dropping the heap page pin (xs_cbuf) and visibility map pin
(xs_vmbuffer) within heapam_index_fetch_reset. Retaining these pins
saves cycles during certain nested loop joins (which repeatedly rescan
the inner side) and merge joins that frequently restore a saved mark:
when the next tuple fetched after a reset lands on the same heap page as
before, we now avoid the cost of repeatedly unpinning and repinning that
page.
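
For illustration, here is a minimal sketch (hypothetical helper, not
part of this patch) of the buffer reuse that makes retaining the pin
worthwhile.  ReleaseAndReadBuffer is the existing bufmgr routine used on
the fetch path; when the passed-in buffer is still pinned and already
covers the target block, it returns that same buffer without touching
its pin count:

    #include "postgres.h"
    #include "storage/bufmgr.h"

    /*
     * Hypothetical helper: make buf cover the heap page containing
     * blkno, reusing the existing pin whenever possible.
     */
    static Buffer
    switch_to_heap_page(Relation rel, Buffer buf, BlockNumber blkno)
    {
        /*
         * No-op when buf is valid and already covers blkno (the common
         * case in the joins described above); otherwise releases the
         * old pin and reads/pins the new page.
         */
        return ReleaseAndReadBuffer(buf, rel, blkno);
    }

With heapam_index_fetch_reset no longer dropping xs_cbuf, a fetch that
follows a reset and lands on the same heap page hits the no-op case.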
Retaining the scan's heap page buffer pin is preparation for an
upcoming patch that will add I/O prefetching to index scans. Testing of
that patch (which makes heapam tend to pin more buffers concurrently
than was typical before now) shows that the aforementioned cases get a
small but clearly measurable benefit from this optimization.
Upcoming work to add a slot-based table AM interface for index scans
(which is further preparation for prefetching) will move VM checks for
index-only scans out of the executor and into heapam. That will expand
the role of xs_vmbuffer to include VM lookups for index-only scans (the
field won't just be used for setting pages all-visible during on-access
pruning via the enhancement recently introduced by commit
b46e1e54).
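
As a hedged sketch of what such an in-heapam visibility check might look
like (hypothetical helper, not part of this patch): the existing
visibilitymap_get_status routine already reuses *vmbuf when it still
holds the right VM page, so a retained xs_vmbuffer pin gets reused for
free:

    #include "postgres.h"
    #include "access/visibilitymap.h"
    #include "storage/itemptr.h"

    /*
     * Hypothetical helper: test whether the heap page containing tid is
     * all-visible, keeping the VM page pinned in *vmbuf (which would be
     * the scan's xs_vmbuffer) across calls.
     */
    static bool
    index_only_vm_test(Relation rel, ItemPointer tid, Buffer *vmbuf)
    {
        BlockNumber blkno = ItemPointerGetBlockNumber(tid);

        /* pins (or reuses) the VM page covering blkno via *vmbuf */
        return (visibilitymap_get_status(rel, blkno, vmbuf) &
                VISIBILITYMAP_ALL_VISIBLE) != 0;
    }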
Retaining the xs_vmbuffer pin also preserves the historical behavior of
nodeIndexonlyscan.c, which always kept this pin across a rescan; that
aspect of this commit isn't really new.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAH2-Wz=g=JTSyDB4UtB5su2ZcvsS7VbP+ZMvvaG6ABoCb+s8Lw@mail.gmail.com
void
heapam_index_fetch_reset(IndexFetchTableData *scan)
{
- IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan;
-
- if (BufferIsValid(hscan->xs_cbuf))
- {
- ReleaseBuffer(hscan->xs_cbuf);
- hscan->xs_cbuf = InvalidBuffer;
- hscan->xs_blk = InvalidBlockNumber;
- }
-
- if (BufferIsValid(hscan->xs_vmbuffer))
- {
- ReleaseBuffer(hscan->xs_vmbuffer);
- hscan->xs_vmbuffer = InvalidBuffer;
- }
+ /*
+ * Resets are deliberately a no-op.
+ *
+ * Retaining the pins held in xs_cbuf and xs_vmbuffer saves cycles during
+ * tight nested loop joins and mark/restore-heavy merge joins, since it
+ * avoids repeated unpinning and repinning of the same buffer across
+ * rescans.  The pins are dropped in heapam_index_fetch_end instead.
+ */
}
void
heapam_index_fetch_end(IndexFetchTableData *scan)
{
IndexFetchHeapData *hscan = (IndexFetchHeapData *) scan;
- heapam_index_fetch_reset(scan);
+ /* drop pin if there's a pinned heap page */
+ if (BufferIsValid(hscan->xs_cbuf))
+ ReleaseBuffer(hscan->xs_cbuf);
+
+ /* drop pin if there's a pinned visibility map page */
+ if (BufferIsValid(hscan->xs_vmbuffer))
+ ReleaseBuffer(hscan->xs_vmbuffer);
pfree(hscan);
}
Assert(nkeys == scan->numberOfKeys);
Assert(norderbys == scan->numberOfOrderBys);
- /* Release resources (like buffer pins) from table accesses */
+ /* reset table AM state for rescan */
if (scan->xs_heapfetch)
table_index_fetch_reset(scan->xs_heapfetch);
SCAN_CHECKS;
CHECK_SCAN_PROCEDURE(amrestrpos);
- /* release resources (like buffer pins) from table accesses */
+ /* reset table AM state for restoring the marked position */
if (scan->xs_heapfetch)
table_index_fetch_reset(scan->xs_heapfetch);
{
SCAN_CHECKS;
+ /* reset table AM state for rescan */
if (scan->xs_heapfetch)
table_index_fetch_reset(scan->xs_heapfetch);
/* If we're out of index entries, we're done */
if (!found)
{
- /* release resources (like buffer pins) from table accesses */
+ /* reset table AM state */
if (scan->xs_heapfetch)
table_index_fetch_reset(scan->xs_heapfetch);