Make nbtree treat all index tuples as having a heap TID attribute.
Index searches can distinguish duplicates by heap TID, since heap TID is
always guaranteed to be unique. This general approach has numerous
benefits for performance, and is prerequisite to teaching VACUUM to
perform "retail index tuple deletion".
Naively adding a new attribute to every pivot tuple has unacceptable
overhead (it bloats internal pages), so suffix truncation of pivot
tuples is added. This will usually truncate away the "extra" heap TID
attribute from pivot tuples during a leaf page split, and may also
truncate away additional user attributes. This can increase fan-out,
especially in a multi-column index. Truncation can only occur at the
attribute granularity, which isn't particularly effective, but works
well enough for now. A future patch may add support for truncating
"within" text attributes by generating truncated key values using new
opclass infrastructure.
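
As a rough editorial illustration of the attribute-granularity rule (a
standalone sketch, not code from the patch; the three-integer key and the
keep_natts_sketch() helper are invented for the example), truncation keeps
the smallest prefix of key attributes that still separates the last tuple
on the left half from the first tuple on the right half, and only falls
back to an explicit heap TID when every key attribute is equal:

    #include <stdio.h>

    #define NKEYATTS 3

    /*
     * Return how many leading key attributes the new pivot must keep, and
     * report whether an explicit heap TID tiebreaker is also required.
     */
    static int
    keep_natts_sketch(const int *lastleft, const int *firstright,
                      int *need_heap_tid)
    {
        int     keepnatts = 1;

        *need_heap_tid = 0;
        for (int attnum = 0; attnum < NKEYATTS; attnum++)
        {
            if (lastleft[attnum] != firstright[attnum])
                return keepnatts;   /* this attribute separates the halves */
            keepnatts++;
        }

        /* all key attributes equal: pivot must carry a heap TID as well */
        *need_heap_tid = 1;
        return NKEYATTS;
    }

    int
    main(void)
    {
        int     lastleft[NKEYATTS] = {42, 7, 1};
        int     firstright[NKEYATTS] = {42, 9, 5};
        int     need_tid;

        /* prints "keep 2 attribute(s), heap TID needed: 0" */
        printf("keep %d attribute(s), heap TID needed: %d\n",
               keep_natts_sketch(lastleft, firstright, &need_tid), need_tid);
        return 0;
    }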
Only new indexes (BTREE_VERSION 4 indexes) will have insertions that
treat heap TID as a tiebreaker attribute, or will have pivot tuples
undergo suffix truncation during a leaf page split (on-disk
compatibility with versions 2 and 3 is preserved). Upgrades to version
4 cannot be performed on-the-fly, unlike upgrades from version 2 to
version 3. contrib/amcheck continues to work with version 2 and 3
indexes, while also enforcing stricter invariants when verifying version
4 indexes. These stricter invariants are the same invariants described
by "3.1.12 Sequencing" from the Lehman and Yao paper.
A later patch will enhance the logic used by nbtree to pick a split
point. This patch is likely to negatively impact performance without
smarter choices about the precise point at which to split leaf pages. Making
these two mostly-distinct sets of enhancements into distinct commits
seems like it might clarify their design, even though neither commit is
particularly useful on its own.
The maximum allowed size of new tuples is reduced by an amount equal to
the space required to store an extra MAXALIGN()'d TID in a new high key
during leaf page splits. The user-facing definition of the "1/3 of a
page" restriction is already imprecise, and so does not need to be
revised. However, there should be a compatibility note in the v12
release notes.
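
As a back-of-the-envelope illustration (a sketch that assumes the usual
6-byte ItemPointerData and 8-byte MAXALIGN; not code from the patch), the
reserved amount works out to one MAXALIGN()'d TID per would-be high key:

    #include <stdio.h>

    #define MAXALIGN_OF 8
    #define MAXALIGN(LEN) (((LEN) + (MAXALIGN_OF - 1)) & ~(MAXALIGN_OF - 1))

    int
    main(void)
    {
        /* sizeof(ItemPointerData) is 6 bytes on typical builds */
        size_t  itemptr_size = 6;

        /* prints "8": the amount subtracted from the old leaf tuple limit */
        printf("%zu\n", (size_t) MAXALIGN(itemptr_size));
        return 0;
    }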
Author: Peter Geoghegan
Reviewed-By: Heikki Linnakangas, Alexander Korotkov
Discussion: https://postgr.es/m/CAH2-WzkVb0Kom=R+88fDFb=JSxZMFvbHVC6Mn9LJ2n=X=kS-Uw@mail.gmail.com
--
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,80000) i;
ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);
+-- Delete many entries, and vacuum. This causes page deletions.
DELETE FROM delete_test_table WHERE a > 40000;
VACUUM delete_test_table;
-DELETE FROM delete_test_table WHERE a > 10;
+-- Delete most entries, and vacuum, deleting internal pages and creating "fast
+-- root"
+DELETE FROM delete_test_table WHERE a < 79990;
VACUUM delete_test_table;
SELECT bt_index_parent_check('delete_test_table_pkey', true);
bt_index_parent_check
--
INSERT INTO delete_test_table SELECT i, 1, 2, 3 FROM generate_series(1,80000) i;
ALTER TABLE delete_test_table ADD PRIMARY KEY (a,b,c,d);
+-- Delete many entries, and vacuum. This causes page deletions.
DELETE FROM delete_test_table WHERE a > 40000;
VACUUM delete_test_table;
-DELETE FROM delete_test_table WHERE a > 10;
+-- Delete most entries, and vacuum, deleting internal pages and creating "fast
+-- root"
+DELETE FROM delete_test_table WHERE a < 79990;
VACUUM delete_test_table;
SELECT bt_index_parent_check('delete_test_table_pkey', true);
* block per level, which is bound by the range of BlockNumber:
*/
#define InvalidBtreeLevel ((uint32) InvalidBlockNumber)
+#define BTreeTupleGetNKeyAtts(itup, rel) \
+ Min(IndexRelationGetNumberOfKeyAttributes(rel), BTreeTupleGetNAtts(itup, rel))
/*
* State associated with verifying a B-Tree index
/* B-Tree Index Relation and associated heap relation */
Relation rel;
Relation heaprel;
+ /* rel is heapkeyspace index? */
+ bool heapkeyspace;
/* ShareLock held on heap/index, rather than AccessShareLock? */
bool readonly;
/* Also verifying heap has no unindexed tuples? */
bool heapallindexed);
static inline void btree_index_checkable(Relation rel);
static void bt_check_every_level(Relation rel, Relation heaprel,
- bool readonly, bool heapallindexed);
+ bool heapkeyspace, bool readonly, bool heapallindexed);
static BtreeLevel bt_check_level_from_leftmost(BtreeCheckState *state,
BtreeLevel level);
static void bt_target_page_check(BtreeCheckState *state);
IndexTuple itup);
static inline bool offset_is_negative_infinity(BTPageOpaque opaque,
OffsetNumber offset);
+static inline bool invariant_l_offset(BtreeCheckState *state, BTScanInsert key,
+ OffsetNumber upperbound);
static inline bool invariant_leq_offset(BtreeCheckState *state,
BTScanInsert key,
OffsetNumber upperbound);
-static inline bool invariant_geq_offset(BtreeCheckState *state,
- BTScanInsert key,
- OffsetNumber lowerbound);
-static inline bool invariant_leq_nontarget_offset(BtreeCheckState *state,
- BTScanInsert key,
- Page nontarget,
- OffsetNumber upperbound);
+static inline bool invariant_g_offset(BtreeCheckState *state, BTScanInsert key,
+ OffsetNumber lowerbound);
+static inline bool invariant_l_nontarget_offset(BtreeCheckState *state,
+ BTScanInsert key,
+ Page nontarget,
+ OffsetNumber upperbound);
static Page palloc_btree_page(BtreeCheckState *state, BlockNumber blocknum);
+static inline BTScanInsert bt_mkscankey_pivotsearch(Relation rel,
+ IndexTuple itup);
+static inline ItemPointer BTreeTupleGetHeapTIDCareful(BtreeCheckState *state,
+ IndexTuple itup, bool nonpivot);
/*
* bt_index_check(index regclass, heapallindexed boolean)
Oid heapid;
Relation indrel;
Relation heaprel;
+ bool heapkeyspace;
LOCKMODE lockmode;
if (parentcheck)
btree_index_checkable(indrel);
/* Check index, possibly against table it is an index on */
- bt_check_every_level(indrel, heaprel, parentcheck, heapallindexed);
+ heapkeyspace = _bt_heapkeyspace(indrel);
+ bt_check_every_level(indrel, heaprel, heapkeyspace, parentcheck,
+ heapallindexed);
/*
* Release locks early. That's ok here because nothing in the called
* parent/child check cannot be affected.)
*/
static void
-bt_check_every_level(Relation rel, Relation heaprel, bool readonly,
- bool heapallindexed)
+bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,
+ bool readonly, bool heapallindexed)
{
BtreeCheckState *state;
Page metapage;
state = palloc0(sizeof(BtreeCheckState));
state->rel = rel;
state->heaprel = heaprel;
+ state->heapkeyspace = heapkeyspace;
state->readonly = readonly;
state->heapallindexed = heapallindexed;
* doesn't contain a high key, so nothing to check
*/
if (!P_RIGHTMOST(topaque) &&
- !_bt_check_natts(state->rel, state->target, P_HIKEY))
+ !_bt_check_natts(state->rel, state->heapkeyspace, state->target,
+ P_HIKEY))
{
ItemId itemid;
IndexTuple itup;
IndexTuple itup;
size_t tupsize;
BTScanInsert skey;
+ bool lowersizelimit;
CHECK_FOR_INTERRUPTS();
errhint("This could be a torn page problem.")));
/* Check the number of index tuple attributes */
- if (!_bt_check_natts(state->rel, state->target, offset))
+ if (!_bt_check_natts(state->rel, state->heapkeyspace, state->target,
+ offset))
{
char *itid,
*htid;
continue;
/* Build insertion scankey for current page offset */
- skey = _bt_mkscankey(state->rel, itup);
+ skey = bt_mkscankey_pivotsearch(state->rel, itup);
+
+ /*
+ * Make sure tuple size does not exceed the relevant BTREE_VERSION
+ * specific limit.
+ *
+ * BTREE_VERSION 4 (which introduced heapkeyspace rules) requisitioned
+ * a small amount of space from BTMaxItemSize() in order to ensure
+ * that suffix truncation always has enough space to add an explicit
+ * heap TID back to a tuple -- we pessimistically assume that every
+ * newly inserted tuple will eventually need to have a heap TID
+ * appended during a future leaf page split, when the tuple becomes
+ * the basis of the new high key (pivot tuple) for the leaf page.
+ *
+ * Since the reclaimed space is reserved for that purpose, we must not
+ * enforce the slightly lower limit when the extra space has been used
+ * as intended. In other words, there is only a cross-version
+ * difference in the limit on tuple size within leaf pages.
+ *
+ * Still, we're particular about the details within BTREE_VERSION 4
+ * internal pages. Pivot tuples may only use the extra space for its
+ * designated purpose. Enforce the lower limit for pivot tuples when
+ * an explicit heap TID isn't actually present. (In all other cases
+ * suffix truncation is guaranteed to generate a pivot tuple that's no
+ * larger than the first right tuple provided to it by its caller.)
+ */
+ lowersizelimit = skey->heapkeyspace &&
+ (P_ISLEAF(topaque) || BTreeTupleGetHeapTID(itup) == NULL);
+ if (tupsize > (lowersizelimit ? BTMaxItemSize(state->target) :
+ BTMaxItemSizeNoHeapTid(state->target)))
+ {
+ char *itid,
+ *htid;
+
+ itid = psprintf("(%u,%u)", state->targetblock, offset);
+ htid = psprintf("(%u,%u)",
+ ItemPointerGetBlockNumberNoCheck(&(itup->t_tid)),
+ ItemPointerGetOffsetNumberNoCheck(&(itup->t_tid)));
+
+ ereport(ERROR,
+ (errcode(ERRCODE_INDEX_CORRUPTED),
+ errmsg("index row size %zu exceeds maximum for index \"%s\"",
+ tupsize, RelationGetRelationName(state->rel)),
+ errdetail_internal("Index tid=%s points to %s tid=%s page lsn=%X/%X.",
+ itid,
+ P_ISLEAF(topaque) ? "heap" : "index",
+ htid,
+ (uint32) (state->targetlsn >> 32),
+ (uint32) state->targetlsn)));
+ }
/* Fingerprint leaf page tuples (those that point to the heap) */
if (state->heapallindexed && P_ISLEAF(topaque) && !ItemIdIsDead(itemid))
* grandparents (as well as great-grandparents, and so on). We don't
* go to those lengths because that would be prohibitively expensive,
* and probably not markedly more effective in practice.
+ *
+ * On the leaf level, we check that the key is <= the highkey.
+ * However, on non-leaf levels we check that the key is < the highkey,
+ * because the high key is "just another separator" rather than a copy
+ * of some existing key item; we expect it to be unique among all keys
+ * on the same level. (Suffix truncation will sometimes produce a
+ * leaf highkey that is an untruncated copy of the lastleft item, but
+ * never any other item, which necessitates weakening the leaf level
+ * check to <=.)
+ *
+ * Full explanation for why a highkey is never truly a copy of another
+ * item from the same level on internal levels:
+ *
+ * While the new left page's high key is copied from the first offset
+ * on the right page during an internal page split, that's not the
+ * full story. In effect, internal pages are split in the middle of
+ * the firstright tuple, not between the would-be lastleft and
+ * firstright tuples: the firstright key ends up on the left side as
+ * left's new highkey, and the firstright downlink ends up on the
+ * right side as right's new "negative infinity" item. The negative
+ * infinity tuple is truncated to zero attributes, so we're only left
+ * with the downlink. In other words, the copying is just an
+ * implementation detail of splitting in the middle of a (pivot)
+ * tuple. (See also: "Notes About Data Representation" in the nbtree
+ * README.)
*/
if (!P_RIGHTMOST(topaque) &&
- !invariant_leq_offset(state, skey, P_HIKEY))
+ !(P_ISLEAF(topaque) ? invariant_leq_offset(state, skey, P_HIKEY) :
+ invariant_l_offset(state, skey, P_HIKEY)))
{
char *itid,
*htid;
* * Item order check *
*
* Check that items are stored on page in logical order, by checking
- * current item is less than or equal to next item (if any).
+ * current item is strictly less than next item (if any).
*/
if (OffsetNumberNext(offset) <= max &&
- !invariant_leq_offset(state, skey,
- OffsetNumberNext(offset)))
+ !invariant_l_offset(state, skey, OffsetNumberNext(offset)))
{
char *itid,
*htid,
rightkey = bt_right_page_check_scankey(state);
if (rightkey &&
- !invariant_geq_offset(state, rightkey, max))
+ !invariant_g_offset(state, rightkey, max))
{
/*
* As explained at length in bt_right_page_check_scankey(),
* continued existence of target block as non-ignorable (not half-dead or
* deleted) implies that target page was not merged into from the right by
* deletion; the key space at or after target never moved left. Target's
- * parent either has the same downlink to target as before, or a <=
+ * parent either has the same downlink to target as before, or a <
* downlink due to deletion at the left of target. Target either has the
- * same highkey as before, or a highkey <= before when there is a page
+ * same highkey as before, or a highkey < before when there is a page
* split. (The rightmost concurrently-split-from-target-page page will
* still have the same highkey as target was originally found to have,
* which for our purposes is equivalent to target's highkey itself never
* memory remaining allocated.
*/
firstitup = (IndexTuple) PageGetItem(rightpage, rightitem);
- return _bt_mkscankey(state->rel, firstitup);
+ return bt_mkscankey_pivotsearch(state->rel, firstitup);
}
/*
/*
* Verify child page has the downlink key from target page (its parent) as
- * a lower bound.
+ * a lower bound; downlink must be strictly less than all keys on the
+ * page.
*
* Check all items, rather than checking just the first and trusting that
* the operator class obeys the transitive law.
{
/*
* Skip comparison of target page key against "negative infinity"
- * item, if any. Checking it would indicate that it's not an upper
- * bound, but that's only because of the hard-coding within
- * _bt_compare().
+ * item, if any. Checking it would indicate that it's not a strict
+ * lower bound, but that's only because of the hard-coding for
+ * negative infinity items within _bt_compare().
+ *
+ * If nbtree didn't truncate negative infinity tuples during internal
+ * page splits then we'd expect child's negative infinity key to be
+ * equal to the scankey/downlink from target/parent (it would be a
+ * "low key" in this hypothetical scenario, and so it would still need
+ * to be treated as a special case here).
+ *
+ * Negative infinity items can be thought of as a strict lower bound
+ * that works transitively, with the last non-negative-infinity pivot
+ * followed during a descent from the root as its "true" strict lower
+ * bound. Only a small number of negative infinity items are truly
+ * negative infinity; those that are the first items of leftmost
+ * internal pages. In more general terms, a negative infinity item is
+ * only negative infinity with respect to the subtree that the page is
+ * at the root of.
*/
if (offset_is_negative_infinity(copaque, offset))
continue;
- if (!invariant_leq_nontarget_offset(state, targetkey, child, offset))
+ if (!invariant_l_nontarget_offset(state, targetkey, child, offset))
ereport(ERROR,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("down-link lower bound invariant violated for index \"%s\"",
return !P_ISLEAF(opaque) && offset == P_FIRSTDATAKEY(opaque);
}
+/*
+ * Does the invariant hold that the key is strictly less than a given upper
+ * bound offset item?
+ *
+ * If this function returns false, convention is that caller throws error due
+ * to corruption.
+ */
+static inline bool
+invariant_l_offset(BtreeCheckState *state, BTScanInsert key,
+ OffsetNumber upperbound)
+{
+ int32 cmp;
+
+ Assert(key->pivotsearch);
+
+ /* pg_upgrade'd indexes may legally have equal sibling tuples */
+ if (!key->heapkeyspace)
+ return invariant_leq_offset(state, key, upperbound);
+
+ cmp = _bt_compare(state->rel, key, state->target, upperbound);
+
+ /*
+ * _bt_compare() is capable of determining that a scankey with a
+ * filled-out attribute is greater than pivot tuples where the comparison
+ * is resolved at a truncated attribute (value of attribute in pivot is
+ * minus infinity). However, it is not capable of determining that a
+ * scankey is _less than_ a tuple on the basis of a comparison resolved
+ * at a _scankey_ minus infinity attribute. Complete an extra step to
+ * simulate having minus infinity values for omitted scankey attribute(s).
+ */
+ if (cmp == 0)
+ {
+ BTPageOpaque topaque;
+ ItemId itemid;
+ IndexTuple ritup;
+ int uppnkeyatts;
+ ItemPointer rheaptid;
+ bool nonpivot;
+
+ itemid = PageGetItemId(state->target, upperbound);
+ ritup = (IndexTuple) PageGetItem(state->target, itemid);
+ topaque = (BTPageOpaque) PageGetSpecialPointer(state->target);
+ nonpivot = P_ISLEAF(topaque) && upperbound >= P_FIRSTDATAKEY(topaque);
+
+ /* Get number of keys + heap TID for item to the right */
+ uppnkeyatts = BTreeTupleGetNKeyAtts(ritup, state->rel);
+ rheaptid = BTreeTupleGetHeapTIDCareful(state, ritup, nonpivot);
+
+ /* Heap TID is tiebreaker key attribute */
+ if (key->keysz == uppnkeyatts)
+ return key->scantid == NULL && rheaptid != NULL;
+
+ return key->keysz < uppnkeyatts;
+ }
+
+ return cmp < 0;
+}
+
/*
* Does the invariant hold that the key is less than or equal to a given upper
* bound offset item?
{
int32 cmp;
+ Assert(key->pivotsearch);
+
cmp = _bt_compare(state->rel, key, state->target, upperbound);
return cmp <= 0;
}
/*
- * Does the invariant hold that the key is greater than or equal to a given
- * lower bound offset item?
+ * Does the invariant hold that the key is strictly greater than a given lower
+ * bound offset item?
*
* If this function returns false, convention is that caller throws error due
* to corruption.
*/
static inline bool
-invariant_geq_offset(BtreeCheckState *state, BTScanInsert key,
- OffsetNumber lowerbound)
+invariant_g_offset(BtreeCheckState *state, BTScanInsert key,
+ OffsetNumber lowerbound)
{
int32 cmp;
+ Assert(key->pivotsearch);
+
cmp = _bt_compare(state->rel, key, state->target, lowerbound);
- return cmp >= 0;
+ /* pg_upgrade'd indexes may legally have equal sibling tuples */
+ if (!key->heapkeyspace)
+ return cmp >= 0;
+
+ /*
+ * No need to consider the possibility that scankey has attributes that we
+ * need to force to be interpreted as negative infinity. _bt_compare() is
+ * able to determine that scankey is greater than negative infinity. The
+ * distinction between "==" and "<" isn't interesting here, since
+ * corruption is indicated either way.
+ */
+ return cmp > 0;
}
/*
- * Does the invariant hold that the key is less than or equal to a given upper
+ * Does the invariant hold that the key is strictly less than a given upper
* bound offset item, with the offset relating to a caller-supplied page that
- * is not the current target page? Caller's non-target page is typically a
- * child page of the target, checked as part of checking a property of the
- * target page (i.e. the key comes from the target).
+ * is not the current target page?
+ *
+ * Caller's non-target page is a child page of the target, checked as part of
+ * checking a property of the target page (i.e. the key comes from the
+ * target).
*
* If this function returns false, convention is that caller throws error due
* to corruption.
*/
static inline bool
-invariant_leq_nontarget_offset(BtreeCheckState *state, BTScanInsert key,
- Page nontarget, OffsetNumber upperbound)
+invariant_l_nontarget_offset(BtreeCheckState *state, BTScanInsert key,
+ Page nontarget, OffsetNumber upperbound)
{
int32 cmp;
+ Assert(key->pivotsearch);
+
cmp = _bt_compare(state->rel, key, nontarget, upperbound);
- return cmp <= 0;
+ /* pg_upgrade'd indexes may legally have equal sibling tuples */
+ if (!key->heapkeyspace)
+ return cmp <= 0;
+
+ /* See invariant_l_offset() for an explanation of this extra step */
+ if (cmp == 0)
+ {
+ ItemId itemid;
+ IndexTuple child;
+ int uppnkeyatts;
+ ItemPointer childheaptid;
+ BTPageOpaque copaque;
+ bool nonpivot;
+
+ itemid = PageGetItemId(nontarget, upperbound);
+ child = (IndexTuple) PageGetItem(nontarget, itemid);
+ copaque = (BTPageOpaque) PageGetSpecialPointer(nontarget);
+ nonpivot = P_ISLEAF(copaque) && upperbound >= P_FIRSTDATAKEY(copaque);
+
+ /* Get number of keys + heap TID for child/non-target item */
+ uppnkeyatts = BTreeTupleGetNKeyAtts(child, state->rel);
+ childheaptid = BTreeTupleGetHeapTIDCareful(state, child, nonpivot);
+
+ /* Heap TID is tiebreaker key attribute */
+ if (key->keysz == uppnkeyatts)
+ return key->scantid == NULL && childheaptid != NULL;
+
+ return key->keysz < uppnkeyatts;
+ }
+
+ return cmp < 0;
}
/*
return page;
}
+
+/*
+ * _bt_mkscankey() wrapper that automatically prevents insertion scankey from
+ * being considered greater than the pivot tuple that its values originated
+ * from (or some other identical pivot tuple) in the common case where there
+ * are truncated/minus infinity attributes. Without this extra step, there
+ * are forms of corruption that amcheck could theoretically fail to report.
+ *
+ * For example, invariant_g_offset() might miss a cross-page invariant failure
+ * on an internal level if the scankey built from the first item on the
+ * target's right sibling page happened to be equal to (not greater than) the
+ * last item on target page. The !pivotsearch tiebreaker in _bt_compare()
+ * might otherwise cause amcheck to assume (rather than actually verify) that
+ * the scankey is greater.
+ */
+static inline BTScanInsert
+bt_mkscankey_pivotsearch(Relation rel, IndexTuple itup)
+{
+ BTScanInsert skey;
+
+ skey = _bt_mkscankey(rel, itup);
+ skey->pivotsearch = true;
+
+ return skey;
+}
+
+/*
+ * BTreeTupleGetHeapTID() wrapper that lets caller enforce that a heap TID must
+ * be present in cases where that is mandatory.
+ *
+ * This doesn't add much as of BTREE_VERSION 4, since the INDEX_ALT_TID_MASK
+ * bit is effectively a proxy for whether or not the tuple is a pivot tuple.
+ * It may become more useful in the future, when non-pivot tuples support their
+ * own alternative INDEX_ALT_TID_MASK representation.
+ */
+static inline ItemPointer
+BTreeTupleGetHeapTIDCareful(BtreeCheckState *state, IndexTuple itup,
+ bool nonpivot)
+{
+ ItemPointer result = BTreeTupleGetHeapTID(itup);
+ BlockNumber targetblock = state->targetblock;
+
+ if (result == NULL && nonpivot)
+ ereport(ERROR,
+ (errcode(ERRCODE_INDEX_CORRUPTED),
+ errmsg("block %u or its right sibling block or child block in index \"%s\" contains non-pivot tuple that lacks a heap TID",
+ targetblock, RelationGetRelationName(state->rel))));
+
+ return result;
+}
* Get values of extended metadata if available, use default values
* otherwise.
*/
- if (metad->btm_version == BTREE_VERSION)
+ if (metad->btm_version >= BTREE_NOVAC_VERSION)
{
values[j++] = psprintf("%u", metad->btm_oldest_btpo_xact);
values[j++] = psprintf("%f", metad->btm_last_cleanup_num_heap_tuples);
SELECT * FROM bt_metap('test1_a_idx');
-[ RECORD 1 ]-----------+-------
magic | 340322
-version | 3
+version | 4
root | 1
level | 0
fastroot | 1
from pgstatindex('test_pkey');
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
- 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
+ 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select version, tree_level,
from pgstatindex('test_pkey'::text);
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
- 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
+ 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select version, tree_level,
from pgstatindex('test_pkey'::name);
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
- 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
+ 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select version, tree_level,
from pgstatindex('test_pkey'::regclass);
version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
- 3 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
+ 4 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | NaN | NaN
(1 row)
select pg_relpages('test');
select pgstatindex('test_partition_idx');
pgstatindex
------------------------------
- (3,0,8192,0,0,0,0,0,NaN,NaN)
+ (4,0,8192,0,0,0,0,0,NaN,NaN)
(1 row)
select pgstathashindex('test_partition_hash_idx');
<para>
By default, B-tree indexes store their entries in ascending order
- with nulls last. This means that a forward scan of an index on
- column <literal>x</literal> produces output satisfying <literal>ORDER BY x</literal>
+ with nulls last (table TID is treated as a tiebreaker column among
+ otherwise equal entries). This means that a forward scan of an
+ index on column <literal>x</literal> produces output satisfying <literal>ORDER BY x</literal>
(or more verbosely, <literal>ORDER BY x ASC NULLS LAST</literal>). The
index can also be scanned backward, producing output satisfying
<literal>ORDER BY x DESC</literal>
the extra columns are trailing columns; making them be leading columns is
unwise for the reasons explained in <xref linkend="indexes-multicolumn"/>.
However, this method doesn't support the case where you want the index to
- enforce uniqueness on the key column(s). Also, explicitly marking
- non-searchable columns as <literal>INCLUDE</literal> columns makes the
- index slightly smaller, because such columns need not be stored in upper
- tree levels.
+ enforce uniqueness on the key column(s).
+ </para>
+
+ <para>
+ <firstterm>Suffix truncation</firstterm> always removes non-key
+ columns from upper B-Tree levels. As payload columns, they are
+ never used to guide index scans. The truncation process also
+ removes one or more trailing key column(s) when the remaining
+ prefix of key column(s) happens to be sufficient to describe tuples
+ on the lowest B-Tree level. In practice, covering indexes without
+ an <literal>INCLUDE</literal> clause often avoid storing columns
+ that are effectively payload in the upper levels. However,
+ explicitly defining payload columns as non-key columns
+ <emphasis>reliably</emphasis> keeps the tuples in upper levels
+ small.
</para>
<para>
bool isnull[INDEX_MAX_KEYS];
IndexTuple truncated;
- Assert(leavenatts < sourceDescriptor->natts);
+ Assert(leavenatts <= sourceDescriptor->natts);
+
+ /* Easy case: no truncation actually required */
+ if (leavenatts == sourceDescriptor->natts)
+ return CopyIndexTuple(source);
/* Create temporary descriptor to scribble on */
truncdesc = palloc(TupleDescSize(sourceDescriptor));
for. This might need to be repeated, if the page has been split more than
once.
+Lehman and Yao talk about alternating "separator" keys and downlinks in
+internal pages rather than tuples or records. We use the term "pivot"
+tuple to refer to tuples that don't point to heap tuples and are used
+only for tree navigation. All tuples on non-leaf pages and high keys on
+leaf pages are pivot tuples. Since pivot tuples are only used to represent
+which part of the key space belongs on each page, they can have attribute
+values copied from non-pivot tuples that were deleted and killed by VACUUM
+some time ago. A pivot tuple may contain a "separator" key and downlink,
+just a separator key (i.e. the downlink value is implicitly undefined), or
+just a downlink (i.e. all attributes are truncated away).
+
+The requirement that all btree keys be unique is satisfied by treating heap
+TID as a tiebreaker attribute. Logical duplicates are sorted in heap TID
+order. This is necessary because Lehman and Yao also require that the key
+range for a subtree S is described by Ki < v <= Ki+1 where Ki and Ki+1 are
+the adjacent keys in the parent page (Ki must be _strictly_ less than v,
+which is assured by having reliably unique keys). Keys are always unique
+on their level, with the exception of a leaf page's high key, which can be
+fully equal to the last item on the page.
+
+The Postgres implementation of suffix truncation must make sure that the
+Lehman and Yao invariants hold, and represents that absent/truncated
+attributes in pivot tuples have the sentinel value "minus infinity". The
+later section on suffix truncation will be helpful if it's unclear how the
+Lehman & Yao invariants work with a real world example.
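+
+As a toy standalone sketch of the tiebreaker rule (invented for
+illustration; it is not nbtree's _bt_compare(), and the ToyKey struct and
+encoded TIDs are made up), duplicates order by heap TID and a truncated
+pivot attribute compares as minus infinity:
+
+  #include <stdio.h>
+
+  typedef struct ToyKey
+  {
+      int   key;        /* single key attribute */
+      int   has_key;    /* 0 means truncated: minus infinity */
+      long  tid;        /* heap TID, encoded as a single number */
+      int   has_tid;    /* 0 means truncated: minus infinity */
+  } ToyKey;
+
+  static int
+  toy_compare(const ToyKey *a, const ToyKey *b)
+  {
+      /* a truncated (minus infinity) attribute sorts before any real value */
+      if (a->has_key != b->has_key)
+          return a->has_key ? 1 : -1;
+      if (a->has_key && a->key != b->key)
+          return (a->key < b->key) ? -1 : 1;
+      if (a->has_tid != b->has_tid)
+          return a->has_tid ? 1 : -1;
+      if (a->has_tid && a->tid != b->tid)
+          return (a->tid < b->tid) ? -1 : 1;
+      return 0;
+  }
+
+  int
+  main(void)
+  {
+      ToyKey  dup1 = {42, 1, 1001, 1};    /* logical duplicates ... */
+      ToyKey  dup2 = {42, 1, 1007, 1};    /* ... ordered by heap TID */
+      ToyKey  pivot = {42, 1, 0, 0};      /* truncated pivot: TID = -inf */
+
+      printf("dup1 vs dup2: %d\n", toy_compare(&dup1, &dup2));    /* -1 */
+      printf("pivot vs dup1: %d\n", toy_compare(&pivot, &dup1));  /* -1 */
+      return 0;
+  }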
+
Differences to the Lehman & Yao algorithm
-----------------------------------------
We have made the following changes in order to incorporate the L&Y algorithm
into Postgres:
-The requirement that all btree keys be unique is too onerous,
-but the algorithm won't work correctly without it. Fortunately, it is
-only necessary that keys be unique on a single tree level, because L&Y
-only use the assumption of key uniqueness when re-finding a key in a
-parent page (to determine where to insert the key for a split page).
-Therefore, we can use the link field to disambiguate multiple
-occurrences of the same user key: only one entry in the parent level
-will be pointing at the page we had split. (Indeed we need not look at
-the real "key" at all, just at the link field.) We can distinguish
-items at the leaf level in the same way, by examining their links to
-heap tuples; we'd never have two items for the same heap tuple.
-
-Lehman and Yao assume that the key range for a subtree S is described
-by Ki < v <= Ki+1 where Ki and Ki+1 are the adjacent keys in the parent
-page. This does not work for nonunique keys (for example, if we have
-enough equal keys to spread across several leaf pages, there *must* be
-some equal bounding keys in the first level up). Therefore we assume
-Ki <= v <= Ki+1 instead. A search that finds exact equality to a
-bounding key in an upper tree level must descend to the left of that
-key to ensure it finds any equal keys in the preceding page. An
-insertion that sees the high key of its target page is equal to the key
-to be inserted has a choice whether or not to move right, since the new
-key could go on either page. (Currently, we try to find a page where
-there is room for the new key without a split.)
-
Lehman and Yao don't require read locks, but assume that in-memory
copies of tree pages are unshared. Postgres shares in-memory buffers
among backends. As a result, we do page-level read locking on btree
the recorded position (but it can't have moved left out of the recorded
page). Since we hold a lock on the lower page (per L&Y) until we have
re-found the parent item that links to it, we can be assured that the
-parent item does still exist and can't have been deleted. Also, because
-we are matching downlink page numbers and not data keys, we don't have any
-problem with possibly misidentifying the parent item.
+parent item does still exist and can't have been deleted.
Page Deletion
-------------
whether to return the entry and whether the scan can stop (see
_bt_checkkeys()).
-We use term "pivot" index tuples to distinguish tuples which don't point
-to heap tuples, but rather used for tree navigation. Pivot tuples includes
-all tuples on non-leaf pages and high keys on leaf pages. Note that pivot
-index tuples are only used to represent which part of the key space belongs
-on each page, and can have attribute values copied from non-pivot tuples
-that were deleted and killed by VACUUM some time ago. In principle, we could
-truncate away attributes that are not needed for a page high key during a leaf
-page split, provided that the remaining attributes distinguish the last index
-tuple on the post-split left page as belonging on the left page, and the first
-index tuple on the post-split right page as belonging on the right page. This
-optimization is sometimes called suffix truncation, and may appear in a future
-release. Since the high key is subsequently reused as the downlink in the
-parent page for the new right page, suffix truncation can increase index
-fan-out considerably by keeping pivot tuples short. INCLUDE indexes similarly
-truncate away non-key attributes at the time of a leaf page split,
-increasing fan-out.
+Notes about suffix truncation
+-----------------------------
+
+We truncate away suffix key attributes that are not needed for a page high
+key during a leaf page split. The remaining attributes must distinguish
+the last index tuple on the post-split left page as belonging on the left
+page, and the first index tuple on the post-split right page as belonging
+on the right page. Tuples logically retain truncated key attributes,
+though they implicitly have "negative infinity" as their value, and have no
+storage overhead. Since the high key is subsequently reused as the
+downlink in the parent page for the new right page, suffix truncation makes
+pivot tuples short. INCLUDE indexes are guaranteed to have non-key
+attributes truncated at the time of a leaf page split, but may also have
+some key attributes truncated away, based on the usual criteria for key
+attributes. They are not a special case, since non-key attributes are
+merely payload to B-Tree searches.
+
+The goal of suffix truncation of key attributes is to improve index
+fan-out. The technique was first described by Bayer and Unterauer (R.Bayer
+and K.Unterauer, Prefix B-Trees, ACM Transactions on Database Systems, Vol
+2, No. 1, March 1977, pp 11-26). The Postgres implementation is loosely
+based on their paper. Note that Postgres only implements what the paper
+refers to as simple prefix B-Trees. Note also that the paper assumes that
+the tree has keys that consist of single strings that maintain the "prefix
+property", much like strings that are stored in a suffix tree (comparisons
+of earlier bytes must always be more significant than comparisons of later
+bytes, and, in general, the strings must compare in a way that doesn't
+break transitive consistency as they're split into pieces). Suffix
+truncation in Postgres currently only works at the whole-attribute
+granularity, but it would be straightforward to invent opclass
+infrastructure that manufactures a smaller attribute value in the case of
+variable-length types, such as text. An opclass support function could
+manufacture the shortest possible key value that still correctly separates
+each half of a leaf page split.
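+
+A standalone sketch of that hypothetical contract (the function below is
+invented for illustration and assumes plain byte-wise comparison rather
+than collation-aware comparison): return the shortest prefix of the first
+right-half string that still sorts strictly after the last left-half
+string.
+
+  #include <stdio.h>
+  #include <string.h>
+
+  static size_t
+  shortest_separator_len(const char *lastleft, const char *firstright)
+  {
+      size_t  len = strlen(firstright);
+
+      for (size_t plen = 1; plen <= len; plen++)
+      {
+          /* does this prefix of firstright already exceed all of lastleft? */
+          if (strncmp(lastleft, firstright, plen) < 0)
+              return plen;
+      }
+      /* strings are equal; caller must fall back on the heap TID instead */
+      return len;
+  }
+
+  int
+  main(void)
+  {
+      /* "Je" does not separate "Jefferson" from "Jensen", but "Jen" does */
+      printf("separator length: %zu\n",
+             shortest_separator_len("Jefferson", "Jensen"));
+      return 0;
+  }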
Notes About Data Representation
-------------------------------
The Postgres disk block data format (an array of items) doesn't fit
Lehman and Yao's alternating-keys-and-pointers notion of a disk page,
-so we have to play some games.
+so we have to play some games. (The alternating-keys-and-pointers
+notion is important for internal page splits, which conceptually split
+at the middle of an existing pivot tuple -- the tuple's "separator" key
+goes on the left side of the split as the left side's new high key,
+while the tuple's pointer/downlink goes on the right side as the
+first/minus infinity downlink.)
On a page that is not rightmost in its tree level, the "high key" is
kept in the page's first item, and real data items start at item 2.
The link portion of the "high key" item goes unused. A page that is
-rightmost has no "high key", so data items start with the first item.
-Putting the high key at the left, rather than the right, may seem odd,
-but it avoids moving the high key as we add data items.
+rightmost has no "high key" (it's implicitly positive infinity), so
+data items start with the first item. Putting the high key at the
+left, rather than the right, may seem odd, but it avoids moving the
+high key as we add data items.
On a leaf page, the data items are simply links to (TIDs of) tuples
in the relation being indexed, with the associated key values.
On a non-leaf page, the data items are down-links to child pages with
-bounding keys. The key in each data item is the *lower* bound for
+bounding keys. The key in each data item is a strict lower bound for
keys on that child page, so logically the key is to the left of that
downlink. The high key (if present) is the upper bound for the last
downlink. The first data item on each such page has no lower bound
routines must treat it accordingly. The actual key stored in the
item is irrelevant, and need not be stored at all. This arrangement
corresponds to the fact that an L&Y non-leaf page has one more pointer
-than key.
+than key. Suffix truncation's negative infinity attributes behave in
+the same way.
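+
+As a toy sketch of that conceptual split (invented for illustration, not
+nbtree code; separator keys and downlinks are plain integers here), the
+firstright pivot's separator key becomes the left page's new high key,
+while its downlink becomes the right page's minus infinity item:
+
+  #include <stdio.h>
+
+  typedef struct ToyPivot
+  {
+      int   sepkey;     /* separator key; -1 stands in for minus infinity */
+      int   downlink;   /* child block number */
+  } ToyPivot;
+
+  int
+  main(void)
+  {
+      /* an internal page: first item has only a downlink (minus infinity) */
+      ToyPivot  page[] = {{-1, 10}, {20, 11}, {40, 12}, {60, 13}, {80, 14}};
+      int       firstright = 2;   /* split "inside" page[2] */
+
+      /* key half of page[firstright] becomes the left page's high key */
+      ToyPivot  lefthighkey = {page[firstright].sepkey, -1};
+      /* downlink half becomes the right page's new minus infinity item */
+      ToyPivot  rightneginf = {-1, page[firstright].downlink};
+
+      printf("left high key separator: %d\n", lefthighkey.sepkey);         /* 40 */
+      printf("right minus infinity downlink: %d\n", rightneginf.downlink); /* 12 */
+      return 0;
+  }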
BTStack stack,
Relation heapRel);
static void _bt_stepright(Relation rel, BTInsertState insertstate, BTStack stack);
-static void _bt_insertonpg(Relation rel, Buffer buf, Buffer cbuf,
+static void _bt_insertonpg(Relation rel, BTScanInsert itup_key,
+ Buffer buf,
+ Buffer cbuf,
BTStack stack,
IndexTuple itup,
OffsetNumber newitemoff,
bool split_only_page);
-static Buffer _bt_split(Relation rel, Buffer buf, Buffer cbuf,
- OffsetNumber firstright, OffsetNumber newitemoff, Size newitemsz,
- IndexTuple newitem, bool newitemonleft);
+static Buffer _bt_split(Relation rel, BTScanInsert itup_key, Buffer buf,
+ Buffer cbuf, OffsetNumber firstright, OffsetNumber newitemoff,
+ Size newitemsz, IndexTuple newitem, bool newitemonleft);
static void _bt_insert_parent(Relation rel, Buffer buf, Buffer rbuf,
BTStack stack, bool is_root, bool is_only);
static OffsetNumber _bt_findsplitloc(Relation rel, Page page,
/* we need an insertion scan key to do our search, so build one */
itup_key = _bt_mkscankey(rel, itup);
+ /* No scantid until uniqueness established in checkingunique case */
+ if (checkingunique && itup_key->heapkeyspace)
+ itup_key->scantid = NULL;
/*
* Fill in the BTInsertState working area, to track the current page and
* NOTE: obviously, _bt_check_unique can only detect keys that are already
* in the index; so it cannot defend against concurrent insertions of the
* same key. We protect against that by means of holding a write lock on
- * the target page. Any other would-be inserter of the same key must
- * acquire a write lock on the same target page, so only one would-be
- * inserter can be making the check at one time. Furthermore, once we are
- * past the check we hold write locks continuously until we have performed
- * our insertion, so no later inserter can fail to see our insertion.
- * (This requires some care in _bt_findinsertloc.)
+ * the first page the value could be on, regardless of the value of its
+ * implicit heap TID tiebreaker attribute. Any other would-be inserter of
+ * the same key must acquire a write lock on the same page, so only one
+ * would-be inserter can be making the check at one time. Furthermore,
+ * once we are past the check we hold write locks continuously until we
+ * have performed our insertion, so no later inserter can fail to see our
+ * insertion. (This requires some care in _bt_findinsertloc.)
*
* If we must wait for another xact, we release the lock while waiting,
* and then must start over completely.
_bt_freestack(stack);
goto top;
}
+
+ /* Uniqueness is established -- restore heap tid as scantid */
+ if (itup_key->heapkeyspace)
+ itup_key->scantid = &itup->t_tid;
}
if (checkUnique != UNIQUE_CHECK_EXISTING)
/*
* The only conflict predicate locking cares about for indexes is when
- * an index tuple insert conflicts with an existing lock. Since the
- * actual location of the insert is hard to predict because of the
- * random search used to prevent O(N^2) performance when there are
- * many duplicate entries, we can just use the "first valid" page.
- * This reasoning also applies to INCLUDE indexes, whose extra
- * attributes are not considered part of the key space.
+ * an index tuple insert conflicts with an existing lock. We don't
+ * know the actual page we're going to insert to yet because scantid
+ * was not filled in initially, but it's okay to use the "first valid"
+ * page instead. This reasoning also applies to INCLUDE indexes,
+ * whose extra attributes are not considered part of the key space.
*/
CheckForSerializableConflictIn(rel, NULL, insertstate.buf);
*/
newitemoff = _bt_findinsertloc(rel, &insertstate, checkingunique,
stack, heapRel);
- _bt_insertonpg(rel, insertstate.buf, InvalidBuffer, stack, itup,
- newitemoff, false);
+ _bt_insertonpg(rel, itup_key, insertstate.buf, InvalidBuffer, stack,
+ itup, newitemoff, false);
}
else
{
* Scan over all equal tuples, looking for live conflicts.
*/
Assert(!insertstate->bounds_valid || insertstate->low == offset);
+ Assert(itup_key->scantid == NULL);
for (;;)
{
ItemId curitemid;
/*
* _bt_findinsertloc() -- Finds an insert location for a tuple
*
- * On entry, insertstate buffer contains the first legal page the new
- * tuple could be inserted to. It is exclusive-locked and pinned by the
- * caller.
+ * On entry, insertstate buffer contains the page the new tuple belongs
+ * on. It is exclusive-locked and pinned by the caller.
+ *
+ * If 'checkingunique' is true, the buffer on entry is the first page
+ * that contains duplicates of the new key. If there are duplicates on
+ * multiple pages, the correct insertion position might be some page to
+ * the right, rather than the first page. In that case, this function
+ * moves right to the correct target page.
*
- * If the new key is equal to one or more existing keys, we can
- * legitimately place it anywhere in the series of equal keys --- in fact,
- * if the new key is equal to the page's "high key" we can place it on
- * the next page. If it is equal to the high key, and there's not room
- * to insert the new tuple on the current page without splitting, then
- * we can move right hoping to find more free space and avoid a split.
- * Furthermore, if there's not enough room on a page, we try to make
- * room by removing any LP_DEAD tuples.
+ * (In a !heapkeyspace index, there can be multiple pages with the same
+ * high key that the new tuple could legitimately be placed on. In
+ * that case, the caller passes the first page containing duplicates,
+ * just like when checkingunique=true. If that page doesn't have enough
+ * room for the new tuple, this function moves right, trying to find a
+ * legal page that does.)
*
* On exit, insertstate buffer contains the chosen insertion page, and
* the offset within that page is returned. If _bt_findinsertloc needed
* If insertstate contains cached binary search bounds, we will take
* advantage of them. This avoids repeating comparisons that we made in
* _bt_check_unique() already.
+ *
+ * If there is not enough room on the page for the new tuple, we try to
+ * make room by removing any LP_DEAD tuples.
*/
static OffsetNumber
_bt_findinsertloc(Relation rel,
lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
- /*
- * Check whether the item can fit on a btree page at all. (Eventually, we
- * ought to try to apply TOAST methods if not.) We actually need to be
- * able to fit three items on every page, so restrict any one item to 1/3
- * the per-page available space. Note that at this point, itemsz doesn't
- * include the ItemId.
- *
- * NOTE: if you change this, see also the similar code in _bt_buildadd().
- */
- if (insertstate->itemsz > BTMaxItemSize(page))
- ereport(ERROR,
- (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
- errmsg("index row size %zu exceeds maximum %zu for index \"%s\"",
- insertstate->itemsz, BTMaxItemSize(page),
- RelationGetRelationName(rel)),
- errhint("Values larger than 1/3 of a buffer page cannot be indexed.\n"
- "Consider a function index of an MD5 hash of the value, "
- "or use full text indexing."),
- errtableconstraint(heapRel,
- RelationGetRelationName(rel))));
+ /* Check 1/3 of a page restriction */
+ if (unlikely(insertstate->itemsz > BTMaxItemSize(page)))
+ _bt_check_third_page(rel, heapRel, itup_key->heapkeyspace, page,
+ insertstate->itup);
- /*----------
- * If we will need to split the page to put the item on this page,
- * check whether we can put the tuple somewhere to the right,
- * instead. Keep scanning right until we
- * (a) find a page with enough free space,
- * (b) reach the last page where the tuple can legally go, or
- * (c) get tired of searching.
- * (c) is not flippant; it is important because if there are many
- * pages' worth of equal keys, it's better to split one of the early
- * pages than to scan all the way to the end of the run of equal keys
- * on every insert. We implement "get tired" as a random choice,
- * since stopping after scanning a fixed number of pages wouldn't work
- * well (we'd never reach the right-hand side of previously split
- * pages). Currently the probability of moving right is set at 0.99,
- * which may seem too high to change the behavior much, but it does an
- * excellent job of preventing O(N^2) behavior with many equal keys.
- *----------
- */
Assert(P_ISLEAF(lpageop) && !P_INCOMPLETE_SPLIT(lpageop));
Assert(!insertstate->bounds_valid || checkingunique);
+ Assert(!itup_key->heapkeyspace || itup_key->scantid != NULL);
+ Assert(itup_key->heapkeyspace || itup_key->scantid == NULL);
- while (PageGetFreeSpace(page) < insertstate->itemsz)
+ if (itup_key->heapkeyspace)
{
/*
- * before considering moving right, see if we can obtain enough space
- * by erasing LP_DEAD items
+ * If we're inserting into a unique index, we may have to walk right
+ * through leaf pages to find the one leaf page that we must insert
+ * onto.
+ *
+ * This is needed for checkingunique callers because a scantid was not
+ * used when we called _bt_search(). scantid can only be set after
+ * _bt_check_unique() has checked for duplicates. The buffer
+ * initially stored in insertstate->buf has the page where the first
+ * duplicate key might be found, which isn't always the page that new
+ * tuple belongs on. The heap TID attribute for new tuple (scantid)
+ * could force us to insert on a sibling page, though that should be
+ * very rare in practice.
*/
- if (P_HAS_GARBAGE(lpageop))
+ if (checkingunique)
{
- _bt_vacuum_one_page(rel, insertstate->buf, heapRel);
- insertstate->bounds_valid = false;
+ for (;;)
+ {
+ /*
+ * Does the new tuple belong on this page?
+ *
+ * The earlier _bt_check_unique() call may well have
+ * established a strict upper bound on the offset for the new
+ * item. If it's not the last item of the page (i.e. if there
+ * is at least one tuple on the page that goes after the tuple
+ * we're inserting) then we know that the tuple belongs on
+ * this page. We can skip the high key check.
+ */
+ if (insertstate->bounds_valid &&
+ insertstate->low <= insertstate->stricthigh &&
+ insertstate->stricthigh <= PageGetMaxOffsetNumber(page))
+ break;
+
+ /* Test '<=', not '!=', since scantid is set now */
+ if (P_RIGHTMOST(lpageop) ||
+ _bt_compare(rel, itup_key, page, P_HIKEY) <= 0)
+ break;
- if (PageGetFreeSpace(page) >= insertstate->itemsz)
- break; /* OK, now we have enough space */
+ _bt_stepright(rel, insertstate, stack);
+ /* Update local state after stepping right */
+ page = BufferGetPage(insertstate->buf);
+ lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
+ }
}
/*
- * Nope, so check conditions (b) and (c) enumerated above
+ * If the target page is full, see if we can obtain enough space by
+ * erasing LP_DEAD items
+ */
+ if (PageGetFreeSpace(page) < insertstate->itemsz &&
+ P_HAS_GARBAGE(lpageop))
+ {
+ _bt_vacuum_one_page(rel, insertstate->buf, heapRel);
+ insertstate->bounds_valid = false;
+ }
+ }
+ else
+ {
+ /*----------
+ * This is a !heapkeyspace (version 2 or 3) index. The current page
+ * is the first page that we could insert the new tuple to, but there
+ * may be other pages to the right that we could opt to use instead.
*
- * The earlier _bt_check_unique() call may well have established a
- * strict upper bound on the offset for the new item. If it's not the
- * last item of the page (i.e. if there is at least one tuple on the
- * page that's greater than the tuple we're inserting to) then we know
- * that the tuple belongs on this page. We can skip the high key
- * check.
+ * If the new key is equal to one or more existing keys, we can
+ * legitimately place it anywhere in the series of equal keys. In
+ * fact, if the new key is equal to the page's "high key" we can place
+ * it on the next page. If it is equal to the high key, and there's
+ * not room to insert the new tuple on the current page without
+ * splitting, then we move right hoping to find more free space and
+ * avoid a split.
+ *
+ * Keep scanning right until we
+ * (a) find a page with enough free space,
+ * (b) reach the last page where the tuple can legally go, or
+ * (c) get tired of searching.
+ * (c) is not flippant; it is important because if there are many
+ * pages' worth of equal keys, it's better to split one of the early
+ * pages than to scan all the way to the end of the run of equal keys
+ * on every insert. We implement "get tired" as a random choice,
+ * since stopping after scanning a fixed number of pages wouldn't work
+ * well (we'd never reach the right-hand side of previously split
+ * pages). The probability of moving right is set at 0.99, which may
+ * seem too high to change the behavior much, but it does an excellent
+ * job of preventing O(N^2) behavior with many equal keys.
+ *----------
*/
- if (insertstate->bounds_valid &&
- insertstate->low <= insertstate->stricthigh &&
- insertstate->stricthigh <= PageGetMaxOffsetNumber(page))
- break;
+ while (PageGetFreeSpace(page) < insertstate->itemsz)
+ {
+ /*
+ * Before considering moving right, see if we can obtain enough
+ * space by erasing LP_DEAD items
+ */
+ if (P_HAS_GARBAGE(lpageop))
+ {
+ _bt_vacuum_one_page(rel, insertstate->buf, heapRel);
+ insertstate->bounds_valid = false;
- if (P_RIGHTMOST(lpageop) ||
- _bt_compare(rel, itup_key, page, P_HIKEY) != 0 ||
- random() <= (MAX_RANDOM_VALUE / 100))
- break;
+ if (PageGetFreeSpace(page) >= insertstate->itemsz)
+ break; /* OK, now we have enough space */
+ }
- _bt_stepright(rel, insertstate, stack);
- /* Update local state after stepping right */
- page = BufferGetPage(insertstate->buf);
- lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
+ /*
+ * Nope, so check conditions (b) and (c) enumerated above
+ *
+ * The earlier _bt_check_unique() call may well have established a
+ * strict upper bound on the offset for the new item. If it's not
+ * the last item of the page (i.e. if there is at least one tuple
+ * on the page that's greater than the tuple we're inserting)
+ * then we know that the tuple belongs on this page. We can skip
+ * the high key check.
+ */
+ if (insertstate->bounds_valid &&
+ insertstate->low <= insertstate->stricthigh &&
+ insertstate->stricthigh <= PageGetMaxOffsetNumber(page))
+ break;
+
+ if (P_RIGHTMOST(lpageop) ||
+ _bt_compare(rel, itup_key, page, P_HIKEY) != 0 ||
+ random() <= (MAX_RANDOM_VALUE / 100))
+ break;
+
+ _bt_stepright(rel, insertstate, stack);
+ /* Update local state after stepping right */
+ page = BufferGetPage(insertstate->buf);
+ lpageop = (BTPageOpaque) PageGetSpecialPointer(page);
+ }
}
/*
* else someone else's _bt_check_unique scan could fail to see our insertion.
* Write locks on intermediate dead pages won't do because we don't know when
* they will get de-linked from the tree.
+ *
+ * This is more aggressive than it needs to be for non-unique !heapkeyspace
+ * indexes.
*/
static void
_bt_stepright(Relation rel, BTInsertState insertstate, BTStack stack)
*
* This recursive procedure does the following things:
*
- * + if necessary, splits the target page (making sure that the
- * split is equitable as far as post-insert free space goes).
+ * + if necessary, splits the target page, using 'itup_key' for
+ * suffix truncation on leaf pages (caller passes NULL for
+ * non-leaf pages).
* + inserts the tuple.
* + if the page was split, pops the parent stack, and finds the
* right place to insert the new child pointer (by walking
*/
static void
_bt_insertonpg(Relation rel,
+ BTScanInsert itup_key,
Buffer buf,
Buffer cbuf,
BTStack stack,
BTreeTupleGetNAtts(itup, rel) ==
IndexRelationGetNumberOfAttributes(rel));
Assert(P_ISLEAF(lpageop) ||
- BTreeTupleGetNAtts(itup, rel) ==
+ BTreeTupleGetNAtts(itup, rel) <=
IndexRelationGetNumberOfKeyAttributes(rel));
/* The caller should've finished any incomplete splits already. */
&newitemonleft);
/* split the buffer into left and right halves */
- rbuf = _bt_split(rel, buf, cbuf, firstright,
- newitemoff, itemsz, itup, newitemonleft);
+ rbuf = _bt_split(rel, itup_key, buf, cbuf, firstright, newitemoff,
+ itemsz, itup, newitemonleft);
PredicateLockPageSplit(rel,
BufferGetBlockNumber(buf),
BufferGetBlockNumber(rbuf));
if (BufferIsValid(metabuf))
{
/* upgrade meta-page if needed */
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
metad->btm_fastroot = itup_blkno;
metad->btm_fastlevel = lpageop->btpo.level;
if (BufferIsValid(metabuf))
{
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ xlmeta.version = metad->btm_version;
xlmeta.root = metad->btm_root;
xlmeta.level = metad->btm_level;
xlmeta.fastroot = metad->btm_fastroot;
* new right page. newitemoff etc. tell us about the new item that
* must be inserted along with the data from the old page.
*
- * When splitting a non-leaf page, 'cbuf' is the left-sibling of the
- * page we're inserting the downlink for. This function will clear the
- * INCOMPLETE_SPLIT flag on it, and release the buffer.
+ * itup_key is used for suffix truncation on leaf pages (internal
+ * page callers pass NULL). When splitting a non-leaf page, 'cbuf'
+ * is the left-sibling of the page we're inserting the downlink for.
+ * This function will clear the INCOMPLETE_SPLIT flag on it, and
+ * release the buffer.
*
* Returns the new right sibling of buf, pinned and write-locked.
* The pin and lock on buf are maintained.
*/
static Buffer
-_bt_split(Relation rel, Buffer buf, Buffer cbuf, OffsetNumber firstright,
- OffsetNumber newitemoff, Size newitemsz, IndexTuple newitem,
- bool newitemonleft)
+_bt_split(Relation rel, BTScanInsert itup_key, Buffer buf, Buffer cbuf,
+ OffsetNumber firstright, OffsetNumber newitemoff, Size newitemsz,
+ IndexTuple newitem, bool newitemonleft)
{
Buffer rbuf;
Page origpage;
itemid = PageGetItemId(origpage, P_HIKEY);
itemsz = ItemIdGetLength(itemid);
item = (IndexTuple) PageGetItem(origpage, itemid);
- Assert(BTreeTupleGetNAtts(item, rel) == indnkeyatts);
+ Assert(BTreeTupleGetNAtts(item, rel) > 0);
+ Assert(BTreeTupleGetNAtts(item, rel) <= indnkeyatts);
if (PageAddItem(rightpage, (Item) item, itemsz, rightoff,
false, false) == InvalidOffsetNumber)
{
/*
* The "high key" for the new left page will be the first key that's going
- * to go into the new right page. This might be either the existing data
- * item at position firstright, or the incoming tuple.
+ * to go into the new right page, or possibly a truncated version if this
+ * is a leaf page split. This might be either the existing data item at
+ * position firstright, or the incoming tuple.
+ *
+ * The high key for the left page is formed using the first item on the
+ * right page, which may seem to be contrary to Lehman & Yao's approach of
+ * using the left page's last item as its new high key when splitting on
+ * the leaf level. It isn't, though: suffix truncation will leave the
+ * left page's high key fully equal to the last item on the left page when
+ * two tuples with equal key values (excluding heap TID) enclose the split
+ * point. It isn't actually necessary for a new leaf high key to be equal
+ * to the last item on the left for the L&Y "subtree" invariant to hold.
+ * It's sufficient to make sure that the new leaf high key is strictly
+ * less than the first item on the right leaf page, and greater than or
+ * equal to (not necessarily equal to) the last item on the left leaf
+ * page.
+ *
+ * In other words, when suffix truncation isn't possible, L&Y's exact
+ * approach to leaf splits is taken. (Actually, even that is slightly
+ * inaccurate. A tuple with all the keys from firstright but the heap TID
+ * from lastleft will be used as the new high key, since the last left
+ * tuple could be physically larger despite being opclass-equal in respect
+ * of all attributes prior to the heap TID attribute.)
*/
leftoff = P_HIKEY;
if (!newitemonleft && newitemoff == firstright)
}
/*
- * Truncate non-key (INCLUDE) attributes of the high key item before
- * inserting it on the left page. This only needs to happen at the leaf
+ * Truncate unneeded key and non-key attributes of the high key item
+ * before inserting it on the left page. This can only happen at the leaf
* level, since in general all pivot tuple values originate from leaf
- * level high keys. This isn't just about avoiding unnecessary work,
- * though; truncating unneeded key attributes (more aggressive suffix
- * truncation) can only be performed at the leaf level anyway. This is
- * because a pivot tuple in a grandparent page must guide a search not
- * only to the correct parent page, but also to the correct leaf page.
+ * level high keys. A pivot tuple in a grandparent page must guide a
+ * search not only to the correct parent page, but also to the correct
+ * leaf page.
*/
- if (indnatts != indnkeyatts && isleaf)
+ if (isleaf && (itup_key->heapkeyspace || indnatts != indnkeyatts))
{
- lefthikey = _bt_nonkey_truncate(rel, item);
+ IndexTuple lastleft;
+
+ /*
+ * Determine which tuple will become the last on the left page. This
+ * is needed to decide how many attributes from the first item on the
+ * right page must remain in the new high key for the left page.
+ */
+ if (newitemonleft && newitemoff == firstright)
+ {
+ /* incoming tuple will become last on left page */
+ lastleft = newitem;
+ }
+ else
+ {
+ OffsetNumber lastleftoff;
+
+ /* item just before firstright will become last on left page */
+ lastleftoff = OffsetNumberPrev(firstright);
+ Assert(lastleftoff >= P_FIRSTDATAKEY(oopaque));
+ itemid = PageGetItemId(origpage, lastleftoff);
+ lastleft = (IndexTuple) PageGetItem(origpage, itemid);
+ }
+
+ Assert(lastleft != item);
+ lefthikey = _bt_truncate(rel, lastleft, item, itup_key);
itemsz = IndexTupleSize(lefthikey);
itemsz = MAXALIGN(itemsz);
}
else
lefthikey = item;
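/*
 * Illustrative sketch of the attribute-count decision behind the
 * _bt_truncate() call above (hypothetical helper name; the real logic lives
 * in nbtutils.c and is not shown in this hunk).  The new pivot keeps one
 * more attribute than the number of leading key attributes on which
 * lastleft and firstright are opclass-equal; a result greater than
 * indnkeyatts would mean that even the heap TID must be kept.
 */
static int
sketch_keep_natts(Relation rel, IndexTuple lastleft, IndexTuple firstright,
				  BTScanInsert itup_key)
{
	int			nkeyatts = IndexRelationGetNumberOfKeyAttributes(rel);
	TupleDesc	itupdesc = RelationGetDescr(rel);
	ScanKey		scankey = itup_key->scankeys;
	int			keepnatts = 1;
	int			attnum;

	for (attnum = 1; attnum <= nkeyatts; attnum++, scankey++)
	{
		Datum		datum1,
					datum2;
		bool		isNull1,
					isNull2;

		datum1 = index_getattr(lastleft, attnum, itupdesc, &isNull1);
		datum2 = index_getattr(firstright, attnum, itupdesc, &isNull2);

		/* Stop at the first attribute that distinguishes the two tuples */
		if (isNull1 != isNull2)
			break;
		if (!isNull1 &&
			DatumGetInt32(FunctionCall2Coll(&scankey->sk_func,
											scankey->sk_collation,
											datum1, datum2)) != 0)
			break;

		keepnatts++;
	}

	return keepnatts;
}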
- Assert(BTreeTupleGetNAtts(lefthikey, rel) == indnkeyatts);
+ Assert(BTreeTupleGetNAtts(lefthikey, rel) > 0);
+ Assert(BTreeTupleGetNAtts(lefthikey, rel) <= indnkeyatts);
if (PageAddItem(leftpage, (Item) lefthikey, itemsz, leftoff,
false, false) == InvalidOffsetNumber)
{
xl_btree_split xlrec;
uint8 xlinfo;
XLogRecPtr recptr;
- bool loglhikey = false;
xlrec.level = ropaque->btpo.level;
xlrec.firstright = firstright;
if (newitemonleft)
XLogRegisterBufData(0, (char *) newitem, MAXALIGN(newitemsz));
- /* Log left page */
- if (!isleaf || indnatts != indnkeyatts)
- {
- /*
- * We must also log the left page's high key. There are two
- * reasons for that: right page's leftmost key is suppressed on
- * non-leaf levels and in covering indexes included columns are
- * truncated from high keys. Show it as belonging to the left
- * page buffer, so that it is not stored if XLogInsert decides it
- * needs a full-page image of the left page.
- */
- itemid = PageGetItemId(origpage, P_HIKEY);
- item = (IndexTuple) PageGetItem(origpage, itemid);
- XLogRegisterBufData(0, (char *) item, MAXALIGN(IndexTupleSize(item)));
- loglhikey = true;
- }
+ /* Log the left page's new high key */
+ itemid = PageGetItemId(origpage, P_HIKEY);
+ item = (IndexTuple) PageGetItem(origpage, itemid);
+ XLogRegisterBufData(0, (char *) item, MAXALIGN(IndexTupleSize(item)));
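+	/*
+	 * (The high key is logged unconditionally above: with suffix truncation
+	 * it can differ from the first item on the right page even in a plain
+	 * leaf split, so REDO can no longer reconstruct it from the right
+	 * page's contents alone.)
+	 */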
/*
* Log the contents of the right page in the format understood by
(char *) rightpage + ((PageHeader) rightpage)->pd_upper,
((PageHeader) rightpage)->pd_special - ((PageHeader) rightpage)->pd_upper);
- xlinfo = newitemonleft ?
- (loglhikey ? XLOG_BTREE_SPLIT_L_HIGHKEY : XLOG_BTREE_SPLIT_L) :
- (loglhikey ? XLOG_BTREE_SPLIT_R_HIGHKEY : XLOG_BTREE_SPLIT_R);
+ xlinfo = newitemonleft ? XLOG_BTREE_SPLIT_L : XLOG_BTREE_SPLIT_R;
recptr = XLogInsert(RM_BTREE_ID, xlinfo);
PageSetLSN(origpage, recptr);
_bt_relbuf(rel, pbuf);
}
- /* get high key from left page == lower bound for new right page */
+ /* get high key from left, a strict lower bound for new right page */
ritem = (IndexTuple) PageGetItem(page,
PageGetItemId(page, P_HIKEY));
RelationGetRelationName(rel), bknum, rbknum);
/* Recursively update the parent */
- _bt_insertonpg(rel, pbuf, buf, stack->bts_parent,
+ _bt_insertonpg(rel, NULL, pbuf, buf, stack->bts_parent,
new_item, stack->bts_offset + 1,
is_only);
START_CRIT_SECTION();
/* upgrade metapage if needed */
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
/* set btree special data */
/*
* insert the right page pointer into the new root page.
*/
- Assert(BTreeTupleGetNAtts(right_item, rel) ==
+ Assert(BTreeTupleGetNAtts(right_item, rel) > 0);
+ Assert(BTreeTupleGetNAtts(right_item, rel) <=
IndexRelationGetNumberOfKeyAttributes(rel));
if (PageAddItem(rootpage, (Item) right_item, right_item_sz, P_FIRSTKEY,
false, false) == InvalidOffsetNumber)
XLogRegisterBuffer(1, lbuf, REGBUF_STANDARD);
XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ md.version = metad->btm_version;
md.root = rootblknum;
md.level = metad->btm_level;
md.fastroot = rootblknum;
{
trunctuple = *itup;
trunctuple.t_info = sizeof(IndexTupleData);
+ /* Deliberately zero INDEX_ALT_TID_MASK bits */
BTreeTupleSetNAtts(&trunctuple, 0);
itup = &trunctuple;
itemsize = sizeof(IndexTupleData);
/*
* _bt_isequal - used in _bt_doinsert in check for duplicates.
*
- * This is very similar to _bt_compare, except for NULL handling.
- * Rule is simple: NOT_NULL not equal NULL, NULL not equal NULL too.
+ * This is very similar to _bt_compare, except for NULL and negative infinity
+ * handling. Rule is simple: NOT_NULL not equal NULL, NULL not equal NULL too.
*/
static bool
_bt_isequal(TupleDesc itupdesc, BTScanInsert itup_key, Page page,
/* Better be comparing to a non-pivot item */
Assert(P_ISLEAF((BTPageOpaque) PageGetSpecialPointer(page)));
Assert(offnum >= P_FIRSTDATAKEY((BTPageOpaque) PageGetSpecialPointer(page)));
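+	/* Caller is expected to have unset scantid while checking uniqueness */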
+ Assert(itup_key->scantid == NULL);
scankey = itup_key->scankeys;
itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
#include "storage/predicate.h"
#include "utils/snapmgr.h"
-static void _bt_cachemetadata(Relation rel, BTMetaPageData *metad);
+static void _bt_cachemetadata(Relation rel, BTMetaPageData *input);
+static BTMetaPageData *_bt_getmeta(Relation rel, Buffer metabuf);
static bool _bt_mark_page_halfdead(Relation rel, Buffer buf, BTStack stack);
static bool _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf,
bool *rightsib_empty);
}
/*
- * _bt_upgrademetapage() -- Upgrade a meta-page from an old format to the new.
+ * _bt_upgrademetapage() -- Upgrade a meta-page from an old format to version
+ * 3, the last version that can be updated without broadly affecting
+ * on-disk compatibility. (A REINDEX is required to upgrade to v4.)
*
* This routine does purely in-memory image upgrade. Caller is
* responsible for locking, WAL-logging etc.
/* It must be really a meta page of upgradable version */
Assert(metaopaque->btpo_flags & BTP_META);
- Assert(metad->btm_version < BTREE_VERSION);
+ Assert(metad->btm_version < BTREE_NOVAC_VERSION);
Assert(metad->btm_version >= BTREE_MIN_VERSION);
/* Set version number and fill extra fields added into version 3 */
- metad->btm_version = BTREE_VERSION;
+ metad->btm_version = BTREE_NOVAC_VERSION;
metad->btm_oldest_btpo_xact = InvalidTransactionId;
metad->btm_last_cleanup_num_heap_tuples = -1.0;
}
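/*
 * For reference, the version constants used throughout these routines are
 * assumed to be defined in access/nbtree.h roughly as follows (not shown in
 * this hunk):
 *
 *		#define BTREE_VERSION		4	current version; heap TID is a key
 *		#define BTREE_NOVAC_VERSION	3	oldest version with the
 *										cleanup-related metapage fields
 *		#define BTREE_MIN_VERSION	2	oldest supported on-disk version
 */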
/*
- * Cache metadata from meta page to rel->rd_amcache.
+ * Cache metadata from the input meta page into rel->rd_amcache.
*/
static void
-_bt_cachemetadata(Relation rel, BTMetaPageData *metad)
+_bt_cachemetadata(Relation rel, BTMetaPageData *input)
{
+ BTMetaPageData *cached_metad;
+
/* We assume rel->rd_amcache was already freed by caller */
Assert(rel->rd_amcache == NULL);
rel->rd_amcache = MemoryContextAlloc(rel->rd_indexcxt,
sizeof(BTMetaPageData));
- /*
- * Meta page should be of supported version (should be already checked by
- * caller).
- */
- Assert(metad->btm_version >= BTREE_MIN_VERSION &&
- metad->btm_version <= BTREE_VERSION);
+ /* Meta page should be of supported version */
+ Assert(input->btm_version >= BTREE_MIN_VERSION &&
+ input->btm_version <= BTREE_VERSION);
- if (metad->btm_version == BTREE_VERSION)
+ cached_metad = (BTMetaPageData *) rel->rd_amcache;
+ if (input->btm_version >= BTREE_NOVAC_VERSION)
{
- /* Last version of meta-data, no need to upgrade */
- memcpy(rel->rd_amcache, metad, sizeof(BTMetaPageData));
+ /* Version with compatible meta-data, no need to upgrade */
+ memcpy(cached_metad, input, sizeof(BTMetaPageData));
}
else
{
- BTMetaPageData *cached_metad = (BTMetaPageData *) rel->rd_amcache;
-
/*
* Upgrade meta-data: copy available information from meta-page and
* fill new fields with default values.
+ *
+ * Note that we cannot upgrade to version 4+ without a REINDEX, since
+ * extensive on-disk changes are required.
*/
- memcpy(rel->rd_amcache, metad, offsetof(BTMetaPageData, btm_oldest_btpo_xact));
- cached_metad->btm_version = BTREE_VERSION;
+ memcpy(cached_metad, input, offsetof(BTMetaPageData, btm_oldest_btpo_xact));
+ cached_metad->btm_version = BTREE_NOVAC_VERSION;
cached_metad->btm_oldest_btpo_xact = InvalidTransactionId;
cached_metad->btm_last_cleanup_num_heap_tuples = -1.0;
}
}
+/*
+ * Get metadata from a share-locked buffer containing the metapage, while
+ * performing
+ * standard sanity checks. Sanity checks here must match _bt_getroot().
+ */
+static BTMetaPageData *
+_bt_getmeta(Relation rel, Buffer metabuf)
+{
+ Page metapg;
+ BTPageOpaque metaopaque;
+ BTMetaPageData *metad;
+
+ metapg = BufferGetPage(metabuf);
+ metaopaque = (BTPageOpaque) PageGetSpecialPointer(metapg);
+ metad = BTPageGetMeta(metapg);
+
+ /* sanity-check the metapage */
+ if (!P_ISMETA(metaopaque) ||
+ metad->btm_magic != BTREE_MAGIC)
+ ereport(ERROR,
+ (errcode(ERRCODE_INDEX_CORRUPTED),
+ errmsg("index \"%s\" is not a btree",
+ RelationGetRelationName(rel))));
+
+ if (metad->btm_version < BTREE_MIN_VERSION ||
+ metad->btm_version > BTREE_VERSION)
+ ereport(ERROR,
+ (errcode(ERRCODE_INDEX_CORRUPTED),
+ errmsg("version mismatch in index \"%s\": file version %d, "
+ "current version %d, minimal supported version %d",
+ RelationGetRelationName(rel),
+ metad->btm_version, BTREE_VERSION, BTREE_MIN_VERSION)));
+
+ return metad;
+}
+
/*
* _bt_update_meta_cleanup_info() -- Update cleanup-related information in
* the metapage.
metad = BTPageGetMeta(metapg);
/* outdated version of metapage always needs rewrite */
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
needsRewrite = true;
else if (metad->btm_oldest_btpo_xact != oldestBtpoXact ||
metad->btm_last_cleanup_num_heap_tuples != numHeapTuples)
START_CRIT_SECTION();
/* upgrade meta-page if needed */
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
/* update cleanup-related information */
XLogBeginInsert();
XLogRegisterBuffer(0, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ md.version = metad->btm_version;
md.root = metad->btm_root;
md.level = metad->btm_level;
md.fastroot = metad->btm_fastroot;
START_CRIT_SECTION();
/* upgrade metapage if needed */
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
metad->btm_root = rootblkno;
XLogRegisterBuffer(0, rootbuf, REGBUF_WILL_INIT);
XLogRegisterBuffer(2, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ md.version = metad->btm_version;
md.root = rootblkno;
md.level = 0;
md.fastroot = rootblkno;
{
BTMetaPageData *metad;
- /*
- * We can get what we need from the cached metapage data. If it's not
- * cached yet, load it. Sanity checks here must match _bt_getroot().
- */
if (rel->rd_amcache == NULL)
{
Buffer metabuf;
- Page metapg;
- BTPageOpaque metaopaque;
metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);
- metapg = BufferGetPage(metabuf);
- metaopaque = (BTPageOpaque) PageGetSpecialPointer(metapg);
- metad = BTPageGetMeta(metapg);
-
- /* sanity-check the metapage */
- if (!P_ISMETA(metaopaque) ||
- metad->btm_magic != BTREE_MAGIC)
- ereport(ERROR,
- (errcode(ERRCODE_INDEX_CORRUPTED),
- errmsg("index \"%s\" is not a btree",
- RelationGetRelationName(rel))));
-
- if (metad->btm_version < BTREE_MIN_VERSION ||
- metad->btm_version > BTREE_VERSION)
- ereport(ERROR,
- (errcode(ERRCODE_INDEX_CORRUPTED),
- errmsg("version mismatch in index \"%s\": file version %d, "
- "current version %d, minimal supported version %d",
- RelationGetRelationName(rel),
- metad->btm_version, BTREE_VERSION, BTREE_MIN_VERSION)));
+ metad = _bt_getmeta(rel, metabuf);
/*
* If there's no root page yet, _bt_getroot() doesn't expect a cache
* Cache the metapage data for next time
*/
_bt_cachemetadata(rel, metad);
-
+ /* We shouldn't have cached it if any of these fail */
+ Assert(metad->btm_magic == BTREE_MAGIC);
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ Assert(metad->btm_fastroot != P_NONE);
_bt_relbuf(rel, metabuf);
}
+ /* Get cached metapage data */
metad = (BTMetaPageData *) rel->rd_amcache;
- /* We shouldn't have cached it if any of these fail */
- Assert(metad->btm_magic == BTREE_MAGIC);
- Assert(metad->btm_version == BTREE_VERSION);
- Assert(metad->btm_fastroot != P_NONE);
return metad->btm_fastlevel;
}
+/*
+ * _bt_heapkeyspace() -- is heap TID being treated as a key?
+ *
+ * This is used to determine the rules that must be used to descend a
+ * btree. Version 4 indexes treat heap TID as a tiebreaker attribute.
+ * pg_upgrade'd version 3 indexes need extra steps to preserve reasonable
+ * performance when a new duplicate (in the BTScanInsert sense) must be
+ * inserted among many leaf pages that are already full of such duplicates.
+ */
+bool
+_bt_heapkeyspace(Relation rel)
+{
+ BTMetaPageData *metad;
+
+ if (rel->rd_amcache == NULL)
+ {
+ Buffer metabuf;
+
+ metabuf = _bt_getbuf(rel, BTREE_METAPAGE, BT_READ);
+ metad = _bt_getmeta(rel, metabuf);
+
+ /*
+ * If there's no root page yet, _bt_getroot() doesn't expect a cache
+ * to be made, so just stop here. (XXX perhaps _bt_getroot() should
+ * be changed to allow this case.)
+ */
+ if (metad->btm_root == P_NONE)
+ {
+ uint32 btm_version = metad->btm_version;
+
+ _bt_relbuf(rel, metabuf);
+ return btm_version > BTREE_NOVAC_VERSION;
+ }
+
+ /*
+ * Cache the metapage data for next time
+ */
+ _bt_cachemetadata(rel, metad);
+ /* We shouldn't have cached it if any of these fail */
+ Assert(metad->btm_magic == BTREE_MAGIC);
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ Assert(metad->btm_fastroot != P_NONE);
+ _bt_relbuf(rel, metabuf);
+ }
+
+ /* Get cached metapage data */
+ metad = (BTMetaPageData *) rel->rd_amcache;
+
+ return metad->btm_version > BTREE_NOVAC_VERSION;
+}
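/*
 * Usage sketch (the caller is not shown in this hunk; this assumes the
 * _bt_mkscankey() convention of recording the result in the insertion scan
 * key, so that descents know whether a heap TID tiebreaker is available):
 *
 *		key->heapkeyspace = _bt_heapkeyspace(rel);
 *		key->scantid = key->heapkeyspace && itup != NULL ?
 *			BTreeTupleGetHeapTID(itup) : NULL;
 */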
+
/*
* _bt_checkpage() -- Verify that a freshly-read page looks sane.
*/
* right sibling.
*
* "child" is the leaf page we wish to delete, and "stack" is a search stack
- * leading to it (approximately). Note that we will update the stack
- * entry(s) to reflect current downlink positions --- this is essentially the
- * same as the corresponding step of splitting, and is not expected to affect
- * caller. The caller should initialize *target and *rightsib to the leaf
- * page and its right sibling.
+ * leading to it (in !heapkeyspace indexes it may actually lead to the leftmost
+ * leaf page whose high key matches that of the page to be deleted).  Note
+ * that we will update the stack entry(s) to reflect current downlink
+ * positions --- this is essentially the same as the corresponding step of
+ * splitting, and is not expected to affect caller. The caller should
+ * initialize *target and *rightsib to the leaf page and its right sibling.
*
* Note: it's OK to release page locks on any internal pages between the leaf
* and *topparent, because a safe deletion can't become unsafe due to
BlockNumber leftsib;
/*
- * Locate the downlink of "child" in the parent (updating the stack entry
- * if needed)
+ * Locate the downlink of "child" in the parent, updating the stack entry
+ * if needed. This is how !heapkeyspace indexes deal with having
+ * non-unique high keys in leaf level pages. Even heapkeyspace indexes
+ * can have a stale stack due to insertions into the parent.
*/
stack->bts_btentry = child;
pbuf = _bt_getstackbuf(rel, stack);
{
/*
* We need an approximate pointer to the page's parent page. We
- * use the standard search mechanism to search for the page's high
- * key; this will give us a link to either the current parent or
- * someplace to its left (if there are multiple equal high keys).
+ * use a variant of the standard search mechanism to search for
+ * the page's high key; this will give us a link to either the
+ * current parent or someplace to its left (if there are multiple
+ * equal high keys, which is possible with !heapkeyspace indexes).
*
* Also check if this is the right-half of an incomplete split
* (see comment above).
/* we need an insertion scan key for the search, so build one */
itup_key = _bt_mkscankey(rel, targetkey);
- /* get stack to leaf page by searching index */
+ /* find the leftmost leaf page with matching pivot/high key */
+ itup_key->pivotsearch = true;
stack = _bt_search(rel, itup_key, &lbuf, BT_READ, NULL);
/* don't need a lock or second pin on the page */
_bt_relbuf(rel, lbuf);
if (BufferIsValid(metabuf))
{
/* upgrade metapage if needed */
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
_bt_upgrademetapage(metapg);
metad->btm_fastroot = rightsib;
metad->btm_fastlevel = targetlevel;
{
XLogRegisterBuffer(4, metabuf, REGBUF_WILL_INIT | REGBUF_STANDARD);
+ Assert(metad->btm_version >= BTREE_NOVAC_VERSION);
+ xlmeta.version = metad->btm_version;
xlmeta.root = metad->btm_root;
xlmeta.level = metad->btm_level;
xlmeta.fastroot = metad->btm_fastroot;
metapg = BufferGetPage(metabuf);
metad = BTPageGetMeta(metapg);
- if (metad->btm_version < BTREE_VERSION)
+ if (metad->btm_version < BTREE_NOVAC_VERSION)
{
/*
* Do cleanup if metapage needs upgrade, because we don't have
* downlink (block) to uniquely identify the index entry, in case it
* moves right while we're working lower in the tree. See the paper
* by Lehman and Yao for how this is detected and handled. (We use the
- * child link to disambiguate duplicate keys in the index -- Lehman
- * and Yao disallow duplicate keys.)
+ * child link during the second half of a page split -- if the caller
+ * ends up splitting the child, it usually inserts a new pivot tuple
+ * for the child's new right sibling immediately after the offset
+ * originally recorded here in bts_offset.  The downlink block will be
+ * needed to check whether bts_offset still points to this same pivot
+ * tuple.)
*/
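	/*
	 * (Sketch of how the recorded block is later consumed, assuming the
	 * existing _bt_getstackbuf() behavior: the parent page is re-read and
	 * searched, starting near bts_offset, for the pivot tuple whose downlink
	 * equals bts_btentry, moving right across the parent level if the parent
	 * itself has since split.)
	 */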
new_stack = (BTStack) palloc(sizeof(BTStackData));
new_stack->bts_blkno = par_blkno;
/*
* When nextkey = false (normal case): if the scan key that brought us to
* this page is > the high key stored on the page, then the page has split
- * and we need to move right. (If the scan key is equal to the high key,
- * we might or might not need to move right; have to scan the page first
- * anyway.)
+ * and we need to move right. (pg_upgrade'd !heapkeyspace indexes could
+ * have some duplicates to the right as well as the left, but that's
+ * something that's only ever dealt with on the leaf level, after
+ * _bt_search has found an initial leaf page.)
*
* When nextkey = true: move right if the scan key is >= page's high key.
+ * (Note that key.scantid cannot be set in this case.)
*
* The page could even have split more than once, so scan as far as
* needed.
int32 result,
cmpval;
+ /* Requesting nextkey semantics while using scantid seems nonsensical */
+ Assert(!key->nextkey || key->scantid == NULL);
+
page = BufferGetPage(buf);
opaque = (BTPageOpaque) PageGetSpecialPointer(page);
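	/*
	 * Sketch of how the rule above is assumed to be applied (the remainder
	 * of the move-right loop is not shown in this hunk): with nextkey
	 * semantics we move right when the scan key is >= the high key,
	 * otherwise only when it is strictly greater:
	 *
	 *		cmpval = key->nextkey ? 0 : 1;
	 *		if (P_IGNORE(opaque) ||
	 *			_bt_compare(rel, key, page, P_HIKEY) >= cmpval)
	 *			... step right to opaque->btpo_next and recheck ...
	 */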
TupleDesc itupdesc = RelationGetDescr(rel);