Bo Peng [Tue, 19 May 2020 09:32:10 +0000 (18:32 +0900)]
Prepare 4.0.9
Bo Peng [Mon, 18 May 2020 20:16:27 +0000 (05:16 +0900)]
Doc: Add release-note.
Tatsuo Ishii [Sat, 16 May 2020 00:05:56 +0000 (09:05 +0900)]
Remove more duplicate global data.
This causes a link error on some systems (Fedora 32).
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2020-April/003593.html
Muhammad Usama [Tue, 5 May 2020 19:30:33 +0000 (00:30 +0500)]
Fix:
0000608: pgpool ssl front end accept all ciphers. not working as expected.
The SSL cipher_list needed to be set on the server-side SSL context
instead of the client-side context.
Tatsuo Ishii [Sun, 26 Apr 2020 02:20:06 +0000 (11:20 +0900)]
Downgrade too verbose authentication logs.
When frontend authentication succeeds with a clear text password or md5
authentication, a message is emitted with log priority LOG. This floods
the Pgpool-II log on busy systems. To fix this, downgrade the LOG message
to DEBUG1. Note that the CERT auth related log level remains
LOG: downgrading it causes regression tests 024 and 029 to fail.
Discussion:
https://www.pgpool.net/mantisbt/view.php?id=606
https://www.pgpool.net/pipermail/pgpool-general/2020-April/007064.html
Tatsuo Ishii [Sun, 26 Apr 2020 01:22:22 +0000 (10:22 +0900)]
Remove duplicate global data.
This causes a link error on some systems (Fedora 32).
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2020-April/003593.html
Tatsuo Ishii [Thu, 23 Apr 2020 12:32:28 +0000 (21:32 +0900)]
Fix unnecessary checks.
Patch contributed by sherlockcpp.
Discussion: https://www.pgpool.net/pipermail/pgpool-general/2020-April/007062.html
Tatsuo Ishii [Thu, 23 Apr 2020 05:39:35 +0000 (14:39 +0900)]
Doc: fix typo.
Patch contributed by Umar Hayat.
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2020-April/003587.html
Muhammad Usama [Thu, 16 Apr 2020 10:01:30 +0000 (15:01 +0500)]
Fix for segmentation fault in PCP processor: Bug 598
The crash was caused by an unconditional pfree() on buf without verifying
whether it was actually allocated. Freeing the buffer was not actually
required in the first place, since the associated MemoryContext gets reset
anyway after each loop iteration.
reference: https://www.pgpool.net/mantisbt/view.php?id=598
Tatsuo Ishii [Wed, 15 Apr 2020 04:16:15 +0000 (13:16 +0900)]
Fix a warning message that is never output in watchdog.
Patch provided by sherlockcpp.
Discussion: https://www.pgpool.net/pipermail/pgpool-general/2020-April/007014.html
Muhammad Usama [Tue, 14 Apr 2020 12:19:16 +0000 (17:19 +0500)]
Fix for Coverity scan defect:
1424125 Resource leak
Tatsuo Ishii [Sun, 12 Apr 2020 09:15:05 +0000 (18:15 +0900)]
Doc: fix typo in failover.sgml.
Patch provided by sherlockcpp.
Tatsuo Ishii [Sat, 11 Apr 2020 05:34:40 +0000 (14:34 +0900)]
Fix pool show command code.
Enhance the common function send_complete_and_ready() so that it uses
local variables in a more sane way.
Also fix an unnecessary limitation on the number of rows affected by the
command.
Patch provided by sherlockcpp and Tatsuo Ishii.
Muhammad Usama [Tue, 7 Apr 2020 16:01:16 +0000 (21:01 +0500)]
Minor improvement in configure script
While checking for libpq, the configure script was not honoring the LDFLAGS
settings. This sometimes breaks configure when libraries are not present at
the default locations.
Bo Peng [Sun, 5 Apr 2020 16:10:37 +0000 (01:10 +0900)]
Doc: fix document typos.
Patch is provided by sherlockcpp@foxmail.com.
Muhammad Usama [Mon, 6 Apr 2020 09:12:00 +0000 (14:12 +0500)]
Fix for [pgpool-general: 6951] Fix-condition-tok-is-always-true
Fix contributed by: sherlockcpp <sherlockcpp@foxmail.com>
Tatsuo Ishii [Tue, 31 Mar 2020 07:59:57 +0000 (16:59 +0900)]
Add regression test for previous commit.
Tatsuo Ishii [Tue, 31 Mar 2020 07:58:01 +0000 (16:58 +0900)]
Fix bug with query cache.
If an Execute message is issued right after a Sync message and the query
cache hits, Pgpool-II hangs in inject_cached_message() because there's
no data to be read from the backend. The fix moves the code block which
checks data availability in the backend from the end of the loop to its
beginning. Also teach is_cache_empty() to call
pool_is_suspend_reading_from_frontend(): if reading from the frontend is
suspended, report that the cache is empty even if there's data in the
frontend cache, so that a Ready for Query message can be received from the backend.
Tatsuo Ishii [Fri, 13 Mar 2020 01:36:17 +0000 (10:36 +0900)]
Fix problems in watchdog processing json data.
Comment on the patch from the author:
In the watchdog source code (src/watchdog/wd_json_data.c), there are some instances of bad handling of values read from json data.
For example:
1) The boolean pool configuration settings "load_balance_mode" and "master_slave_mode" are read using json_get_int_value_for_key(), resulting in 4-bytes being written into their location within the POOL_CONFIG, yet (being bool) they are only 1-byte long. This corrupts the values of the structure members following them.
2) Similarly, when parsing node function json data, "Flags" is read using json_get_int_value_for_key(), resulting in 4-bytes being written into an "unsigned char flags" variable on the stack, overwriting 3-bytes of stack memory following it. On a big-endian system (e.g. Solaris-sparc or Linux for IBM Z), this causes regression test "013.watchdog_failover_require_consensus" to fail, since 0 is written into Flags, rather than the intended value which is in the least significant byte of the int value written.
Bug reported in:
https://www.pgpool.net/mantisbt/view.php?id=596
Patch author:
Greg Nancarrow (Fujitsu Australia)
Tatsuo Ishii [Thu, 12 Mar 2020 06:49:35 +0000 (15:49 +0900)]
Fix SCRAM auth handling bug.
Comment on the patch from the author:
The code is currently checking if "len <= 8", but len is in
network byte order (big-endian). It is surely meant to be checking
"message_length" instead, which is "len" converted to host byte order
(see the previous line of code). Under (Intel) Linux, which is
little-endian, the value of "len" will be a large number and thus
render the current error condition check ineffective [for example, in
one case that I debugged, the value of len was 134217728
(0x08000000), meaning that message_length was actually 8].
Additionally, it seems the "<=" check should actually be "<", based on
the length values that I see when debugging this code.
Bug reported in:
https://www.pgpool.net/mantisbt/view.php?id=595
Patch author:
Greg Nancarrow (Fujitsu Australia)
Tatsuo Ishii [Wed, 11 Mar 2020 04:05:43 +0000 (13:05 +0900)]
Fix possible data inconsistency in native replication mode.
There is a long-standing bug in native replication mode. As reported
in pgpool-general, it is possible to lose sync of the databases if a slave
DB's postgres process is killed. This is due to an oversight in
read_packets_and_process().
In replication mode, if a slave server's postgres is killed, the local
backend status is set to down:
*(my_backend_status[i]) = CON_DOWN;
So the next DDL/DML in the same session is only issued to the master node (and
other slaves if there are multiple slave nodes). Of course this leads
to a serious data inconsistency problem, because in native replication
mode all DB nodes must receive DDL/DML at the same time.
The fix is to trigger failover in this case.
Discussions:
https://www.pgpool.net/pipermail/pgpool-general/2020-March/006954.html
https://www.pgpool.net/pipermail/pgpool-hackers/2020-March/003540.html
pengbo@sraoss.co.jp [Thu, 5 Mar 2020 22:46:18 +0000 (07:46 +0900)]
Fix watchdog ping probes failing with long hostnames due to a small buffer.
Per bug 516.
Tatsuo Ishii [Wed, 26 Feb 2020 01:31:37 +0000 (10:31 +0900)]
Fix last status changed timestamp not being set.
If there is no status file, or -D is specified when starting up
Pgpool-II, the last status changed timestamp was not set, and the
pcp_node_info command, for example, did not show the proper timestamp;
it showed 1970/1/1 instead, which is the Unix timestamp epoch.
Note that the correct timestamp is set once a client connects to Pgpool-II,
because at that time the status changes from "waiting" to "up". So
the phenomenon is only observed before any client connects to the
server.
Bo Peng [Sat, 22 Feb 2020 00:59:10 +0000 (09:59 +0900)]
Update copyright year.
Bo Peng [Wed, 19 Feb 2020 07:43:50 +0000 (16:43 +0900)]
Update pgpool.spec file.
Bo Peng [Wed, 19 Feb 2020 06:36:50 +0000 (15:36 +0900)]
Prepare 4.0.8.
Bo Peng [Wed, 19 Feb 2020 06:26:11 +0000 (15:26 +0900)]
Doc: Update doc version.
Bo Peng [Wed, 19 Feb 2020 06:19:53 +0000 (15:19 +0900)]
Doc: add release-notes.
Bo Peng [Wed, 19 Feb 2020 06:12:35 +0000 (15:12 +0900)]
Doc: update japanese documentation.
Muhammad Usama [Tue, 18 Feb 2020 12:23:10 +0000 (17:23 +0500)]
Disallowing the quorum aware failover option for the native replication mode.
In native replication mode, Pgpool-II is responsible for replicating the data
on all backend nodes. If a node becomes quarantined, Pgpool-II stops
sending writes to that node, and when the quarantined node becomes available
again there is no way to figure out which portion of the data was not replicated
during the quarantine period. This is dangerous since it can cause
data inconsistency.
So, as per the discussion, we reached the conclusion to disallow failover
require consensus and quorum existence with native replication mode, so that
a backend node never gets quarantined when Pgpool-II is configured in
replication mode.
Bo Peng [Tue, 18 Feb 2020 08:43:23 +0000 (17:43 +0900)]
Fix rewriting query errors in native replication mode.
per bug551.
Tatsuo Ishii [Tue, 18 Feb 2020 01:28:39 +0000 (10:28 +0900)]
Doc: clarify the session disconnection conditions while failover.
Bo Peng [Mon, 17 Feb 2020 13:14:53 +0000 (22:14 +0900)]
Doc: Fix documentation error regarding delegate_IP.
Bo Peng [Wed, 12 Feb 2020 01:03:13 +0000 (10:03 +0900)]
Fix child process segfault after reload if "health_check_database" is empty.
per bug571.
Tatsuo Ishii [Sat, 8 Feb 2020 22:37:57 +0000 (07:37 +0900)]
Fix merge error in the previous commit.
Tatsuo Ishii [Sat, 8 Feb 2020 10:39:14 +0000 (19:39 +0900)]
Fix oversight to adopt PostgreSQL 12.
From PostgreSQL 12, the WAL directory is renamed to "pg_wal". pgpool_setup
should have dealt with this.
Takuma Hoshiai [Tue, 4 Feb 2020 03:06:15 +0000 (12:06 +0900)]
Fix error message typo
Bo Peng [Tue, 4 Feb 2020 00:04:05 +0000 (09:04 +0900)]
Doc: fix doc typo.
Bo Peng [Tue, 28 Jan 2020 06:20:46 +0000 (15:20 +0900)]
Feature: Check if socket file exists at startup and remove them if PID file doesn't exist.
If previous socket files exist, pgpool startup fails due to a bind() failure.
This commit removes the socket files at startup if the PID file doesn't exist.
Also, I found that some messages are effectively ignored because the log system is not yet initialized at that point.
I changed them to standard fprintf(stderr...).
Bo Peng [Tue, 28 Jan 2020 02:15:15 +0000 (11:15 +0900)]
Doc: change the follow_master_command %M %H parameter's order.
Tatsuo Ishii [Sun, 19 Jan 2020 11:36:48 +0000 (20:36 +0900)]
Fix occasional 008.dbredirect failure.
It seems "sleep 1" after reload is not enough. Let's sleep 10.
Tatsuo Ishii [Thu, 26 Dec 2019 05:13:08 +0000 (14:13 +0900)]
Fix occasional regression test failure of 008.dbredirect.
In the test, pgbench -i is performed. It is possible that this brings
streaming replication delay, which in turn disturbs the app/db
redirecting tests. The fix is to disable delay_threshold.
Tatsuo Ishii [Sat, 11 Jan 2020 11:48:40 +0000 (20:48 +0900)]
Fix 001.load_balance failure caused by previous commit.
Tatsuo Ishii [Fri, 10 Jan 2020 05:50:38 +0000 (14:50 +0900)]
Attempt to fix 001.load_balance test failure.
It is likely that sleep time after reload was not enough. Change them
from 1 second to 10 seconds.
Tatsuo Ishii [Sat, 4 Jan 2020 01:30:22 +0000 (10:30 +0900)]
More fix for occasional regression test 003.failover failure.
Comparison between the show pool_nodes result and the expected file failed
because of replication delay (the expected file sets the replication
delay column to 0). The fix is to disable replication delay.
Tatsuo Ishii [Tue, 31 Dec 2019 03:06:50 +0000 (12:06 +0900)]
Fix occasional regression test 003.failover failure.
The test compares the result of show pool_nodes against a pre-computed
expected text file. After the primary node goes down and the old standby
promotes to primary, it is possible that the streaming replication worker
process stores the replication state (async, streaming) in shared
memory before the failover but fails to update the state after the
failover. The fix is to replace "async" and "streaming" with space
characters in the result file so that it matches the expected file.
Tatsuo Ishii [Tue, 24 Dec 2019 12:27:42 +0000 (21:27 +0900)]
Fix occasional regression test failure of 008.dbredirect.
In the test several databases are created, and subsequent tests expect
that those databases have already been replicated. But it is possible
that the replication has not been done at that point, which makes the
test fail. The fix is to add a check for the existence of the database;
if it is not yet replicated, sleep for a while and retry (up to 5 times).
Tatsuo Ishii [Thu, 12 Dec 2019 07:33:18 +0000 (16:33 +0900)]
Fix replication delay worker segfault when application_name is an empty string.
The process calls do_query() to obtain the query result from the
pg_stat_replication view. If the user sets application_name to an empty
string, the result data row packet length will be 0. However,
do_query() did not consider the length == 0 case, which resulted in
passing a NULL pointer to strcmp(), called from the worker
process. That means the bug is not specific to this case (a new
feature added in Pgpool-II 4.1): it potentially affects many other
places where do_query() gets called, although it had not been reported
in the field. So this fix should be applied to all supported branches.
Per bug 565.
Takuma Hoshiai [Tue, 3 Dec 2019 01:53:09 +0000 (10:53 +0900)]
Fix a display of parameter when using PGPOOL SHOW.
When showing the backend_flag parameter with the PGPOOL SHOW command,
the 'ALWAYS_MASTER' setting was not displayed.
Bo Peng [Thu, 21 Nov 2019 06:58:24 +0000 (15:58 +0900)]
Fix pgpool_setup failure.
Bo Peng [Mon, 18 Nov 2019 09:02:14 +0000 (18:02 +0900)]
Fix missing syslog setting.
Bo Peng [Mon, 18 Nov 2019 01:06:49 +0000 (10:06 +0900)]
Prepare 4.0.7-2.
Bo Peng [Fri, 15 Nov 2019 07:39:50 +0000 (16:39 +0900)]
Fix the missing syslog in configure file.
per bug557.
Bo Peng [Thu, 31 Oct 2019 00:59:55 +0000 (09:59 +0900)]
Doc: update release-note.
Bo Peng [Thu, 31 Oct 2019 00:30:50 +0000 (09:30 +0900)]
Prepare 4.0.7.
Bo Peng [Thu, 31 Oct 2019 00:23:21 +0000 (09:23 +0900)]
Doc: update doc version.
Bo Peng [Thu, 31 Oct 2019 00:11:03 +0000 (09:11 +0900)]
Fix incorrect query rewrite in replication mode.
Bo Peng [Wed, 30 Oct 2019 23:56:15 +0000 (08:56 +0900)]
Doc: Add release-note.
Bo Peng [Wed, 30 Oct 2019 09:10:04 +0000 (18:10 +0900)]
Add RHEL 8 support.
Bo Peng [Fri, 25 Oct 2019 08:22:22 +0000 (17:22 +0900)]
Fix incorrect query rewrite in replication mode.
For example:
- CREATE TABLE t1 AS SELECT now();
- SELECT now() INTO t1;
- WITH ins AS ( INSERT INTO t1 SELECT now()) SELECT;
Tatsuo Ishii [Mon, 21 Oct 2019 04:44:37 +0000 (13:44 +0900)]
Fix health check time out.
Health check timeout could happen in several places:
1) connect system call
2) select system call
3) read system call
Case 1) was handled correctly, but in 2) and 3) it was possible to go into
an infinite loop in Pgpool-II 3.7 or later. This was due to a mistake made
when the health check process was split into a separate process in 3.7.
Back-patched to 3.7 and above.
Discussion:
https://www.pgpool.net/pipermail/pgpool-hackers/2019-October/003458.html
https://www.pgpool.net/pipermail/pgpool-hackers/2019-October/003459.html
Takuma Hoshiai [Tue, 15 Oct 2019 09:37:18 +0000 (18:37 +0900)]
Doc: add failover_command description
Add more explanation about the case where failover_command is executed
without a new master node. In this case, the special characters in
failover_command are replaced with fixed values.
Tatsuo Ishii [Tue, 15 Oct 2019 02:59:58 +0000 (11:59 +0900)]
Fix memory leaks pointed out by coverity.
Tatsuo Ishii [Fri, 11 Oct 2019 00:52:53 +0000 (09:52 +0900)]
Doc: add note to trusted_servers.
"Please note that you should not assign PostgreSQL servers to this
parameter." This should have been noted since there is at least one
user who actually did it.
Muhammad Usama [Mon, 14 Oct 2019 10:03:08 +0000 (15:03 +0500)]
Fix for miscellaneous watchdog issues.
The commit takes care of the following reports in watchdog:
-- pgpool-general: 6672 Query (watchdog split brain)
-- 0000547: We need to do arping again after recovering from split-brain.
Basically, in a bid to solve these issues, the commit makes the
below-mentioned behavioral changes in watchdog.
1-- If life-check reports a watchdog node as dead but the watchdog core is still
able to connect and communicate with it without any problem, the watchdog core
was getting into a dilemma of whether to consider it a lost or an alive node.
Fix:
With this commit, the lost nodes reported by life-check (external or internal)
are treated as hard failures even when the node is reachable from the watchdog
core, and that node can only become alive again if it is restarted, or if
life-check informs that the node has become alive again.
The second type of node failure that can mark a watchdog node as lost occurs
when a watchdog node fails to respond to the messages sent to it, or frequent
errors occur on its socket. These errors are detected by the watchdog core
itself and get recovered automatically whenever the node becomes responsive
again. Apart from this, the commit also makes some improvements in the area of
detecting such internal errors.
2-- Standby nodes were too aggressive in reacting to a coup attempt
(when another watchdog node tries to become a master while a valid master
already exists in the cluster) or to the possibility of split-brain (when a
standby receives an "I AM MASTER" message from a node that is not the master
according to the standby node's records). In both of these situations, the
standby nodes used to re-join the master node in the hope of finding the
true master.
But that did not prove to be a very good strategy, since it may influence
the selection of the true master node when the true-master and
fake-master nodes get into a fight to retain the master node status:
one of the things the true and fake master compare is the number of connected
standby nodes, and if during the fight a standby leaves the current master
to re-join the cluster, the standby node count on the true master becomes
lower than actual, which hurts its chances of winning the election.
Fix:
The commit makes the standby nodes more laid-back: they actively reject
nodes that try to become a master in the presence of the true master,
and avoid re-joining the cluster until it is absolutely necessary.
3-- The third problem was the case of network partitioning or partial
life-check failure (when node A thinks node B is lost, but node B thinks
node A is not lost). In this case the kicked-out standby node was too
aggressive in trying to connect to the master or become a master itself,
which potentially put an unnecessary burden on the network and the
cluster nodes.
Fix:
The fix for this issue is to make the isolated node calmer and wait
between attempts to connect to the master or become a master. For that
purpose, a new WD_NETWORK_ISOLATION state is added to the state machine,
which just adds a delay between successive tries of becoming a master.
The node only goes into this state when it learns that it is marked as lost
on the current master, and it gets out of the state either when life-check
on the remote node reports that the node is alive again, or after 10 seconds
anyway.
Finally, because creating partial or complete network failure scenarios is
very difficult, the commit also adds a watchdog debug aid, similar to the
health-check debug aid, to simulate different scenarios.
To enable the watchdog debug aid, Pgpool-II needs to be compiled with the
WATCHDOG_DEBUG=1 flag ($ make WATCHDOG_DEBUG=1).
Once compiled with the debug aid enabled, you can put commands into the
watchdog_debug_requests file under pgpool_logdir to test different scenarios.
e.g.
$ echo "KILL_ALL_COMMUNICATION" > logdir/watchdog_debug_requests
$ echo "KILL_ALL_SENDERS" >> logdir/watchdog_debug_requests
The current list of commands supported by the watchdog debug aid:
DO_NOT_REPLY_TO_BEACON <= Standby node stops replying to master node beacon messages
while this line is present in the watchdog_debug_requests file
DO_NOT_SEND_BEACON <= Master node stops sending beacon messages to standby nodes
while this line is present in the watchdog_debug_requests file
KILL_ALL_COMMUNICATION <= Watchdog stops all communication with all nodes
while this line is present in the watchdog_debug_requests file
KILL_ALL_RECEIVERS <= Watchdog ignores messages from all nodes
while this line is present in the watchdog_debug_requests file
KILL_ALL_SENDERS <= Watchdog stops sending messages to all nodes
while this line is present in the watchdog_debug_requests file
Bo Peng [Thu, 10 Oct 2019 06:13:43 +0000 (15:13 +0900)]
Update changelog.
Bo Peng [Thu, 10 Oct 2019 05:52:49 +0000 (14:52 +0900)]
Update pgpool.spec to support PostgreSQL 12.
Bo Peng [Thu, 10 Oct 2019 04:56:51 +0000 (13:56 +0900)]
Update src/redhat/pgpool_socket_dir.patch.
Tatsuo Ishii [Thu, 10 Oct 2019 01:01:08 +0000 (10:01 +0900)]
Fix assorted ancient v2 protocol bugs.
- In the v2 code path, extract_message() pfrees memory which was returned
from pool_read_string(). This is plain wrong and could cause a segfault,
since the memory returned by it is managed by the pool_stream
modules.
- In the v2 code path, pool_process_notice_message_from_one_backend() added
"NOTICE:" to the log message. This is not necessary as that part is
already included in the message.
- In the v2 code path, pool_extract_error_message() did not prepare unread
data correctly. This caused subsequent
pool_process_notice_message_from_one_backend() calls to produce an empty
message, and read_kind_from_backend() to fail.
Takuma Hoshiai [Tue, 8 Oct 2019 07:01:41 +0000 (16:01 +0900)]
Fix extended query communication in do_query()
do_query() didn't send Describe message to PostgreSQL.
It used strcmp() instead of strcasecmp() when checking whether the query is a SELECT.
Takuma Hoshiai [Tue, 8 Oct 2019 04:07:32 +0000 (13:07 +0900)]
Fix problem that syslog_facility doesn't change on reload
The cause is a macro definition mistake. This fix unifies the macro
definitions and deletes old test code that used vsyslog().
Reported in bug 548.
Tatsuo Ishii [Fri, 4 Oct 2019 04:52:19 +0000 (13:52 +0900)]
Fix inappropriate ereport call in VALID_BACKEND.
The VALID_BACKEND (more precisely, pool_virtual_master_db_node_id) macro
emitted a message while pgpool is performing failover/failback:
ereport(WARNING,
(errmsg("failover/failback is in progress"),
errdetail("executing failover or failback on backend"),
errhint("In a moment you should be able to reconnect to the database")));
This could be called from within signal handlers, and
POOL_SETMASK(&BlockSig)/POOL_SETMASK(&UnBlockSig) was called to block
interrupts because ereport is not reentrant. However, it is possible
that callers have already called POOL_SETMASK, and this could result
in an unwanted signal unblock.
The fix is to remove the ereport and POOL_SETMASK calls altogether. This
removes the message above, but we have no choice.
I found the problem while investigating the regression
055.backend_all_down failure, but of course the bug could have bitten
users in other places.
Muhammad Usama [Thu, 3 Oct 2019 20:25:17 +0000 (01:25 +0500)]
Fix for Coverity warning '1395047 Resource leak'
Muhammad Usama [Thu, 3 Oct 2019 15:33:22 +0000 (20:33 +0500)]
Fix for Coverity warnings in pool_auth
Muhammad Usama [Thu, 3 Oct 2019 14:53:44 +0000 (19:53 +0500)]
Fix for Coverity warnings in watchdog and lifecheck
Tatsuo Ishii [Thu, 3 Oct 2019 12:33:09 +0000 (21:33 +0900)]
Fix signal unblock leak in failover.
When a failover event occurs, register_node_operation_request() gets
called to en-queue failover/failback requests. If the request queue is
full, this function returns false after unlocking the semaphore, but it
forgot to restore the signal mask. This leads to blocking all signals,
including SIGTERM, which makes pgpool fail to shut down.
Discussion: https://www.pgpool.net/pipermail/pgpool-hackers/2019-October/003449.html
Muhammad Usama [Sat, 28 Sep 2019 19:51:07 +0000 (00:51 +0500)]
Fix for bug-545: Quorum lost and not recovered
The master watchdog node was not adding a lost standby node to its list of
valid standby nodes after it was rediscovered by the lifecheck. The fix is
to ask the node to rejoin the master when it gets rediscovered by the
lifecheck.
As part of this commit, I have also added the watchdog data version and the
Pgpool-II version to the watchdog info packet, to make future extensions of
the watchdog messages easier.
Thanks to Guille (the reporter of this bug) for providing lots of help in testing the fix
Tatsuo Ishii [Wed, 25 Sep 2019 05:22:21 +0000 (14:22 +0900)]
Fix memory leak in replication mode.
Per coverity.
Tatsuo Ishii [Tue, 24 Sep 2019 23:49:48 +0000 (08:49 +0900)]
Fix memory leak while attempting to connect to backend.
If no backend is up and running, the memory for the copy of the startup
packet is leaked. This was introduced by commit
cdb49d3b7. Per Coverity.
Tatsuo Ishii [Tue, 24 Sep 2019 07:50:25 +0000 (16:50 +0900)]
Fix coverity warnings.
Tatsuo Ishii [Tue, 24 Sep 2019 06:26:45 +0000 (15:26 +0900)]
Fix coverity warnings.
Tatsuo Ishii [Tue, 6 Aug 2019 02:27:30 +0000 (11:27 +0900)]
Overhaul health check debug facility.
check_backend_down_request() in health_check.c is intended to simulate a
communication failure between the health check and a PostgreSQL backend
node, by creating a file containing lines like:
1	down
where the first field is the node id starting from 0, followed by a tab and
"down". When the health check process finds the file, it makes the health
check fail on node 1.
After the health check brings the node into down status,
check_backend_down_request() changes "down" to "already_down" to
prevent repeated node failures.
However, the question is whether this is necessary at all. I think
check_backend_down_request() should keep on reporting the down status,
and it should be called inside establish_persistent_connection(),
because the failing situation can be simulated better this way. For
example, currently the health check retry is not simulated, but the new
way can do it.
Moreover, in the current watchdog implementation, bringing a node into
quarantine state requires *two* detections of a node communication error.
Since check_backend_down_request() only allowed raising the node-down
event *once* (after the down state is changed to already_down), it was
impossible to test watchdog quarantine using
check_backend_down_request(). I changed check_backend_down_request()
so that it continues to raise the "down" event as long as the down request
file exists.
This commit enhances check_backend_down_request() as described above:
1) The caller of check_backend_down_request() is now
establish_persistent_connection(), rather than
do_health_check_child().
2) check_backend_down_request() does not change "down" to
"already_down" anymore. This means the second argument of
check_backend_down_request() is no longer useful; probably I
should remove the argument later on.
Tatsuo Ishii [Wed, 18 Sep 2019 01:51:28 +0000 (10:51 +0900)]
Fix uninitialized variable.
Per Coverity.
Tatsuo Ishii [Tue, 17 Sep 2019 22:39:15 +0000 (07:39 +0900)]
Fix compiler warning.
Tatsuo Ishii [Tue, 17 Sep 2019 22:36:38 +0000 (07:36 +0900)]
Fix compiler warnings.
Tatsuo Ishii [Mon, 16 Sep 2019 22:17:55 +0000 (07:17 +0900)]
Revert "Fix occasional query hang while processing DEALLOCATE."
This reverts commit 83c1988d3c8bdd0ecbdf6a3d28371febee556483.
Tatsuo Ishii [Mon, 16 Sep 2019 00:24:08 +0000 (09:24 +0900)]
Fix occasional query hang while processing DEALLOCATE.
When DEALLOCATE tries to remove a named statement, it inherits the
where_to_send map of the named statement in
where_to_send_deallocate(). However, it forgot to copy the load balance
node id from the query context of the named statement. This prevented the
query from being sent to the backend: if the target node id differs from
query_context->load_balance_node_id and from the primary node id,
pool_virtual_master_db_node_id (called as MASTER_NODE_ID)
returns the primary node id, and pool_send_and_wait(MASTER_NODE_ID)
ignores the request because VALID_BACKEND returns false in this case
(MASTER_NODE_ID = primary node id is not in the where_to_send map). As
a result, the following check_error() waits in vain for a response from the
backend.
The fix is to let where_to_send_deallocate() copy the load balance node id
from the query context of the previous named statement.
Per bug 546.
Tatsuo Ishii [Sun, 15 Sep 2019 13:39:18 +0000 (22:39 +0900)]
Fix segfault in certain case.
The scenario is something like:
1) a named statement is created.
2) DEALLOCATE removes it.
3) an erroneous query is executed.
In #2, the "sent message" for the named statement is removed but
"uncompleted_message" is left. Then after #3, in ReadyForQuery() the
uncompleted_message is added and removed. However, the storage for the
uncompleted_message has already been freed in #2, and this causes a
segfault.
The fix is, in SimpleQuery(), to set uncompleted_message to NULL if the
query is not a PREPARE command, so that ReadyForQuery() does not try to
remove the already removed message.
Per bug 546.
Here is a minimum test case.
'P' "_plan0x7f2d465db530" "SELECT 1" 0
'S'
'Y'
'Q' "DEALLOCATE _plan0x7f2d465db530"
'Y'
'Q' "CREATE INDEX users_auth_id_index ON non_existing_table ( auth_id )"
'Y'
'X'
Tatsuo Ishii [Thu, 12 Sep 2019 04:40:05 +0000 (13:40 +0900)]
Fix identical code used for different branches per Coverity.
Tatsuo Ishii [Thu, 12 Sep 2019 04:39:41 +0000 (13:39 +0900)]
Fix memory leak per Coverity.
Tatsuo Ishii [Tue, 10 Sep 2019 06:54:13 +0000 (15:54 +0900)]
Fix typo in fork_lifecheck_child().
Tatsuo Ishii [Tue, 10 Sep 2019 06:42:10 +0000 (15:42 +0900)]
Fix typo in fork_watchdog_child().
Tatsuo Ishii [Mon, 9 Sep 2019 02:37:24 +0000 (11:37 +0900)]
Doc: clarify that certificate authentication works only between the client and Pgpool-II.
Per complaint from: https://www.pgpool.net/pipermail/pgpool-general-jp/2019-September/001611.html
Tatsuo Ishii [Fri, 6 Sep 2019 07:31:07 +0000 (16:31 +0900)]
Fix memory leak.
Per Coverity.
Tatsuo Ishii [Fri, 6 Sep 2019 06:54:39 +0000 (15:54 +0900)]
Fix memory leak.
Per Coverity.
Tatsuo Ishii [Fri, 6 Sep 2019 06:24:09 +0000 (15:24 +0900)]
Fix uninitialized variable.
Probably harmless but bug is bug...
Per Coverity.
Bo Peng [Thu, 5 Sep 2019 06:46:36 +0000 (15:46 +0900)]
Doc: update example documentation.
Tatsuo Ishii [Tue, 3 Sep 2019 22:45:17 +0000 (07:45 +0900)]
Doc: mention that the VIP will not be brought up if a quorum does not exist.
Tatsuo Ishii [Sun, 1 Sep 2019 02:38:35 +0000 (11:38 +0900)]
Fix pgpool_setup to reflect the -p (baseport) to ORIGBASEPORT variable.
Otherwise, the shutdown script generated by pgpool_setup does not use
the proper port number for the netstat command.
Tatsuo Ishii [Wed, 28 Aug 2019 05:48:11 +0000 (14:48 +0900)]
Fix pgpool_setup to deal with PostgreSQL 9.1.
"---data-checksums" was unconditionally added to initdb's arg but
PostgreSQL 9.1's initdb does not have the option. To solve the issue,
internal variable $PGVERSION now represents "major version" * 100:
e.g. 120 for PostgreSQL 12.x (including 12beta), 91 for PostgreSQL
9.1.x, so that pgpool_setup can check if the option can be added to
initdb options.