<chapter id="example-configs">
<title>Configuration Examples</title>
- <sect1 id="example-basic">
- <title>Basic Configuration Example</title>
+ <sect1 id="example-basic">
+ <title>Basic Configuration Example</title>
+
+ <sect2 id="example-configs-begin">
+ <title>Let's Begin!</title>
+ <para>
+ First, we must learn how to install and configure <productname>Pgpool-II</productname> and database nodes before using replication.
+ </para>
+
+ <sect3 id="example-configs-begin-installing">
+ <title>Installing <productname>Pgpool-II</productname></title>
+ <para>
+       Installing <productname>Pgpool-II</productname> is very easy.
+       In the directory into which you have extracted the source tarball,
+       execute the following commands.
+ <programlisting>
+ $ ./configure
+ $ make
+ $ make install
+ </programlisting>
+       The <command>configure</command> script collects your system information
+       and uses it for the compilation procedure. You can pass command
+       line arguments to the <command>configure</command> script to change the default behavior,
+       such as the installation directory. <productname>Pgpool-II</productname>
+       will be installed to the <literal>/usr/local</literal> directory by default.
+ </para>
+ <para>
+       The <command>make</command> command compiles the source code, and
+       <command>make install</command> installs the executables.
+ You must have write permission on the installation directory.
+ In this tutorial, we will install <productname>Pgpool-II
+ </productname> in the default <literal>/usr/local</literal> directory.
+ </para>
+ <note>
+ <para>
+       <productname>Pgpool-II</productname> requires the <literal>libpq</literal>
+       library of <productname>PostgreSQL</productname> 7.4 or later (version 3 protocol).
+ </para>
+ </note>
+ <para>
+       If the <command>configure</command> script displays the following error message, the
+       <literal>libpq</literal> library may not be installed, or it is not of version 3:
+ <programlisting>
+ configure: error: libpq is not installed or libpq is old
+ </programlisting>
+ If the library is version 3, but the above message is still displayed, your
+ <literal>libpq</literal> library is probably not recognized by the <command>
+ configure</command> script.
+       The <command>configure</command> script searches for the <literal>libpq</literal>
+       library under <literal>/usr/local/pgsql</literal>. If you have installed
+       <productname>PostgreSQL</productname> in a directory other than <literal>/usr/local/pgsql</literal>, use the
+       <literal>--with-pgsql</literal>, or <literal>--with-pgsql-includedir</literal>
+       and <literal>--with-pgsql-libdir</literal> command line options when you
+       execute <command>configure</command>.
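+       For example (a sketch; <literal>/opt/pgsql</literal> here is just a
+       placeholder for your actual <productname>PostgreSQL</productname>
+       installation directory):
+ <programlisting>
+ $ ./configure --with-pgsql=/opt/pgsql
+ </programlisting>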
+ </para>
+ </sect3>
+
+ <sect3 id="example-configs-begin-config-files">
+ <title>Configuration Files</title>
+ <para>
+ <productname>Pgpool-II</productname> configuration parameters are saved in the
+       <literal>pgpool.conf</literal> file. The file uses a <literal>"parameter = value"
+       </literal> per-line format. When you install <productname>Pgpool-II</productname>,
+       <literal>pgpool.conf.sample</literal> is automatically created.
+       We recommend copying and renaming it to <literal>pgpool.conf</literal>, and
+       editing it as you like.
+ <programlisting>
+ $ cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf
+ </programlisting>
+       By default, <productname>Pgpool-II</productname> accepts connections only from
+       localhost on port 9999. If you wish to receive connections from other hosts,
+       set <xref linkend="guc-listen-addresses"> to <literal>'*'</literal>.
+ <programlisting>
+ listen_addresses = 'localhost'
+ port = 9999
+ </programlisting>
+       We will use the default parameters in this tutorial.
+ </para>
+ </sect3>
+
+ <sect3 id="example-configs-begin-config-pcp">
+ <title>Configuring <acronym>PCP</acronym> Commands</title>
+ <para>
+       <productname>Pgpool-II</productname> has an administrative interface to
+       retrieve information on database nodes, shut down
+       <productname>Pgpool-II</productname>, etc. over the network. To use
+       <acronym>PCP</acronym> commands, user authentication is required.
+       This authentication is different from <productname>PostgreSQL</productname>'s user authentication.
+       A user name and password need to be defined in the <literal>pcp.conf</literal>
+       file. In the file, a user name and password are listed as a pair on each line,
+       separated by a colon (:). Passwords are stored in
+       <literal>md5</literal> hash format.
+
+ <programlisting>
+ postgres:e8a48653851e28c69d0506508fb27fc5
+ </programlisting>
+
+       When you install <productname>Pgpool-II</productname>, <literal>pcp.conf.sample
+       </literal> is automatically created. We recommend copying and renaming it
+       to <literal>pcp.conf</literal>, and editing it.
+ <programlisting>
+ $ cp /usr/local/etc/pcp.conf.sample /usr/local/etc/pcp.conf
+ </programlisting>
+       To hash your password into md5 format, use the <command>pg_md5</command>
+       command, which is installed as one of <productname>Pgpool-II</productname>'s
+       executables. <command>pg_md5</command> takes text as a command line argument,
+       and displays its md5-hashed text.
+ For example, give <literal>"postgres"</literal> as the command line argument,
+ and <command>pg_md5</command> displays md5-hashed text on its standard output.
+ <programlisting>
+ $ /usr/bin/pg_md5 postgres
+ e8a48653851e28c69d0506508fb27fc5
+ </programlisting>
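+       For example (a one-line sketch, assuming the default install locations
+       used in this tutorial), you can generate a <literal>pcp.conf</literal>
+       entry in one step:
+ <programlisting>
+ $ echo "postgres:$(/usr/bin/pg_md5 postgres)" >> /usr/local/etc/pcp.conf
+ </programlisting>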
+       PCP commands are executed over the network, so the port number must be configured
+       with the <xref linkend="guc-pcp-port"> parameter in the <literal>pgpool.conf</literal> file.
+       We will use the default 9898 for <xref linkend="guc-pcp-port"> in this tutorial.
+ <programlisting>
+ pcp_port = 9898
+ </programlisting>
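+       Once <productname>Pgpool-II</productname> is running, you can verify the
+       <acronym>PCP</acronym> setup with any <acronym>PCP</acronym> command
+       (a sketch, assuming the modern option style and the user defined above):
+ <programlisting>
+ $ pcp_node_count -h localhost -p 9898 -U postgres
+ </programlisting>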
+ </para>
+ </sect3>
+
+
+ <sect3 id="example-configs-prep-db-nodes">
+ <title>Preparing Database Nodes</title>
+ <para>
+ Now, we need to set up backend <productname>PostgreSQL</productname> servers for <productname>Pgpool-II
+ </productname>. These servers can be placed within the same host as
+ <productname>Pgpool-II</productname>, or on separate machines. If you decide
+ to place the servers on the same host, different port numbers must be assigned
+ for each server. If the servers are placed on separate machines,
+ they must be configured properly so that they can accept network
+ connections from <productname>Pgpool-II</productname>.
+
+ <programlisting>
+ backend_hostname0 = 'localhost'
+ backend_port0 = 5432
+ backend_weight0 = 1
+ backend_hostname1 = 'localhost'
+ backend_port1 = 5433
+ backend_weight1 = 1
+ backend_hostname2 = 'localhost'
+ backend_port2 = 5434
+ backend_weight2 = 1
+ </programlisting>
+
+ For <xref linkend="guc-backend-hostname">, <xref linkend="guc-backend-port">,
+ <xref linkend="guc-backend-weight">, set the node's hostname, port number,
+ and ratio for load balancing. At the end of each parameter string,
+ node ID must be specified by adding positive integers starting with 0 (i.e. 0, 1, 2..).
+ </para>
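+     <para>
+       If these three backends do not exist yet, a minimal sketch to create and
+       start them on a single host could look like this (the data directory
+       paths are placeholders; adjust them to your environment):
+ <programlisting>
+ $ initdb -D /var/lib/pgsql/data0
+ $ initdb -D /var/lib/pgsql/data1
+ $ initdb -D /var/lib/pgsql/data2
+ $ pg_ctl -D /var/lib/pgsql/data0 -o "-p 5432" start
+ $ pg_ctl -D /var/lib/pgsql/data1 -o "-p 5433" start
+ $ pg_ctl -D /var/lib/pgsql/data2 -o "-p 5434" start
+ </programlisting>
+     </para>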
+ <note>
+ <para>
+ <xref linkend="guc-backend-weight"> parameters for all nodes are
+ set to 1, meaning that SELECT queries are equally distributed among
+ three servers.
+ </para>
+ </note>
+ </sect3>
+
+ <sect3 id="example-configs-start-stop-pgpool">
+ <title>Starting/Stopping <productname>Pgpool-II</productname></title>
+ <para>
+ To fire up <productname>Pgpool-II</productname>, execute the following
+ command on a terminal.
+
+ <programlisting>
+ $ pgpool
+ </programlisting>
+
+       The above command, however, prints no log messages because <productname>
+       Pgpool-II</productname> detaches from the terminal. If you want to see
+       <productname>Pgpool-II</productname> log messages, pass the <literal>-n</literal>
+       option to the <command>pgpool</command> command, so that <productname>Pgpool-II</productname>
+       runs as a non-daemon process and stays attached to the terminal.
+ <programlisting>
+ $ pgpool -n &
+ </programlisting>
+
+       The log messages are printed on the terminal, so it is recommended to
+       redirect them to a file, as in the following example.
+ <programlisting>
+ $ pgpool -n -d > /tmp/pgpool.log 2>&1 &
+ </programlisting>
+
+       The <literal>-d</literal> option enables the generation of debug messages.
+       The above command keeps appending log messages to <literal>/tmp/pgpool.log
+       </literal>. If you need to rotate log files, pass the logs to an external
+       command that has a log rotation function.
+ For example, you can use <ulink url="https://httpd.apache.org/docs/2.4/programs/rotatelogs.html">
+ <command>rotatelogs</command></ulink> from Apache2:
+ <programlisting>
+ $ pgpool -n 2>&1 | /usr/local/apache2/bin/rotatelogs \
+ -l -f /var/log/pgpool/pgpool.log.%A 86400 &
+ </programlisting>
+
+       This will generate a log file named <literal>"pgpool.log.Thursday"</literal>
+       and rotate it at midnight (00:00). rotatelogs appends to a log file if it already
+       exists. To delete old log files before rotation, you could use cron:
+ <programlisting>
+ 55 23 * * * /usr/bin/find /var/log/pgpool -type f -mtime +5 -exec /bin/rm -f '{}' \;
+ </programlisting>
+
+       Please note that rotatelogs may exist as <literal>/usr/sbin/rotatelogs2</literal>
+       in some distributions. The <literal>-f</literal> option generates a log file as soon as
+       <command>rotatelogs</command> starts, and is available in apache2 2.2.9 or later.
+       <ulink url="http://www.cronolog.org/">cronolog</ulink> can also be used.
+ <programlisting>
+ $ pgpool -n 2>&1 | /usr/sbin/cronolog \
+ --hardlink=/var/log/pgsql/pgpool.log \
+ '/var/log/pgsql/%Y-%m-%d-pgpool.log' &
+ </programlisting>
+
+       To stop <productname>Pgpool-II</productname>, execute the following command.
+ <programlisting>
+ $ pgpool stop
+ </programlisting>
+
+       If any client is still connected, <productname>Pgpool-II</productname>
+       waits for it to disconnect, and then terminates itself. Run the following
+       command instead if you want to shut down <productname>Pgpool-II</productname>
+       forcibly.
+ <programlisting>
+ $ pgpool -m fast stop
+ </programlisting>
+
+ </para>
+ </sect3>
+ </sect2>
+
+ <sect2 id="example-configs-replication">
+ <title>Your First Replication</title>
+ <para>
+ Replication (see <xref linkend="runtime-config-replication-mode">) enables
+ the same data to be copied to multiple database nodes.
+       In this section, we'll use the three database nodes that we have already set
+       up in <xref linkend="example-configs-begin">, and walk you step by step through
+       creating a database replication system.
+ Sample data to be replicated will be generated by the
+ <ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
+ <command>pgbench</command></ulink> benchmark program.
+ </para>
+
+ <sect3 id="example-configs-config-replication">
+ <title>Configuring Replication</title>
+ <para>
+ To enable the database replication function, set
+ <xref linkend="guc-replication-mode"> to on in <literal>pgpool.conf</literal> file.
+ <programlisting>
+ replication_mode = true
+ </programlisting>
+ When <xref linkend="guc-replication-mode"> is on, <productname>Pgpool-II</productname>
+ will send a copy of a received query to all the database nodes.
+ In addition, when <xref linkend="guc-load-balance-mode"> is set to true,
+ <productname>Pgpool-II</productname> will distribute <acronym>SELECT</acronym> queries
+ among the database nodes.
+ <programlisting>
+ load_balance_mode = true
+ </programlisting>
+ In this section, we will enable both <xref linkend="guc-replication-mode">
+ and <xref linkend="guc-load-balance-mode">.
+ </para>
+ </sect3>
+
+ <sect3 id="example-configs-checking-replication">
+ <title>Checking Replication</title>
+ <para>
+       To reflect the changes in <literal>pgpool.conf</literal>,
+       <productname>Pgpool-II</productname> must be restarted.
+       Please refer to <xref linkend="example-configs-start-stop-pgpool">
+       "Starting/Stopping <productname>Pgpool-II</productname>".
+       After configuring <literal>pgpool.conf</literal> and restarting
+       <productname>Pgpool-II</productname>, let's try the actual replication
+       and see if everything is working.
+ First, we need to create a database to be replicated. We will name it
+ <literal>"bench_replication"</literal>. This database needs to be created
+       on all the nodes. Run the
+       <ulink url="https://www.postgresql.org/docs/current/static/app-createdb.html">
+       <command>createdb</command></ulink> command through
+       <productname>Pgpool-II</productname>, and the database will be created
+       on all of them.
+ <programlisting>
+ $ createdb -p 9999 bench_replication
+ </programlisting>
+       Then, we'll execute <ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
+       <command>pgbench</command></ulink> with the <literal>-i</literal> option.
+       The <literal>-i</literal> option initializes the database with pre-defined tables and data.
+ <programlisting>
+ $ pgbench -i -p 9999 bench_replication
+ </programlisting>
+       The following table summarizes the tables and data that will be created by
+       <ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
+       <command>pgbench -i</command></ulink>. If the listed tables and data are
+       created on all the nodes, replication is working correctly.
+ </para>
+
+ <table id="example-configs-checking-replication-table">
+ <title>data summary</title>
+ <tgroup cols="2">
+ <thead>
+ <row>
+ <entry>Table Name</entry>
+ <entry>Number of Rows</entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry>branches</entry>
+ <entry>1</entry>
+ </row>
+
+ <row>
+ <entry>tellers</entry>
+ <entry>10</entry>
+ </row>
+
+ <row>
+ <entry>accounts</entry>
+ <entry>100000</entry>
+ </row>
+
+ <row>
+ <entry>history</entry>
+ <entry>0</entry>
+ </row>
+
+ </tbody>
+ </tgroup>
+ </table>
+
+ <para>
+       Let's use a simple shell script to check the above on all the nodes.
+       The following script will display the number of rows in the branches,
+       tellers, accounts, and history tables on all the nodes (ports 5432, 5433 and 5434).
+ <programlisting>
+ $ for port in 5432 5433 5434; do
+ > echo $port
+ > for table_name in branches tellers accounts history; do
+ > echo $table_name
+ > psql -c "SELECT count(*) FROM $table_name" -p $port bench_replication
+ > done
+ > done
+ </programlisting>
+
+ </para>
+ </sect3>
+ </sect2>
+
+ </sect1>
+
+ <sect1 id="example-watchdog">
+ <title>Watchdog Configuration Example</title>
- <sect2 id="example-configs-begin">
- <title>Let's Begin!</title>
<para>
- First, we must learn how to install and configure <productname>Pgpool-II</> and database nodes before using replication.
+ This tutrial explains the simple way to try "Watchdog".
+ What you need is 2 Linux boxes on which <productname>
+ Pgpool-II</productname> is installed and a <productname>PostgreSQL</productname>
+ on the same machine or in the other one. It is enough
+ that 1 node for backend exists.
+ You can use watchdog with <productname>
+ Pgpool-II</productname> in any mode: replication mode,
+ master/slave mode and raw mode.
</para>
-
- <sect3 id="example-configs-begin-installing">
- <title>Installing <productname>Pgpool-II</productname></title>
- <para>
- Installing <productname>Pgpool-II</productname> is very easy.
- In the directory which you have extracted the source tar ball,
- execute the following commands.
- <programlisting>
-$ ./configure
-$ make
-$ make install
- </programlisting>
- <command>configure</command> script collects your system information
- and use it for the compilation procedure. You can pass command
- line arguments to <command>configure</> script to change the default behavior,
- such as the installation directory. <productname>Pgpool-II</productname>
- will be installed to <literal>/usr/local</literal> directory by default.
- </para>
- <para>
- <command>make</command> command compiles the source code, and
- <command>make install</command> will install the executables.
- You must have write permission on the installation directory.
- In this tutorial, we will install <productname>Pgpool-II
- </productname> in the default <literal>/usr/local</literal> directory.
- </para>
- <note>
- <para>
- <productname>Pgpool-II</productname> requires <literal>libpq</literal>
- library in <productname>PostgreSQL</> 7.4 or later (version 3 protocol).
- </para>
- </note>
- <para>
- If the <command>configure</command> script displays the following error message, the
- <literal>libpq</literal> library may not be installed, or it is not of version 3
- <programlisting>
-configure: error: libpq is not installed or libpq is old
- </programlisting>
- If the library is version 3, but the above message is still displayed, your
- <literal>libpq</literal> library is probably not recognized by the <command>
- configure</command> script.
- The <command>configure</command> script searches for <literal>libpq</literal>
- library under <literal>/usr/local/pgsql</literal>. If you have installed the
- <productname>PostgreSQL</> in a directory other than <literal>/usr/local/pgsql</literal>, use
- <literal>--with-pgsql</literal>, or <literal>--with-pgsql-includedir</literal>
- and <literal>--with-pgsql-libdir</literal> command line options when you
- execute <command>configure</command>.
- </para>
- </sect3>
-
- <sect3 id="example-configs-begin-config-files">
- <title>Configuration Files</title>
- <para>
- <productname>Pgpool-II</productname> configuration parameters are saved in the
- <literal>pgpool.conf</literal> file. The file is in <literal>"parameter = value"
- </literal> per line format. When you install <productname>Pgpool-II</productname>,
- <literal>pgpool.conf.sample</literal> is automatically created.
- We recommend copying and renaming it to <literal>pgpool.conf</literal>, and edit
- it as you like.
- <programlisting>
-$ cp /usr/local/etc/pgpool.conf.sample /usr/local/etc/pgpool.conf
- </programlisting>
- <productname>Pgpool-II</productname> only accepts connections from the localhost
- using port 9999. If you wish to receive conenctions from other hosts,
- set <xref linkend="guc-listen-addresses"> to <literal>'*'</literal>.
- <programlisting>
-listen_addresses = 'localhost'
-port = 9999
- </programlisting>
- We will use the default parameters in thie tutorial.
- </para>
- </sect3>
-
- <sect3 id="example-configs-begin-config-pcp">
- <title>Configuring <acronym>PCP</acronym> Commands</title>
- <para>
- <productname>Pgpool-II</productname> has an interface for administrative
- purpose to retrieve information on database nodes, shutdown
- <productname>Pgpool-II</productname>, etc. via network. To use
- <acronym>PCP</acronym> commands, user authentication is required.
- This authentication is different from <productname>PostgreSQL</>'s user authentication.
- A user name and password need to be defined in the <literal>pcp.conf</literal>
- file. In the file, a user name and password are listed as a pair on each line,
- and they are separated by a colon (:). Passwords are encrypted in
- <literal>md5</literal> hash format.
-
- <programlisting>
-postgres:e8a48653851e28c69d0506508fb27fc5
- </programlisting>
-
- When you install <productname>Pgpool-II</productname>, <literal>pcp.conf.sample
- </literal> is automatically created. We recommend copying and renaming it
- to <literal>pcp.conf</literal>, and edit it.
- <programlisting>
-$ cp /usr/local/etc/pcp.conf.sample /usr/local/etc/pcp.conf
- </programlisting>
- To encrypt your password into md5 hash format, use the <command>pg_md5</command>
- command, which is installed as one of <productname>Pgpool-II</productname>'s
- executables. <command>pg_md5</command> takes text as a command line argument,
- and displays its md5-hashed text.
- For example, give <literal>"postgres"</literal> as the command line argument,
- and <command>pg_md5</command> displays md5-hashed text on its standard output.
- <programlisting>
-$ /usr/bin/pg_md5 postgres
-e8a48653851e28c69d0506508fb27fc5
- </programlisting>
- PCP commands are executed via network, so the port number must be configured
- with <xref linkend="guc-pcp-port"> parameter in <literal>pgpool.conf</literal> file.
- We will use the default 9898 for <xref linkend="guc-pcp-port"> in this tutorial.
- <programlisting>
-pcp_port = 9898
- </programlisting>
- </para>
- </sect3>
-
-
- <sect3 id="example-configs-prep-db-nodes">
- <title>Preparing Database Nodes</title>
- <para>
- Now, we need to set up backend <productname>PostgreSQL</> servers for <productname>Pgpool-II
- </productname>. These servers can be placed within the same host as
- <productname>Pgpool-II</productname>, or on separate machines. If you decide
- to place the servers on the same host, different port numbers must be assigned
- for each server. If the servers are placed on separate machines,
- they must be configured properly so that they can accept network
- connections from <productname>Pgpool-II</productname>.
-
- <programlisting>
-backend_hostname0 = 'localhost'
-backend_port0 = 5432
-backend_weight0 = 1
-backend_hostname1 = 'localhost'
-backend_port1 = 5433
-backend_weight1 = 1
-backend_hostname2 = 'localhost'
-backend_port2 = 5434
-backend_weight2 = 1
- </programlisting>
-
- For <xref linkend="guc-backend-hostname">, <xref linkend="guc-backend-port">,
- <xref linkend="guc-backend-weight">, set the node's hostname, port number,
- and ratio for load balancing. At the end of each parameter string,
- node ID must be specified by adding positive integers starting with 0 (i.e. 0, 1, 2..).
- </para>
- <note>
- <para>
- <xref linkend="guc-backend-weight"> parameters for all nodes are
- set to 1, meaning that SELECT queries are equally distributed among
- three servers.
- </para>
- </note>
- </sect3>
-
- <sect3 id="example-configs-start-stop-pgpool">
- <title>Starting/Stopping <productname>Pgpool-II</productname></title>
- <para>
- To fire up <productname>Pgpool-II</productname>, execute the following
- command on a terminal.
-
- <programlisting>
-$ pgpool
- </programlisting>
-
- The above command, however, prints no log messages because <productname>
- Pgpool-II</productname> detaches the terminal. If you want to show
- <productname>Pgpool-II</productname> log messages, you pass <literal>-n</literal>
- option to <command>pgpool</command> command so <productname>Pgpool-II</productname>
- is executed as non-daemon process, and the terminal will not be detached.
- <programlisting>
-$ pgpool -n &
- </programlisting>
-
- The log messages are printed on the terminal, so it is recommended to use the following options.
- <programlisting>
-$ pgpool -n -d > /tmp/pgpool.log 2>&1 &
- </programlisting>
-
- The <literal>-d</literal> option enables debug messages to be generated.
- The above command keeps appending log messages to <literal>/tmp/pgpool.log
- </literal>. If you need to rotate log files, pass the logs to a external
- command which has log rotation function.
- For example, you can use <ulink url="https://httpd.apache.org/docs/2.4/programs/rotatelogs.html">
- <command>rotatelogs</command></ulink> from Apache2:
- <programlisting>
-$ pgpool -n 2>&1 | /usr/local/apache2/bin/rotatelogs \
- -l -f /var/log/pgpool/pgpool.log.%A 86400 &
- </programlisting>
-
- This will generate a log file named <literal>"pgpool.log.Thursday"</literal>
- then rotate it 00:00 at midnight. Rotatelogs adds logs to a file if it already
- exists. To delete old log files before rotation, you could use cron:
- <programlisting>
-55 23 * * * /usr/bin/find /var/log/pgpool -type f -mtime +5 -exec /bin/rm -f '{}' \;
- </programlisting>
-
- Please note that rotatelogs may exist as <literal>/usr/sbin/rotatelogs2</literal>
- in some distributions. <literal>-f</literal> option generates a log file as soon as
- <command>rotatelogs</command> starts and is available in apache2 2.2.9 or greater.
- Also <ulink url="http://www.cronolog.org/">cronolog</ulink> can be used.
- <programlisting>
-$ pgpool -n 2>&1 | /usr/sbin/cronolog \
- --hardlink=/var/log/pgsql/pgpool.log \
- '/var/log/pgsql/%Y-%m-%d-pgpool.log' &
- </programlisting>
-
- To stop <productname>Pgpool-II</> execute the following command.
- <programlisting>
-$ pgpool stop
- </programlisting>
-
- If any client is still connected, <productname>Pgpool-II</productname>
- waits for it to disconnect, and then terminates itself. Run the following
- command instead if you want to shutdown <productname>Pgpool-II</productname>
- forcibly.
- <programlisting>
-$ pgpool -m fast stop
- </programlisting>
-
- </para>
- </sect3>
- </sect2>
-
- <sect2 id="example-configs-replication">
- <title>Your First Replication</title>
<para>
- Replication (see <xref linkend="runtime-config-replication-mode">) enables
- the same data to be copied to multiple database nodes.
- In this section, we'll use three database nodes, which we have already set
- up in <xref linkend="example-configs-begin">, and takes you step by step to
- create a database replication system.
- Sample data to be replicated will be generated by the
- <ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
- <command>pgbench</command></ulink> benchmark program.
+ This example uses use "osspc16" as an Active node and
+ "osspc20" as a Standby node. "Someserver" means one of them.
</para>
- <sect3 id="example-configs-config-replication">
- <title>Configuring Replication</title>
- <para>
- To enable the database replication function, set
- <xref linkend="guc-replication-mode"> to on in <literal>pgpool.conf</literal> file.
- <programlisting>
-replication_mode = true
- </programlisting>
- When <xref linkend="guc-replication-mode"> is on, <productname>Pgpool-II</productname>
- will send a copy of a received query to all the database nodes.
- In addition, when <xref linkend="guc-load-balance-mode"> is set to true,
- <productname>Pgpool-II</productname> will distribute <acronym>SELECT</acronym> queries
- among the database nodes.
- <programlisting>
-load_balance_mode = true
- </programlisting>
- In this section, we will enable both <xref linkend="guc-replication-mode">
- and <xref linkend="guc-load-balance-mode">.
- </para>
- </sect3>
-
- <sect3 id="example-configs-checking-replication">
- <title>Checking Replication</title>
- <para>
- To reflect the changes in <literal>pgpool.conf</literal>,
- <productname>Pgpool-II</productname> must be restarted.
- Please refer to "Starting/Stopping <productname>Pgpool-II</productname>"
- <xref linkend="example-configs-start-stop-pgpool">.
- After configuring <literal>pgpool.conf</literal> and restarting the
- <productname>Pgpool-II</productname>, let's try the actual replication
- and see if everything is working.
- First, we need to create a database to be replicated. We will name it
- <literal>"bench_replication"</literal>. This database needs to be created
- on all the nodes. Use the
- <ulink url="https://www.postgresql.org/docs/current/static/app-createdb.html">
- <command>createdb</command></ulink> commands through
- <productname>Pgpool-II</productname>, and the database will be created
- on all the nodes.
- <programlisting>
-$ createdb -p 9999 bench_replication
- </programlisting>
- Then, we'll execute <ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
- <command>pgbench</command></ulink> with <literal>-i</literal> option.
- <literal>-i</literal> option initializes the database with pre-defined tables and data.
- <programlisting>
-$ pgbench -i -p 9999 bench_replication
- </programlisting>
- The following table is the summary of tables and data, which will be created by
- <ulink url="https://www.postgresql.org/docs/current/static/pgbench.html">
- <command>pgbench -i</command></ulink>. If, on all the nodes, the listed tables and
- data are created, replication is working correctly.
- </para>
-
- <table id="example-configs-checking-replication-table">
- <title>data summary</title>
- <tgroup cols="2">
- <thead>
- <row>
- <entry>Table Name</entry>
- <entry>Number of Rows</entry>
- </row>
- </thead>
-
- <tbody>
- <row>
- <entry>branches</entry>
- <entry>1</entry>
- </row>
-
- <row>
- <entry>tellers</entry>
- <entry>10</entry>
- </row>
-
- <row>
- <entry>accounts</entry>
- <entry>100000</entry>
- </row>
-
- <row>
- <entry>history</entry>
- <entry>0</entry>
- </row>
-
- </tbody>
- </tgroup>
- </table>
-
- <para>
- Let's use a simple shell script to check the above on all the nodes.
- The following script will display the number of rows in branches,
- tellers, accounts, and history tables on all the nodes (5432, 5433, 5434).
- <programlisting>
-$ for port in 5432 5433 5434; do
-> echo $port
-> for table_name in branches tellers accounts history; do
-> echo $table_name
-> psql -c "SELECT count(*) FROM $table_name" -p $port bench_replication
-> done
-> done
- </programlisting>
-
- </para>
- </sect3>
- </sect2>
-
- </sect1>
-
- <sect1 id="example-watchdog">
- <title>Watchdog Configuration Example</title>
-
- <para>
- This tutrial explains the simple way to try "Watchdog".
- What you need is 2 Linux boxes on which <productname>
- Pgpool-II</productname> is installed and a <productname>PostgreSQL</>
- on the same machine or in the other one. It is enough
- that 1 node for backend exists.
- You can use watchdog with <productname>
- Pgpool-II</productname> in any mode: replication mode,
- master/slave mode and raw mode.
- </para>
- <para>
- This example uses use "osspc16" as an Active node and
- "osspc20" as a Standby node. "Someserver" means one of them.
- </para>
+ <sect2 id="example-watchdog-configuration">
+ <title>Common configurations</title>
+ <para>
+      Set the following parameters on both the active and standby nodes.
+ </para>
+
+ <sect3 id="example-watchdog-config-enable">
+ <title>Enabling watchdog</title>
+ <para>
+ First of all, set <xref linkend="guc-use-watchdog"> to on.
+ <programlisting>
+ use_watchdog = on
+ # Activates watchdog
+ </programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="example-watchdog-config-upstream">
+     <title>Configuring Upstream Servers</title>
+     <para>
+      Specify the upstream servers (e.g. application servers).
+      Leaving it blank is also fine.
+ <programlisting>
+ trusted_servers = ''
+ # trusted server list which are used
+ # to confirm network connection
+ # (hostA,hostB,hostC,...)
+ </programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="example-watchdog-config-wd-comm">
+ <title>Watchdog Communication</title>
+ <para>
+ Specify the TCP port number for watchdog communication.
+ <programlisting>
+ wd_port = 9000
+ # port number for watchdog service
+ </programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="example-watchdog-config-wd-vip">
+ <title>Virtual IP</title>
+ <para>
+ Specify the IP address to be used as a virtual IP address
+ in the <xref linkend="guc-delegate-IP">.
+ <programlisting>
+ delegate_IP = '133.137.177.143'
+ # delegate IP address
+ </programlisting>
+ </para>
+ <note>
+ <para>
+       Make sure the IP address configured as the virtual IP is
+       free and not used by any other machine.
+ </para>
+ </note>
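+    <para>
+      A quick sanity check (a sketch) is to ping the address beforehand and
+      confirm that nothing answers:
+ <programlisting>
+ $ ping -c 3 133.137.177.143
+ </programlisting>
+    </para>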
+ </sect3>
+ </sect2>
+
+ <sect2 id="example-watchdog-configuration-each-server">
+ <title>Individual Server Configurations</title>
+ <para>
+     Next, set the following parameters for each <productname>
+     Pgpool-II</productname>.
+     Specify <xref linkend="guc-other-pgpool-hostname">,
+     <xref linkend="guc-other-pgpool-port"> and
+     <xref linkend="guc-other-wd-port"> with the values of the
+     other <productname>Pgpool-II</productname> server.
+ </para>
+
+ <sect3 id="example-watchdog-configuration-active-server">
+ <title>Active (osspc16) Server configurations</title>
+ <para>
+ <programlisting>
+ other_pgpool_hostname0 = 'osspc20'
+ # Host name or IP address to connect to for other pgpool 0
+ other_pgpool_port0 = 9999
+                                    # Port number for other pgpool 0
+ other_wd_port0 = 9000
+                                    # Port number for other watchdog 0
+ </programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="example-watchdog-configuration-standby-server">
+ <title>Standby (osspc20) Server configurations</title>
+ <para>
+ <programlisting>
+ other_pgpool_hostname0 = 'osspc16'
+ # Host name or IP address to connect to for other pgpool 0
+ other_pgpool_port0 = 9999
+                                    # Port number for other pgpool 0
+ other_wd_port0 = 9000
+                                    # Port number for other watchdog 0
+ </programlisting>
+ </para>
+ </sect3>
+ </sect2>
+
+ <sect2 id="example-watchdog-start-server">
+ <title>Starting <productname>Pgpool-II</productname></title>
+ <para>
+     Start <productname>Pgpool-II</productname> on each server as the
+     <literal>root</literal> user with the <literal>"-n"</literal> switch,
+     redirecting log messages into the pgpool.log file.
+ </para>
+
+ <sect3 id="example-watchdog-start-active-server">
+ <title>Starting pgpool in Active server (osspc16)</title>
+ <para>
+       First, start <productname>Pgpool-II</productname> on the Active server.
+ <programlisting>
+ [user@osspc16]$ su -
+ [root@osspc16]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1
+ </programlisting>
+       Log messages will show that <productname>Pgpool-II</productname>
+       holds the virtual IP address and has started the watchdog process.
+ <programlisting>
+ LOG: I am announcing my self as master/coordinator watchdog node
+ LOG: I am the cluster leader node
+ DETAIL: our declare coordinator message is accepted by all nodes
+ LOG: I am the cluster leader node. Starting escalation process
+ LOG: escalation process started with PID:59449
+ <emphasis>LOG: watchdog process is initialized
+ LOG: watchdog: escalation started
+ LOG: I am the master watchdog node</emphasis>
+ DETAIL: using the local backend node status
+ </programlisting>
+ </para>
+ </sect3>
+
+ <sect3 id="example-watchdog-start-standby-server">
+ <title>Starting pgpool in Standby server (osspc20)</title>
+ <para>
+       Now start <productname>Pgpool-II</productname> on the Standby server.
+ <programlisting>
+ [user@osspc20]$ su -
+ [root@osspc20]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1
+ </programlisting>
+       Log messages will show that <productname>Pgpool-II</productname>
+       has joined the watchdog cluster as a standby watchdog.
+ <programlisting>
+ LOG: watchdog cluster configured with 1 remote nodes
+ LOG: watchdog remote node:0 on Linux_osspc16_9000:9000
+ LOG: interface monitoring is disabled in watchdog
+ LOG: IPC socket path: "/tmp/.s.PGPOOLWD_CMD.9000"
+ LOG: watchdog node state changed from [DEAD] to [LOADING]
+ LOG: new outbond connection to Linux_osspc16_9000:9000
+ LOG: watchdog node state changed from [LOADING] to [INITIALIZING]
+ LOG: watchdog node state changed from [INITIALIZING] to [STANDBY]
+ <emphasis>
+ LOG: successfully joined the watchdog cluster as standby node
+ DETAIL: our join coordinator request is accepted by cluster leader node "Linux_osspc16_9000"
+ LOG: watchdog process is initialized
+ </emphasis>
+ </programlisting>
+ </para>
+ </sect3>
+ </sect2>
+
+ <sect2 id="example-watchdog-try">
+ <title>Try it out</title>
+ <para>
+       Confirm that you can ping the virtual IP address.
+ <programlisting>
+ [user@someserver]$ ping 133.137.177.143
+ PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data.
+ 64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms
+ 64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms
+ 64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms
+ </programlisting>
+       Confirm that the Active server, which was started first, holds the virtual IP address.
+ <programlisting>
+ [root@osspc16]# ifconfig
+ eth0 ...
+
+ eth0:0 inet addr:133.137.177.143 ...
+
+ lo ...
+ </programlisting>
+       Confirm that the Standby server, which was started later, does not hold the virtual IP address.
+ <programlisting>
+ [root@osspc20]# ifconfig
+ eth0 ...
+
+ lo ...
+ </programlisting>
+
+       Try connecting to <productname>PostgreSQL</productname> with "psql -h delegate_IP -p port".
+ <programlisting>
+ [user@someserver]$ psql -h 133.137.177.143 -p 9999 -l
+ </programlisting>
+ </para>
+ </sect2>
+
+ <sect2 id="example-watchdog-vip-switch">
+ <title>Switching virtual IP</title>
+ <para>
+ Confirm how the Standby server works when the Active server can't provide its service.
+ Stop <productname>Pgpool-II</productname> on the Active server.
+ <programlisting>
+ [root@osspc16]# {installed_dir}/bin/pgpool stop
+ </programlisting>
+
+       Then, the Standby server starts using the virtual IP address. The log shows:
+
+ <programlisting>
+ <emphasis>
+ LOG: remote node "Linux_osspc16_9000" is shutting down
+ LOG: watchdog cluster has lost the coordinator node
+ </emphasis>
+ LOG: watchdog node state changed from [STANDBY] to [JOINING]
+ LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
+ LOG: I am the only alive node in the watchdog cluster
+ HINT: skiping stand for coordinator state
+ LOG: watchdog node state changed from [INITIALIZING] to [MASTER]
+ LOG: I am announcing my self as master/coordinator watchdog node
+ LOG: I am the cluster leader node
+ DETAIL: our declare coordinator message is accepted by all nodes
+ <emphasis>
+ LOG: I am the cluster leader node. Starting escalation process
+ LOG: watchdog: escalation started
+ </emphasis>
+ LOG: watchdog escalation process with pid: 59551 exit with SUCCESS.
+ </programlisting>
+
+       Confirm that you can ping the virtual IP address.
+ <programlisting>
+ [user@someserver]$ ping 133.137.177.143
+ PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data.
+ 64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms
+ 64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms
+ 64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms
+ </programlisting>
+
+ Confirm that the Active server doesn't use the virtual IP address any more.
+ <programlisting>
+ [root@osspc16]# ifconfig
+ eth0 ...
+
+ lo ...
+ </programlisting>
+
+ Confirm that the Standby server uses the virtual IP address.
+ <programlisting>
+ [root@osspc20]# ifconfig
+ eth0 ...
+
+ eth0:0 inet addr:133.137.177.143 ...
+
+ lo ...
+ </programlisting>
+
+       Try connecting to <productname>PostgreSQL</productname> with "psql -h delegate_IP -p port".
+ <programlisting>
+ [user@someserver]$ psql -h 133.137.177.143 -p 9999 -l
+ </programlisting>
+
+ </para>
+ </sect2>
+
+ <sect2 id="example-watchdog-more">
+ <title>More</title>
+
+ <sect3 id="example-watchdog-more-lifecheck">
+ <title>Lifecheck</title>
+ <para>
+       These parameters configure the watchdog's monitoring.
+       Specify the check interval with <xref linkend="guc-wd-interval">,
+       the retry count with <xref linkend="guc-wd-life-point">,
+       the query used for checking with <xref linkend="guc-wd-lifecheck-query"> and
+       finally the lifecheck method with <xref linkend="guc-wd-lifecheck-method">.
+ <programlisting>
+ wd_lifecheck_method = 'query'
+ # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
+ # (change requires restart)
+ wd_interval = 10
+ # lifecheck interval (sec) > 0
+ wd_life_point = 3
+ # lifecheck retry times
+ wd_lifecheck_query = 'SELECT 1'
+ # lifecheck query to pgpool from watchdog
+ </programlisting>
+
+ </para>
+ </sect3>
+
+ <sect3 id="example-watchdog-more-vip-switching">
+ <title>Switching virtual IP address</title>
+ <para>
+       These parameters configure the switching of the virtual IP address.
+       Specify the switching commands <xref linkend="guc-if-up-cmd"> and
+       <xref linkend="guc-if-down-cmd">, the path to them in
+       <xref linkend="guc-if-cmd-path">, the command executed after
+       switching to send an ARP request in <xref linkend="guc-arping-cmd">
+       and the path to it in <xref linkend="guc-arping-path">.
+ <programlisting>
+ ifconfig_path = '/sbin'
+ # ifconfig command path
+ if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0'
+ # startup delegate IP command
+ if_down_cmd = 'ifconfig eth0:0 down'
+ # shutdown delegate IP command
+
+ arping_path = '/usr/sbin' # arping command path
+
+ arping_cmd = 'arping -U $_IP_$ -w 1'
+ </programlisting>
+       You can also use custom scripts to bring the virtual IP up and down
+       via the <xref linkend="guc-wd-escalation-command"> and
+       <xref linkend="guc-wd-de-escalation-command"> configurations.
+ </para>
+ </sect3>
+
+ </sect2>
+ </sect1>
+
+ <sect1 id="example-AWS">
+ <title>AWS Configuration Example</title>
- <sect2 id="example-watchdog-configuration">
- <title>Common configurations</title>
<para>
- Set the following parameters in both of active and standby nodes.
+     This tutorial explains a simple way to try the watchdog
+     on <ulink url="https://aws.amazon.com/">AWS</ulink>, using
+     an <ulink url="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">
+     Elastic IP Address</ulink> as the virtual IP for the high availability solution.
+ <note>
+ <para>
+ You can use watchdog with <productname>
+ Pgpool-II</productname> in any mode: replication mode,
+ master/slave mode and raw mode.
+ </para>
+ </note>
</para>
- <sect3 id="example-watchdog-config-enable">
- <title>Enabling watchdog</title>
- <para>
- First of all, set <xref linkend="guc-use-watchdog"> to on.
- <programlisting>
-use_watchdog = on
- # Activates watchdog
- </programlisting>
- </para>
- </sect3>
-
- <sect3 id="example-watchdog-config-upstream">
- <title>Configure Up stream servers</title>
- <para>
- Specify the up stream servers (e.g. application servers).
- Leaving it blank is also fine.
- <programlisting>
-trusted_servers = ''
- # trusted server list which are used
- # to confirm network connection
- # (hostA,hostB,hostC,...)
- </programlisting>
- </para>
- </sect3>
-
- <sect3 id="example-watchdog-config-wd-comm">
- <title>Watchdog Communication</title>
- <para>
- Specify the TCP port number for watchdog communication.
- <programlisting>
-wd_port = 9000
- # port number for watchdog service
- </programlisting>
- </para>
- </sect3>
-
- <sect3 id="example-watchdog-config-wd-vip">
- <title>Virtual IP</title>
- <para>
- Specify the IP address to be used as a virtual IP address
- in the <xref linkend="guc-delegate-IP">.
- <programlisting>
-delegate_IP = '133.137.177.143'
- # delegate IP address
- </programlisting>
- </para>
- <note>
- <para>
- Make sure the IP address configured as a Virtual IP should be
- free and is not used by any other machine.
- </para>
- </note>
- </sect3>
- </sect2>
-
- <sect2 id="example-watchdog-configuration-each-server">
- <title>Individual Server Configurations</title>
- <para>
- Next, set the following parameters for each <productname>
- Pgpool-II</productname>.
- Specify <xref linkend="guc-other-pgpool-hostname">,
- <xref linkend="guc-other-pgpool-port"> and
- <xref linkend="guc-other-wd-port"> with the values of
- other <productname>Pgpool-II</productname> server values.
- </para>
+ <sect2 id="example-AWS-setup">
+ <title>AWS Setup</title>
+ <para>
+     For this example, we will use a two-node <productname>
+     Pgpool-II</productname> watchdog cluster, so we will set up two
+     Linux Amazon EC2 instances and one Elastic IP address.
+     Perform the following steps:
+ </para>
+ <itemizedlist>
+
+ <listitem>
+ <para>
+       Launch two Linux Amazon EC2 instances. For this example, we will name these
+       instances "instance-1" and "instance-2".
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+       Configure the security group for the instances and allow inbound traffic
+       on the ports used by <productname>Pgpool-II</productname> and watchdog
+       (a CLI sketch follows this list).
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ Install the <productname>Pgpool-II</productname> on both instances.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+       Allocate an Elastic IP address.
+       For this example, we will use "35.163.178.3" as the Elastic IP address.
+ </para>
+ </listitem>
+
+ </itemizedlist>
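+    <para>
+      As a sketch of the security group step (the group ID
+      <literal>sg-0123456789abcdef0</literal> and the CIDR are placeholders for
+      your own values), the AWS CLI can open the ports used in this example:
+ <programlisting>
+ $ for port in 9999 9898 9000; do
+ >   aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
+ >     --protocol tcp --port $port --cidr 10.0.0.0/16
+ > done
+ </programlisting>
+    </para>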
+
+ </sect2>
+
+ <sect2 id="example-AWS-pgpool-config">
+ <title><productname>Pgpool-II</productname> configurations</title>
+ <para>
+     The <productname>Pgpool-II</productname> configuration for this
+     example is mostly the same as in <xref linkend="example-watchdog">, except that we
+     will not set <xref linkend="guc-delegate-ip">; instead,
+     we will use <xref linkend="guc-wd-escalation-command"> and
+     <xref linkend="guc-wd-de-escalation-command"> to switch the
+     Elastic IP address over to the master/Active <productname>Pgpool-II</productname> node.
+ </para>
+
+ <sect3 id="example-AWS-pgpool-config-instance-1">
+ <title><productname>Pgpool-II</productname> configurations on Instance-1</title>
+ <para>
- <sect3 id="example-watchdog-configuration-active-server">
- <title>Active (osspc16) Server configurations</title>
- <para>
- <programlisting>
-other_pgpool_hostname0 = 'osspc20'
- # Host name or IP address to connect to for other pgpool 0
-other_pgpool_port0 = 9999
- # Port number for othet pgpool 0
-other_wd_port0 = 9000
- # Port number for othet watchdog 0
- </programlisting>
- </para>
- </sect3>
-
- <sect3 id="example-watchdog-configuration-standby-server">
- <title>Standby (osspc20) Server configurations</title>
- <para>
- <programlisting>
-other_pgpool_hostname0 = 'osspc16'
- # Host name or IP address to connect to for other pgpool 0
-other_pgpool_port0 = 9999
- # Port number for othet pgpool 0
-other_wd_port0 = 9000
- # Port number for othet watchdog 0
- </programlisting>
- </para>
- </sect3>
- </sect2>
-
- <sect2 id="example-watchdog-start-server">
- <title>Starting <productname>Pgpool-II</productname></title>
- <para>
- Start <productname>Pgpool-II</productname> on each servers from
- <literal>root</literal> user with <literal>"-n"</literal> switch
- and redirect log messages into pgpool.log file.
- </para>
-
- <sect3 id="example-watchdog-start-active-server">
- <title>Starting pgpool in Active server (osspc16)</title>
- <para>
- First start the <productname>Pgpool-II</productname> on Active server.
- <programlisting>
-[user@osspc16]$ su -
-[root@osspc16]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1
- </programlisting>
- Log messages will show that <productname>Pgpool-II</productname>
- has the virtual IP address and starts watchdog process.
- <programlisting>
-LOG: I am announcing my self as master/coordinator watchdog node
-LOG: I am the cluster leader node
-DETAIL: our declare coordinator message is accepted by all nodes
-LOG: I am the cluster leader node. Starting escalation process
-LOG: escalation process started with PID:59449
-<emphasis>LOG: watchdog process is initialized
-LOG: watchdog: escalation started
-LOG: I am the master watchdog node</emphasis>
-DETAIL: using the local backend node status
- </programlisting>
- </para>
- </sect3>
-
- <sect3 id="example-watchdog-start-standby-server">
- <title>Starting pgpool in Standby server (osspc20)</title>
- <para>
- Now start the <productname>Pgpool-II</productname> on Standby server.
- <programlisting>
-[user@osspc20]$ su -
-[root@osspc20]# {installed_dir}/bin/pgpool -n -f {installed_dir}/etc/pgpool.conf > pgpool.log 2>&1
- </programlisting>
- Log messages will show that <productname>Pgpool-II</productname>
- has joind the watchdog cluster as standby watchdog.
- <programlisting>
-LOG: watchdog cluster configured with 1 remote nodes
-LOG: watchdog remote node:0 on Linux_osspc16_9000:9000
-LOG: interface monitoring is disabled in watchdog
-LOG: IPC socket path: "/tmp/.s.PGPOOLWD_CMD.9000"
-LOG: watchdog node state changed from [DEAD] to [LOADING]
-LOG: new outbond connection to Linux_osspc16_9000:9000
-LOG: watchdog node state changed from [LOADING] to [INITIALIZING]
-LOG: watchdog node state changed from [INITIALIZING] to [STANDBY]
-<emphasis>
-LOG: successfully joined the watchdog cluster as standby node
-DETAIL: our join coordinator request is accepted by cluster leader node "Linux_osspc16_9000"
-LOG: watchdog process is initialized
-</emphasis>
- </programlisting>
- </para>
- </sect3>
- </sect2>
-
- <sect2 id="example-watchdog-try">
- <title>Try it out</title>
- <para>
- Confirm to ping to the virtual IP address.
- <programlisting>
-[user@someserver]$ ping 133.137.177.142
-PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data.
-64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms
-64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms
-64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms
- </programlisting>
- Confirm if the Active server which started at first has the virtual IP address.
- <programlisting>
-[root@osspc16]# ifconfig
-eth0 ...
-
-eth0:0 inet addr:133.137.177.143 ...
-
-lo ...
- </programlisting>
- Confirm if the Standby server which started not at first doesn't have the virtual IP address.
- <programlisting>
-[root@osspc20]# ifconfig
-eth0 ...
-
-lo ...
- </programlisting>
-
- Try to connect <productname>PostgreSQL</> by "psql -h delegate_IP -p port".
- <programlisting>
-[user@someserver]$ psql -h 133.137.177.142 -p 9999 -l
- </programlisting>
- </para>
- </sect2>
-
- <sect2 id="example-watchdog-vip-switch">
- <title>Switching virtual IP</title>
- <para>
- Confirm how the Standby server works when the Active server can't provide its service.
- Stop <productname>Pgpool-II</productname> on the Active server.
- <programlisting>
-[root@osspc16]# {installed_dir}/bin/pgpool stop
- </programlisting>
-
- Then, the Standby server starts to use the virtual IP address. Log shows:
-
- <programlisting>
-<emphasis>
-LOG: remote node "Linux_osspc16_9000" is shutting down
-LOG: watchdog cluster has lost the coordinator node
-</emphasis>
-LOG: watchdog node state changed from [STANDBY] to [JOINING]
-LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
-LOG: I am the only alive node in the watchdog cluster
-HINT: skiping stand for coordinator state
-LOG: watchdog node state changed from [INITIALIZING] to [MASTER]
-LOG: I am announcing my self as master/coordinator watchdog node
-LOG: I am the cluster leader node
-DETAIL: our declare coordinator message is accepted by all nodes
-<emphasis>
-LOG: I am the cluster leader node. Starting escalation process
-LOG: watchdog: escalation started
-</emphasis>
-LOG: watchdog escalation process with pid: 59551 exit with SUCCESS.
- </programlisting>
-
- Confirm to ping to the virtual IP address.
- <programlisting>
-[user@someserver]$ ping 133.137.177.142
-PING 133.137.177.143 (133.137.177.143) 56(84) bytes of data.
-64 bytes from 133.137.177.143: icmp_seq=1 ttl=64 time=0.328 ms
-64 bytes from 133.137.177.143: icmp_seq=2 ttl=64 time=0.264 ms
-64 bytes from 133.137.177.143: icmp_seq=3 ttl=64 time=0.412 ms
- </programlisting>
-
- Confirm that the Active server doesn't use the virtual IP address any more.
- <programlisting>
-[root@osspc16]# ifconfig
-eth0 ...
-
-lo ...
- </programlisting>
-
- Confirm that the Standby server uses the virtual IP address.
- <programlisting>
-[root@osspc20]# ifconfig
-eth0 ...
-
-eth0:0 inet addr:133.137.177.143 ...
-
-lo ...
- </programlisting>
-
- Try to connect <productname>PostgreSQL</> by "psql -h delegate_IP -p port".
- <programlisting>
-[user@someserver]$ psql -h 133.137.177.142 -p 9999 -l
- </programlisting>
+ <programlisting>
+ use_watchdog = on
+ delegate_IP = ''
+ wd_hostname = 'instance-1-private-ip'
+ other_pgpool_hostname0 = 'instance-2-private-ip'
+ other_pgpool_port0 = 9999
+ other_wd_port0 = 9000
+ wd_escalation_command = '$path_to_script/aws-escalation.sh'
+ wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh'
+ </programlisting>
- </para>
- </sect2>
-
- <sect2 id="example-watchdog-more">
- <title>More</title>
-
- <sect3 id="example-watchdog-more-lifecheck">
- <title>Lifecheck</title>
- <para>
- There are the parameters about watchdog's monitoring.
- Specify the interval to check <xref linkend="guc-wd-interval">,
- the count to retry <xref linkend="guc-wd-life-point">,
- the qyery to check <xref linkend="guc-wd-lifecheck-query"> and
- finaly the type of lifecheck <xref linkend="guc-wd-lifecheck-method">.
- <programlisting>
-wd_lifecheck_method = 'query'
- # Method of watchdog lifecheck ('heartbeat' or 'query' or 'external')
- # (change requires restart)
-wd_interval = 10
- # lifecheck interval (sec) > 0
-wd_life_point = 3
- # lifecheck retry times
-wd_lifecheck_query = 'SELECT 1'
- # lifecheck query to pgpool from watchdog
- </programlisting>
-
- </para>
- </sect3>
-
- <sect3 id="example-watchdog-more-vip-switching">
- <title>Switching virtual IP address</title>
- <para>
- There are the parameters for switching the virtual IP address.
- Specify switching commands <xref linkend="guc-if-up-cmd">,
- <xref linkend="guc-if-down-cmd">, the path to them
- <xref linkend="guc-if-cmd-path">, the command executed after
- switching to send ARP request <xref linkend="guc-arping-cmd">
- and the path to it <xref linkend="guc-arping-path">.
- <programlisting>
-ifconfig_path = '/sbin'
- # ifconfig command path
-if_up_cmd = 'ifconfig eth0:0 inet $_IP_$ netmask 255.255.255.0'
- # startup delegate IP command
-if_down_cmd = 'ifconfig eth0:0 down'
- # shutdown delegate IP command
-
-arping_path = '/usr/sbin' # arping command path
-
-arping_cmd = 'arping -U $_IP_$ -w 1'
- </programlisting>
- You can also use the custom scripts to bring up and bring down the
- virtual IP using <xref linkend="guc-wd-escalation-command"> and
- <xref linkend="guc-wd-de-escalation-command"> configurations.
- </para>
- </sect3>
-
- </sect2>
- </sect1>
-
- <sect1 id="example-AWS">
- <title>AWS Configuration Example</title>
+ </para>
+ </sect3>
- <para>
- This tutrial explains the simple way to try "Watchdog"
- on <ulink url="https://aws.amazon.com/">AWS</ulink> and using
- the <ulink url="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html">
- Elastic IP Address</ulink> as the Virtual IP for the high availability solution.
- <note>
- <para>
- You can use watchdog with <productname>
- Pgpool-II</productname> in any mode: replication mode,
- master/slave mode and raw mode.
- </para>
- </note>
- </para>
+ <sect3 id="example-AWS-pgpool-config-instance-2">
+ <title><productname>Pgpool-II</productname> configurations on Instance-2</title>
+ <para>
- <sect2 id="example-AWS-setup">
- <title>AWS Setup</title>
- <para>
- For this example, we will use two node <productname>
- Pgpool-II</productname> watchdog cluster. So we will set up two
- Linux Amazon EC2 instances and one Elastic IP address.
- So for this example, do the following steps:
- </para>
- <itemizedlist>
-
- <listitem>
- <para>
- Launch two Linux Amazon EC2 instances. For this example, we name these
- instances as "instance-1" and "instance-2"
- </para>
- </listitem>
-
- <listitem>
- <para>
- Configure the security group for the instances and allow inbound traffic
- on ports used by pgpool-II and watchdog.
- </para>
- </listitem>
-
- <listitem>
- <para>
- Install the <productname>Pgpool-II</productname> on both instances.
- </para>
- </listitem>
-
- <listitem>
- <para>
- Allocate an Elastic IP address.
- For this example, we will use "35.163.178.3" as an Elastic IP address"
- </para>
- </listitem>
-
- </itemizedlist>
-
- </sect2>
-
- <sect2 id="example-AWS-pgpool-config">
- <title><productname>Pgpool-II</productname> configurations</title>
- <para>
- Mostly the <productname>Pgpool-II</productname> configurations for this
- example will be same as in the <xref linkend="example-watchdog">, except the
- <xref linkend="guc-delegate-ip"> which we will not set in this example instead
- we will use <xref linkend="guc-wd-escalation-command"> and
- <xref linkend="guc-wd-de-escalation-command"> to switch the
- Elastic IP address to the maste/Active <productname>Pgpool-II</productname> node.
- </para>
-
- <sect3 id="example-AWS-pgpool-config-instance-1">
- <title><productname>Pgpool-II</productname> configurations on Instance-1</title>
- <para>
-
- <programlisting>
-use_watchdog = on
-delegate_IP = ''
-wd_hostname = 'instance-1-private-ip'
-other_pgpool_hostname0 = 'instance-2-private-ip'
-other_pgpool_port0 = 9999
-other_wd_port0 = 9000
-wd_escalation_command = '$path_to_script/aws-escalation.sh'
-wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh'
- </programlisting>
-
- </para>
- </sect3>
-
- <sect3 id="example-AWS-pgpool-config-instance-2">
- <title><productname>Pgpool-II</productname> configurations on Instance-2</title>
- <para>
-
- <programlisting>
-use_watchdog = on
-delegate_IP = ''
-wd_hostname = 'instance-2-private-ip'
-other_pgpool_hostname0 = 'instance-1-private-ip'
-other_pgpool_port0 = 9999
-other_wd_port0 = 9000
-wd_escalation_command = '$path_to_script/aws-escalation.sh'
-wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh'
- </programlisting>
-
- </para>
- </sect3>
- </sect2>
-
- <sect2 id="example-AWS-pgpool-aws-escalation-instance">
- <title>escalation and de-escalation Scripts</title>
- <para>
- Create the aws-escalation.sh and aws-de-escalation.sh scripts on both
- instances and point the <xref linkend="guc-wd-escalation-command"> and
- <xref linkend="guc-wd-de-escalation-command"> to the respective scripts.
- </para>
-
- <note>
- <para>
- You may need to configure the AWS CLI first on all AWS instances
- to enable the execution of commands used by wd-escalation.sh and wd-de-escalation.sh.
- See <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html">configure AWS CLI</ulink>
- </para>
- </note>
+ <programlisting>
+ use_watchdog = on
+ delegate_IP = ''
+ wd_hostname = 'instance-2-private-ip'
+ other_pgpool_hostname0 = 'instance-1-private-ip'
+ other_pgpool_port0 = 9999
+ other_wd_port0 = 9000
+ wd_escalation_command = '$path_to_script/aws-escalation.sh'
+ wd_de_escalation_command = '$path_to_script/aws-de-escalation.sh'
+ </programlisting>
- <sect3 id="example-AWS-pgpool-aws-escalation-script">
-       <title>Escalation Script</title>
+ </para>
+ </sect3>
+ </sect2>
+
+ <sect2 id="example-AWS-pgpool-aws-escalation-instance">
+       <title>Escalation and De-escalation Scripts</title>
+ <para>
+ Create the aws-escalation.sh and aws-de-escalation.sh scripts on both
+ instances and point the <xref linkend="guc-wd-escalation-command"> and
+ <xref linkend="guc-wd-de-escalation-command"> to the respective scripts.
+ </para>
+
+ <note>
+ <para>
+           You may need to configure the AWS CLI first on all AWS instances
+           to enable the execution of the commands used by aws-escalation.sh and aws-de-escalation.sh.
+           See <ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html">Configure AWS CLI</ulink>.
+ </para>
+ </note>
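+
+       <para>
+         As a rough sketch, the interactive AWS CLI setup on each instance
+         looks like the following (the key values shown here are the
+         placeholders used in the AWS documentation, not real credentials):
+         <programlisting>
+ $ aws configure
+ AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
+ AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
+ Default region name [None]: us-west-2
+ Default output format [None]: json
+         </programlisting>
+       </para>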
+
+ <sect3 id="example-AWS-pgpool-aws-escalation-script">
+       <title>Escalation Script</title>
+
+ <para>
+         This script will be executed by the watchdog
+         to assign the Elastic IP to the instance when the watchdog becomes the active/master node.
+         Change the INSTANCE_ID and ELASTIC_IP values to match your AWS setup.
+ </para>
+ <para>
+ <emphasis>aws-escalation.sh:</emphasis>
+ <programlisting>
+ #! /bin/sh
- <para>
-         This script will be executed by the watchdog
-         to assign the Elastic IP to the instance when the watchdog becomes the active/master node.
-         Change the INSTANCE_ID and ELASTIC_IP values to match your AWS setup.
- </para>
- <para>
- <emphasis>aws-escalation.sh:</emphasis>
- <programlisting>
-#! /bin/sh
+ ELASTIC_IP=35.163.178.3
+ # replace it with the Elastic IP address you
+ # allocated from the aws console
+ INSTANCE_ID=i-0a9b64e449b17ed4b
+ # replace it with the instance id of the Instance
+ # this script is installed on
-ELASTIC_IP=35.163.178.3
- # replace it with the Elastic IP address you
- # allocated from the aws console
-INSTANCE_ID=i-0a9b64e449b17ed4b
- # replace it with the instance id of the Instance
- # this script is installed on
+ echo "Assigning Elastic IP $ELASTIC_IP to the instance $INSTANCE_ID"
+ # bring up the Elastic IP
+ aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip $ELASTIC_IP
-echo "Assigning Elastic IP $ELASTIC_IP to the instance $INSTANCE_ID"
-# bring up the Elastic IP
-aws ec2 associate-address --instance-id $INSTANCE_ID --public-ip $ELASTIC_IP
+ exit 0
+ </programlisting>
-exit 0
- </programlisting>
+ </para>
- </para>
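+       <para>
+         Before wiring the script into the watchdog, you may want to run it
+         once by hand to verify the AWS CLI setup, and then confirm the
+         association (a sketch; the address is this example's Elastic IP):
+         <programlisting>
+ $ $path_to_script/aws-escalation.sh
+ $ aws ec2 describe-addresses --public-ips 35.163.178.3
+         </programlisting>
+       </para>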
+ </sect3>
+ <sect3 id="example-AWS-pgpool-aws-de-escalation-script">
+       <title>De-escalation Script</title>
- </sect3>
- <sect3 id="example-AWS-pgpool-aws-de-escalation-script">
-       <title>De-escalation Script</title>
+ <para>
+         This script will be executed by the watchdog
+         to remove the Elastic IP from the instance when the watchdog resigns from the active/master node.
+ </para>
+ <para>
+ <emphasis>aws-de-escalation.sh:</emphasis>
+ <programlisting>
+ #! /bin/sh
- <para>
-         This script will be executed by the watchdog
-         to remove the Elastic IP from the instance when the watchdog resigns from the active/master node.
- </para>
- <para>
- <emphasis>aws-de-escalation.sh:</emphasis>
- <programlisting>
-#! /bin/sh
-
-ELASTIC_IP=35.163.178.3
- # replace it with the Elastic IP address you
- # allocated from the aws console
-
-echo "disassociating the Elastic IP $ELASTIC_IP from the instance"
-# bring down the Elastic IP
-aws ec2 disassociate-address --public-ip $ELASTIC_IP
-exit 0
- </programlisting>
- </para>
- </sect3>
-
- <bibliography>
- <title>AWS Command References</title>
-
- <biblioentry>
- <biblioset relation="article">
- <title><ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html">Configure AWS CLI</ulink></title>
- </biblioset>
- <biblioset relation="book">
- <title>AWS Documentation: Configuring the AWS Command Line Interface</title>
- </biblioset>
- </biblioentry>
-
- <biblioentry>
- <biblioset relation="article">
- <title><ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/associate-address.html">associate-address</ulink></title>
- </biblioset>
- <biblioset relation="book">
- <title>AWS Documentation: associate-address reference</title>
- </biblioset>
- </biblioentry>
-
- <biblioentry>
- <biblioset relation="article">
- <title><ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/disassociate-address.html">disassociate-address</ulink></title>
- </biblioset>
- <biblioset relation="book">
- <title>AWS Documentation: disassociate-address reference</title>
- </biblioset>
- </biblioentry>
-
- </bibliography>
- </sect2>
-
- <sect2 id="example-AWS-try">
- <title>Try it out</title>
- <para>
-         Start <productname>Pgpool-II</productname> on each server with the "-n" switch
-         and redirect log messages to the pgpool.log file.
-         The log of the master/active <productname>Pgpool-II</productname> node
-         will show the Elastic IP assignment message.
- <programlisting>
-LOG: I am the cluster leader node. Starting escalation process
-LOG: escalation process started with PID:23543
-LOG: watchdog: escalation started
-<emphasis>
-Assigning Elastic IP 35.163.178.3 to the instance i-0a9b64e449b17ed4b
-{
- "AssociationId": "eipassoc-39853c42"
-}
-</emphasis>
-LOG: watchdog escalation successful
-LOG: watchdog escalation process with pid: 23543 exit with SUCCESS.
- </programlisting>
- </para>
+ ELASTIC_IP=35.163.178.3
+ # replace it with the Elastic IP address you
+ # allocated from the aws console
- <para>
-         Confirm that you can ping the Elastic IP address.
- <programlisting>
-[user@someserver]$ ping 35.163.178.3
-PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data.
-64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms
-64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms
-64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms
- </programlisting>
- </para>
+ echo "disassociating the Elastic IP $ELASTIC_IP from the instance"
+ # bring down the Elastic IP
+ aws ec2 disassociate-address --public-ip $ELASTIC_IP
+ exit 0
+ </programlisting>
+ </para>
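+       <para>
+         Make sure both scripts are executable by the user that runs
+         <productname>Pgpool-II</productname>, otherwise the watchdog cannot
+         invoke them. For example:
+         <programlisting>
+ $ chmod 755 $path_to_script/aws-escalation.sh
+ $ chmod 755 $path_to_script/aws-de-escalation.sh
+         </programlisting>
+       </para>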
+ </sect3>
+
+ <bibliography>
+ <title>AWS Command References</title>
+
+ <biblioentry>
+ <biblioset relation="article">
+ <title><ulink url="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html">Configure AWS CLI</ulink></title>
+ </biblioset>
+ <biblioset relation="book">
+ <title>AWS Documentation: Configuring the AWS Command Line Interface</title>
+ </biblioset>
+ </biblioentry>
+
+ <biblioentry>
+ <biblioset relation="article">
+ <title><ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/associate-address.html">associate-address</ulink></title>
+ </biblioset>
+ <biblioset relation="book">
+ <title>AWS Documentation: associate-address reference</title>
+ </biblioset>
+ </biblioentry>
+
+ <biblioentry>
+ <biblioset relation="article">
+ <title><ulink url="http://docs.aws.amazon.com/cli/latest/reference/ec2/disassociate-address.html">disassociate-address</ulink></title>
+ </biblioset>
+ <biblioset relation="book">
+ <title>AWS Documentation: disassociate-address reference</title>
+ </biblioset>
+ </biblioentry>
+
+ </bibliography>
+ </sect2>
+
+ <sect2 id="example-AWS-try">
+ <title>Try it out</title>
+ <para>
+         Start <productname>Pgpool-II</productname> on each server with the "-n" switch
+         and redirect log messages to the pgpool.log file.
+       </para>
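+       <para>
+         A minimal sketch of the start command (the configuration file path
+         is an example; adjust it to your installation):
+         <programlisting>
+ $ pgpool -n -f /usr/local/etc/pgpool.conf &gt; pgpool.log 2&gt;&amp;1 &amp;
+         </programlisting>
+       </para>
+       <para>
+         The log of the master/active <productname>Pgpool-II</productname> node
+         will show the Elastic IP assignment message.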
+ <programlisting>
+ LOG: I am the cluster leader node. Starting escalation process
+ LOG: escalation process started with PID:23543
+ LOG: watchdog: escalation started
+ <emphasis>
+ Assigning Elastic IP 35.163.178.3 to the instance i-0a9b64e449b17ed4b
+ {
+ "AssociationId": "eipassoc-39853c42"
+ }
+ </emphasis>
+ LOG: watchdog escalation successful
+ LOG: watchdog escalation process with pid: 23543 exit with SUCCESS.
+ </programlisting>
+ </para>
+
+ <para>
+         Confirm that you can ping the Elastic IP address.
+ <programlisting>
+ [user@someserver]$ ping 35.163.178.3
+ PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data.
+ 64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms
+ 64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms
+ 64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms
+ </programlisting>
+ </para>
+
+ <para>
+         Try connecting to <productname>PostgreSQL</productname> with "psql -h ELASTIC_IP -p port".
+ <programlisting>
+ [user@someserver]$ psql -h 35.163.178.3 -p 9999 -l
+ </programlisting>
+ </para>
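+
+       <para>
+         You can also check which node currently holds the master/active
+         role with the <command>pcp_watchdog_info</command> command, run on
+         one of the instances and assuming the PCP interface is configured
+         (the port and user here are examples):
+         <programlisting>
+ $ pcp_watchdog_info -h localhost -p 9898 -U pgpool -v
+         </programlisting>
+       </para>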
+ </sect2>
+
+ <sect2 id="example-AWS-vip-switch">
+ <title>Switching Elastic IP</title>
+ <para>
+         To confirm that the Standby server acquires the Elastic IP when the
+         Active server becomes unavailable, stop <productname>Pgpool-II</productname>
+         on the Active server. The Standby server should then start using the Elastic IP address,
+         and the <productname>Pgpool-II</productname> log will show the messages below.
+
+ <programlisting>
+ <emphasis>
+ LOG: remote node "172.31.2.94:9999 [Linux ip-172-31-2-94]" is shutting down
+ LOG: watchdog cluster has lost the coordinator node
+ </emphasis>
+ LOG: watchdog node state changed from [STANDBY] to [JOINING]
+ LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
+ LOG: I am the only alive node in the watchdog cluster
+ HINT: skiping stand for coordinator state
+ LOG: watchdog node state changed from [INITIALIZING] to [MASTER]
+ LOG: I am announcing my self as master/coordinator watchdog node
+ LOG: I am the cluster leader node
+ DETAIL: our declare coordinator message is accepted by all nodes
+ LOG: I am the cluster leader node. Starting escalation process
+ LOG: escalation process started with PID:23543
+ LOG: watchdog: escalation started
+ <emphasis>
+ Assigning Elastic IP 35.163.178.3 to the instance i-0dd3e60734a6ebe14
+ {
+ "AssociationId": "eipassoc-39853c42"
+ }
+ </emphasis>
+ LOG: watchdog escalation successful
+ LOG: watchdog escalation process with pid: 61581 exit with SUCCESS.
+ </programlisting>
+         Confirm that you can ping the Elastic IP address again.
+ <programlisting>
+ [user@someserver]$ ping 35.163.178.3
+ PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data.
+ 64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms
+ 64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms
+ 64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms
+ </programlisting>
+ </para>
+ <para>
+         Try connecting to <productname>PostgreSQL</productname> with "psql -h ELASTIC_IP -p port".
+ <programlisting>
+ [user@someserver]$ psql -h 35.163.178.3 -p 9999 -l
+ </programlisting>
+ </para>
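+       <para>
+         To fail back, start <productname>Pgpool-II</productname> on the
+         stopped instance again; it should rejoin the watchdog cluster as a
+         standby node. A sketch, using the same start command as before:
+         <programlisting>
+ $ pgpool -n -f /usr/local/etc/pgpool.conf &gt; pgpool.log 2&gt;&amp;1 &amp;
+         </programlisting>
+       </para>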
+ </sect2>
+ </sect1>
+
+ <sect1 id="example-Aurora">
+ <title>Aurora Configuration Example</title>
<para>
-         Try connecting to <productname>PostgreSQL</productname> with "psql -h ELASTIC_IP -p port".
- <programlisting>
-[user@someserver]$ psql -h 35.163.178.3 -p 9999 -l
- </programlisting>
+ <productname>Amazon Aurora for PostgreSQL
+ Compatibility</productname> (Aurora) is a managed service for
+     <productname>PostgreSQL</productname>. From the user's point of
+     view, <productname>Aurora</productname> can be regarded as a
+     streaming replication cluster with some exceptions. For example,
+     failover and online recovery are managed
+     by <productname>Aurora</productname>, so you don't need to
+     set <xref linkend="guc-failover-command">, <xref linkend="guc-follow-master-command">,
+     or recovery-related parameters. In this section we explain
+ how to set up <productname>Pgpool-II</productname> for Aurora.
</para>
- </sect2>
- <sect2 id="example-AWS-vip-switch">
- <title>Switching Elastic IP</title>
- <para>
-         To confirm that the Standby server acquires the Elastic IP when the
-         Active server becomes unavailable, stop <productname>Pgpool-II</productname>
-         on the Active server. The Standby server should then start using the Elastic IP address,
-         and the <productname>Pgpool-II</productname> log will show the messages below.
-
- <programlisting>
-<emphasis>
-LOG: remote node "172.31.2.94:9999 [Linux ip-172-31-2-94]" is shutting down
-LOG: watchdog cluster has lost the coordinator node
-</emphasis>
-LOG: watchdog node state changed from [STANDBY] to [JOINING]
-LOG: watchdog node state changed from [JOINING] to [INITIALIZING]
-LOG: I am the only alive node in the watchdog cluster
-HINT: skiping stand for coordinator state
-LOG: watchdog node state changed from [INITIALIZING] to [MASTER]
-LOG: I am announcing my self as master/coordinator watchdog node
-LOG: I am the cluster leader node
-DETAIL: our declare coordinator message is accepted by all nodes
-LOG: I am the cluster leader node. Starting escalation process
-LOG: escalation process started with PID:23543
-LOG: watchdog: escalation started
-<emphasis>
-Assigning Elastic IP 35.163.178.3 to the instance i-0dd3e60734a6ebe14
-{
- "AssociationId": "eipassoc-39853c42"
-}
-</emphasis>
-LOG: watchdog escalation successful
-LOG: watchdog escalation process with pid: 61581 exit with SUCCESS.
- </programlisting>
-         Confirm that you can ping the Elastic IP address again.
- <programlisting>
-[user@someserver]$ ping 35.163.178.3
-PING 35.163.178.3 (35.163.178.3) 56(84) bytes of data.
-64 bytes from 35.163.178.3: icmp_seq=1 ttl=64 time=0.328 ms
-64 bytes from 35.163.178.3: icmp_seq=2 ttl=64 time=0.264 ms
-64 bytes from 35.163.178.3: icmp_seq=3 ttl=64 time=0.412 ms
- </programlisting>
- </para>
- <para>
-         Try connecting to <productname>PostgreSQL</productname> with "psql -h ELASTIC_IP -p port".
- <programlisting>
-[user@someserver]$ psql -h 35.163.178.3 -p 9999 -l
- </programlisting>
- </para>
- </sect2>
- </sect1>
+ <sect2 id="example-Aurora-config">
+ <title>Setting pgpool.conf for Aurora</title>
+ <para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ Create <filename>pgpool.conf</filename>
+ from <filename>pgpool.conf.sample-stream</filename>.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ Set <xref linkend="guc-sr-check-period"> to 0 to
+ disable streaming replication delay checking.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+           Set <xref linkend="guc-enable-pool-hba"> to on so
+           that md5 authentication is enabled
+           (<productname>Aurora</productname> always uses md5
+           authentication).
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+           Create <filename>pool_passwd</filename>. See <xref linkend="auth-md5">
+ for more details.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+           Set <xref linkend="guc-backend-hostname">0 for the
+           Aurora writer node. Set the
+           other <xref linkend="guc-backend-hostname"> parameters for the
+           Aurora reader nodes. Set an
+           appropriate <xref linkend="guc-backend-weight"> as
+           usual. You don't need to
+           set <xref linkend="guc-backend-data-directory">.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+           Set the <varname>ALWAYS_MASTER</varname> flag in
+           the <xref linkend="guc-backend-flag"> for the master
+           node, as shown in the sketch after this list.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </para>
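+     <para>
+       Putting these items together, a minimal
+       <filename>pgpool.conf</filename> sketch for an Aurora cluster with
+       one writer and one reader might look like this (the endpoint host
+       names are placeholders for your own cluster endpoints):
+       <programlisting>
+ sr_check_period = 0
+ enable_pool_hba = on
+
+ backend_hostname0 = 'mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com'
+ backend_port0 = 5432
+ backend_weight0 = 1
+ backend_flag0 = 'ALWAYS_MASTER'
+
+ backend_hostname1 = 'mycluster.cluster-ro-xxxx.us-east-1.rds.amazonaws.com'
+ backend_port1 = 5432
+ backend_weight1 = 1
+       </programlisting>
+     </para>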
+ </sect2>
+ </sect1>
</chapter>
</part>