There are moving parts to this solution, so you'll need to do some leg work before being able to use the script.

In order to take a ZFS snapshot on the remote primary server and initiate replication back to the secondary, you need to have an operating system user set up with a password-less SSH key (on the primary database server):

```
$ sudo pw groupadd -n zfssync -g 6000
$ sudo pw useradd -n zfssync -u 6000 -g 6000 -m
Enter file in which to save the key (/home/zfssync/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Your identification has been saved in /home/zfssync/.ssh/id_rsa.
Your public key has been saved in /home/zfssync/.ssh/id_rsa.pub.
48:da:0d:68:2e:66:96:ee:d8:ba:fc:6d:a3:b6:dd:8d
The key's randomart image is:

# Copy /home/zfssync/.ssh to your secondary database server
```

Delegate the necessary ZFS permissions to the new user:

```
$ sudo zfs allow -u zfssync create,mount,snapshot,send,receive,hold data/pgsql
```

## PostgreSQL User

The calls to pg_start_backup and pg_stop_backup require elevated privileges in PostgreSQL. Here's how to set up a PostgreSQL database user with the replication role:

```
$ psql -U pgsql postgres
postgres=# create user replicator with replication;
```

Then, in pg_hba.conf:

```
host    all    replicator    10.12.2.0/24    trust
```

You can use md5 for the METHOD, but you must then set up the .pgpass file for your operating system user.

## Coup de grâce

In order to take advantage of efficient incremental replication, the secondary database server must first transfer an initial snapshot of the ZFS filesystem holding the current PostgreSQL data directory (this builds on the first requisite above, to be run on the secondary server):

```
$ sudo ssh -i /home/zfssync/.ssh/id_rsa zfs snapshot
$ sudo ssh -i /home/zfssync/.ssh/id_rsa zfs send -Rv | sudo zfs recv -Fv data/pgsql
```

The reset_secondary.sh script (and ancillary config file info) is on GitHub. Its usage is pretty simple (on the secondary):

```
$ sudo /path/to/reset_secondary.sh
```

After the script has run, the secondary server will be running off of an up-to-date ZFS snapshot. If you use the config file info in the gist, it will then continue to use PostgreSQL's built-in streaming replication to keep itself up to date.

It is possible to build on this setup and add automatic failover and promotion of the secondary to primary by using FreeBSD's CARP and ifstated mechanisms.

---

I am running a write-heavy application where I am storing my PostgreSQL database on ZFS. Generally, it works well, but I am finding that my ZFS pool is fragmenting heavily. I even created a new ZFS pool and moved the data there using zfs send / recv, in order to defragment the full space and try out some new settings which might limit fragmentation, but to no avail.

The application is mostly making UPDATEs, but Postgres should be able to reuse the space already on disk (as it creates dead tuples and autovacuums them, it will re-use those pages). I do see that the allocated space is mostly staying the same, so it's not increasing disk usage. However, the copy-on-write nature of ZFS seems to be making the filesystem use up the free space and fragment it heavily. So, after about 15m of heavy writes, the ZFS pool fragmentation went from 0% (initial) to 14%. This can't be the way things are supposed to be:

```
NAME  SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
```

(`zpool list` output; the values were in an image that did not survive.)
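On the fragmentation question, one of the "new settings which might limit fragmentation" commonly discussed for PostgreSQL on ZFS is matching the dataset's record size to PostgreSQL's 8 kB page size, so an UPDATE rewrites one small record instead of part of a default 128 kB one. A hedged sketch only, reusing the `data/pgsql` dataset name from the walkthrough above; recordsize affects newly written files only, and this is a candidate tuning, not a guaranteed fix:

```
# Match ZFS record size to PostgreSQL's 8 kB page size (new writes only)
$ sudo zfs set recordsize=8K data/pgsql

# Watch fragmentation over time via the pool's FRAG column
$ zpool list -o name,size,alloc,free,frag,cap data
```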
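The snapshot-shipping cycle described in the replication walkthrough can be sketched as a small driver script. This is a minimal, hedged sketch, not the author's reset_secondary.sh: the host name `primary.example.com` and the snapshot names `snap1`/`snap2` are hypothetical placeholders, and the script only prints the commands it would run (dry-run) so you can review them before wiring up real execution.

```shell
#!/bin/sh
# Dry-run sketch of one incremental snapshot-shipping cycle (run on the secondary).
# HYPOTHETICAL values: PRIMARY, PREV, and NEXT are illustrative, not from the gist.
PRIMARY="primary.example.com"
DATASET="data/pgsql"
KEY="/home/zfssync/.ssh/id_rsa"
PREV="snap1"   # last snapshot the secondary already holds
NEXT="snap2"   # fresh snapshot to take on the primary

# Dry-run helper: print each command instead of executing it.
run() { echo "WOULD RUN: $*"; }

# 1. Take a new snapshot on the primary over SSH.
run "ssh -i $KEY zfssync@$PRIMARY zfs snapshot $DATASET@$NEXT"

# 2. Ship only the delta between PREV and NEXT, applying it locally.
run "ssh -i $KEY zfssync@$PRIMARY zfs send -R -i $DATASET@$PREV $DATASET@$NEXT | zfs recv -Fv $DATASET"
```

Because `zfs send -i` transfers only the blocks changed since the previous snapshot, each cycle after the initial full transfer is cheap, which is what makes the periodic-reset approach practical.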