PureMessage for UNIX: What PureMessage directories grow on a typical PMX installation?

  • Article ID: 39316
  • Updated: 15 Jul 2011

This is a list of directories that tend to grow in a typical PureMessage for UNIX installation:

/opt/pmx/var/logs/message_log
Log of each mail transfer agent (MTA) message ID, with recipients, spam probability score and reason. (Rotated according to the configuration files in the /opt/pmx/etc/logrotate.d directory.)
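
To see how much space this log and its rotated copies are using, standard Unix tools are enough. The commands below are a minimal sketch that assumes the default installation paths shown above:

    # Size of the current message log and any rotated copies
    du -sh /opt/pmx/var/logs/message_log*

    # Review the rotation settings that apply to this log
    ls /opt/pmx/etc/logrotate.d/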

/opt/pmx/var/logs/pmx_log
Log of PureMessage processes and their output, including the milter, quarantine, pmx-queue and scheduler. (Rotated according to the configuration files in the /opt/pmx/etc/logrotate.d directory.) This log grows quickly if the debug_level setting in your pmx-config output is set high (the default is 0). You can check this by running "pmx-config | grep debug".
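
As a quick check (a sketch assuming the default install location), you can confirm the debug level and see how much space this log is taking:

    # Check whether debugging is enabled (default debug_level is 0)
    pmx-config | grep debug

    # Size of pmx_log and its rotated copies
    du -sh /opt/pmx/var/logs/pmx_log*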

/opt/pmx/var/qdir/cur
Subdirectories of the location where all quarantined messages are kept. Expect approximately 1 GB per 100k messages; you can check the message count with the "pmx-quarantine count" command. The quarantine is trimmed by the pmx-qexpire process in the scheduler, according to the number of days specified by its --days switch. If the --archive <path> option is set, pmx-qexpire compresses the expired quarantine messages and moves them to that path (by default, /opt/pmx/home/archive). If you do not need to archive the quarantine, change the pmx-qexpire entry in Scheduled Jobs to use --noarchive.
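
To gauge how large the quarantine has grown, the commands already mentioned above plus standard Unix tools are sufficient (default paths assumed):

    # Number of messages currently in the quarantine
    pmx-quarantine count

    # Disk space used by the quarantine directory
    du -sh /opt/pmx/var/qdir/cur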

/opt/pmx/postgres/var/data
Subdirectories of the location where the PostgreSQL database keeps its files. This is trimmed by the PureMessage user's vacuum cron job. Expect roughly 1.5 GB per 100k messages; if the directory is far larger than that, database tuning parameters or a dump/restore can be used to reduce its size.
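
To see how large the database has grown and to confirm the vacuum job is scheduled, something like the following can be used. The account name "pmx" is an assumption; adjust it to whatever user your PureMessage installation runs as:

    # Disk space used by the PostgreSQL data directory
    du -sh /opt/pmx/postgres/var/data

    # Confirm the vacuum cron job exists for the PureMessage user
    # ("pmx" is an assumed user name; adjust for your installation)
    crontab -l -u pmx | grep -i vacuum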

/opt/pmx/home/archive
The quarantine archive feature is on by default in PureMessage, and expired quarantine messages are archived into this directory. On a server with heavy mail flow this can easily grow to many gigabytes.
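
A simple check of how much space the archive is consuming (assuming the default archive path):

    # Disk space used by the quarantine archive
    du -sh /opt/pmx/home/archive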

 
If you need more information or guidance, please contact technical support.
