cPanel : “The Exim database is most likely corrupted and the following steps should be followed”

If you are seeing the following error in the exim logs, you will need to reset the exim databases.

“The Exim database is most likely corrupted and the following steps should be followed”

To reset exim’s retry, reject, and wait-remote_smtp hint databases on cPanel, I find the safest way is to run the following commands.

/usr/sbin/exim_tidydb -t 1d /var/spool/exim retry > /dev/null
/usr/sbin/exim_tidydb -t 1d /var/spool/exim reject > /dev/null
/usr/sbin/exim_tidydb -t 1d /var/spool/exim wait-remote_smtp > /dev/null
service exim restart

You can change the retention period of the cleanup (from 1d to 2d, and so on).
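The three commands differ only in the database name, so they can be wrapped in a small helper that takes the retention period as a parameter. This is just a sketch; tidy_exim_dbs is a hypothetical name:

```shell
# Sketch: reset exim's hint databases with an adjustable retention period.
# tidy_exim_dbs is a hypothetical helper; pass "2d", "7d", etc. (defaults to "1d").
tidy_exim_dbs() {
  retention="${1:-1d}"
  for db in retry reject wait-remote_smtp; do
    /usr/sbin/exim_tidydb -t "$retention" /var/spool/exim "$db" > /dev/null
  done
}
# Usage on the server (not run here):  tidy_exim_dbs 2d && service exim restart
```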

This issue usually affects email from providers like Gmail, Hotmail, AOL, etc.

CloudLinux LVE Manager displays no statistics (lveinfo)

Another little fix for an issue I came across this week, relating to CloudLinux’s LVEstats2.

I had a server running at 100% CPU and doing a huge amount of read/write I/O, causing issues with the SAN shelf. After looking at top, I noticed the LVE process (which collects usage data on users) was consuming most of the CPU and generating a lot of disk reads and writes.

After some investigation, and a look at the LVE SQLite database (/var/lve/lvestats2.db), it was apparent that LVE wasn’t updating the database correctly, and we can assume the database was corrupted. That also meant users were likely not being restricted and could consume all the available resources, compounding the issue.
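Before recreating the database, a quick sanity check on the file itself can back up the suspicion. A sketch; lve_db_info is a hypothetical helper, and a healthy SQLite file begins with the “SQLite format 3” magic header:

```shell
# Sketch: quick health check on the lvestats database file.
# lve_db_info is a hypothetical helper; a healthy SQLite file starts with
# the "SQLite format 3" magic header.
lve_db_info() {
  db="${1:-/var/lve/lvestats2.db}"
  [ -f "$db" ] || { echo "missing: $db"; return 1; }
  ls -lh "$db"          # size and timestamp: is it still being updated?
  head -c 15 "$db"; echo   # first 15 bytes: the SQLite magic header
}
# e.g.  lve_db_info     # checks /var/lve/lvestats2.db
```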

I found the following fixed the issue, and for good luck we rebooted the server (to ensure LVE attached itself to Apache, MySQL, etc. on boot).

Stop LVEstats:

service lvestats stop

Backup the old lvestats database:

mv /var/lve/lvestats2.db{,.old}

Create a new database file:

lve-create-db --recreate

Start LVEStats:

service lvestats start

For good luck, reboot the server.

This then fixed LVEstats, and the CPU and I/O loads returned to normal.

I hope this helps anyone else running CloudLinux’s LVEStats. Dan

npm install : Killed (Ubuntu 16.04)

While installing packages via npm, it failed with just the message “Killed”. That immediately made me suspect it was memory related; I was, after all, running the VM with only 1GB of memory.
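A quick way to confirm the suspicion is to search the kernel log for the OOM killer’s signature messages. A sketch, with check_oom as a hypothetical helper:

```shell
# Sketch: confirm the OOM killer was responsible before adding swap.
# check_oom is a hypothetical helper; point it at a saved copy of the kernel log.
check_oom() {
  grep -E -i 'out of memory|killed process' "$1" || echo "no OOM events found"
}
# e.g.  dmesg > /tmp/kern.log && check_oom /tmp/kern.log
```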

Fix npm install Killed

So to resolve this, you need to create and enable a swap file.

You can do this in Ubuntu 14.04 and 16.04 with the following commands:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl vm.vfs_cache_pressure=50
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.conf
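One caveat with the commands above: the fstab line is appended unconditionally, so re-running the setup duplicates it. A sketch of an idempotent version (add_swap_entry is a hypothetical helper; run it as root):

```shell
# Sketch: append the swap entry only if it isn't already present,
# so re-running the setup doesn't duplicate the line.
add_swap_entry() {
  fstab="${1:-/etc/fstab}"
  grep -q '^/swapfile ' "$fstab" || echo '/swapfile none swap sw 0 0' >> "$fstab"
}
```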

As always, I hope this helps anyone else with the same issue!

cPanel Error: The system experienced the following error when it attempted to install the “OWASP ModSecurity Core Rule Set V3.0” vendor

I’ve noticed that since upgrading cPanel to v68.0.28 our ModSecurity vendors have dropped off: they are no longer available in the interface, and the rules are no longer loaded by Apache.

When trying to add the OWASP Ruleset (Vendor) back, I get the following error message.

Error:The system experienced the following error when it attempted to install the “OWASP ModSecurity Core Rule Set V3.0” vendor: API failure: The system could not validate the new Apache configuration because httpd exited with a nonzero value. Apache produced the following error: httpd: Syntax error on line 208 of /etc/apache2/conf/httpd.conf: Syntax error on line 32 of /etc/apache2/conf.d/modsec2.conf: Syntax error on line 29 of /etc/apache2/conf.d/modsec/modsec2.cpanel.conf: Could not open configuration file /etc/apache2/conf.d/modsec_vendor_configs/OWASP/modsecurity_crs_10_setup.conf: No such file or directory

The fix!

The fix is to edit /var/cpanel/modsec_cpanel_conf_datastore and remove all of the active_configs and active_vendors entries.

$ nano /var/cpanel/modsec_cpanel_conf_datastore

Once the active_configs and active_vendors entries are removed, remember to leave the top line of the file in place: “---”.
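For illustration, a stripped-down datastore could be as minimal as the following (keep any other entries your file contains, such as disabled rules; only the active_configs and active_vendors sections go):

```yaml
---
```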

Then go back to WHM, and you should be able to install the Vendors!

I hope this fixes it for you. Remember to backup the /var/cpanel/modsec_cpanel_conf_datastore file.

Install PHP 5.6 on CentOS/RHEL 7 via YUM (Webtatic and IUS)

Again, another brain dump for future use. A stock installation of CentOS 7 ships with PHP 5.4, which is now end of life. This is how to install PHP 5.6, which is currently only receiving security updates.

Side note: These commands install the basic PHP requirements for Magento.

Installing PHP 5.6 on CentOS 7 via Webtatic

$ rpm -Uvh
$ rpm -Uvh
$ yum install php56w php56w-opcache php56w-xml php56w-mcrypt php56w-gd php56w-devel php56w-mysql php56w-intl php56w-mbstring php56w-bcmath php56w-soap


Installing PHP 5.6 on CentOS 7 via IUS

yum -y install epel-release
rpm -Uvh ius-release*.rpm
yum -y update
yum -y install php56u php56u-opcache php56u-xml php56u-mcrypt php56u-gd php56u-devel php56u-mysql php56u-intl php56u-mbstring php56u-bcmath php56u-soap

You can also install php-fpm via these repositories. For example: yum install php56w-fpm (webtatic) or yum install php56u-fpm (ius)

OpenVZ – Hostnames & Systemd (ovzhostname.service)

The problem?

For weeks, I’ve been battling with an issue with a new CentOS 7 template for cPanel and Plesk that I built for the OpenVZ hypervisor. Even when setting HOSTNAME=<hostname> in /etc/vz/<CTID>.conf, the container still rebooted with the hostname that was used when the template was created. The new, correct hostname would never be remembered, causing various issues with BIND, Apache, etc.

Even trying to set the hostname with the following failed!

[root@ct /]# hostnamectl set-hostname <hostname>
Could not set property: Activation of org.freedesktop.hostname1 timed out

Today it finally clicked! It all starts with the base template I downloaded.

It seems that embedded in the template is a Systemd script which sets the hostname when the container starts.

The solution?

The hostname is taken from /etc/sysconfig/ovzhostname within the container, so you have a few options:

  1. Set the hostname in /etc/sysconfig/ovzhostname
  2. Disable the ovzhostname service:
    [root@ct /]# systemctl disable ovzhostname.service

This will then hopefully mean that when the container restarts, the hostname will be persistent.
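Option 1 can be sketched as a small helper (set_ovzhostname is hypothetical; the file path argument defaults to the real location, and the example hostname is a placeholder):

```shell
# Sketch: persist the hostname where ovzhostname.service reads it on boot.
# set_ovzhostname is a hypothetical helper; the second argument is only
# overridable so the function can be exercised safely outside a container.
set_ovzhostname() {
  echo "OVZHOSTNAME=\"$1\"" > "${2:-/etc/sysconfig/ovzhostname}"
}
# e.g.  set_ovzhostname ct101.example.com    # hostname is a placeholder
```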



Here’s the ovzhostname script so you can see what it’s doing:



if [[ -f /etc/sysconfig/ovzhostname ]]; then
    source /etc/sysconfig/ovzhostname
    if [[ -n "${OVZHOSTNAME}" ]]; then
        echo "${OVZHOSTNAME}" > /etc/hostname
        hostname "${OVZHOSTNAME}"
        hostnamectl set-hostname "${OVZHOSTNAME}"
    fi
fi

Comodo WAF: mod_security2: Failed to write to DBM file “/var/cache/modsecurity/ip”: Invalid argument

After seeing Apache using all its threads, and connections not timing out as they should, I looked at the Apache error_log and found the following error.

Message: collection_store: Failed to write to DBM file "/var/cache/modsecurity/ip": Invalid argument

I saw this not only on cPanel servers, but also on Plesk and plain LAMP servers (with mod_security and Comodo WAF installed).

It looks like Comodo released a broken update that caused /var/cache/modsecurity/ip.pag to become corrupted (that’s my guess).

The fix is rather simple. Update your Comodo WAF rules to version 1.142 or higher, and reset the /var/cache/modsecurity/ip.pag file.

Check the rules.dat file for the version number.

$ echo "" > /var/cache/modsecurity/ip.pag
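If you’d rather keep a copy of the old file than simply overwrite it, a backup-then-truncate sketch (reset_ip_dbm is a hypothetical helper):

```shell
# Alternative sketch: back up the DBM file before emptying it.
# reset_ip_dbm is a hypothetical helper; stop Apache first if you can.
reset_ip_dbm() {
  f="${1:-/var/cache/modsecurity/ip.pag}"
  cp "$f" "$f.bak" && : > "$f"   # truncate to zero bytes
}
```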

Then restart Apache, and this should fix your issue. You’ll also notice Apache threads timing out faster.


Dovecot modseq_hdr.log_offset too large (Plesk)

Another quick fix post!


A mailbox was not receiving mail on a Postfix & Dovecot stack on a Plesk server. The following error messages were being shown in the mail log:

$ tail -f /var/log/maillog
Apr 25 15:04:34 server01 dovecot: service=imap, user=<user@example.com>, ip=[x.x.x.x]. Error: modseq_hdr.log_offset too large

As you can see, the error is: “modseq_hdr.log_offset too large”.

I’m not sure what caused this (I think it’s related to the dovecot.index file, but I’m not 100% sure), but this quick solution fixed it.


To fix this, delete all Dovecot files (config and index files) from the mail directory. Note that the command below does this for every mailbox under /var/qmail/mailnames/:

$ find /var/qmail/mailnames/ -name "dovecot*" -delete

Restart Postfix & Dovecot (to rebuild the dovecot files):

$ service dovecot restart
$ service postfix restart

Warning: The command above removes the Dovecot configuration and index files for every mailbox under /var/qmail/mailnames/. Make sure you back them up before running it!
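If you’d rather not touch every mailbox on the server, the cleanup can be scoped to a single mailbox. A sketch, with purge_dovecot_files as a hypothetical helper and a placeholder path:

```shell
# Narrower variant (sketch): limit the cleanup to one mailbox instead of all.
# purge_dovecot_files is a hypothetical helper; the example path is a placeholder.
purge_dovecot_files() {
  find "${1:?mailbox path required}" -name "dovecot*" -delete
}
# e.g.  purge_dovecot_files /var/qmail/mailnames/example.com/user
```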

As if by magic, the mailbox began to receive email again!

Plesk “The component was not installed” for all services.

This is a really small blog post, but it’s an issue I wanted to share – so hopefully anyone who comes across this issue themselves can avoid the mistake I made!

Basically, we noticed that one of our older Plesk servers seemed to have lost basic functionality, like editing DNS zones or accessing phpMyAdmin and Webmail.

When going to Tools & Settings > Server Components we noticed that next to all the services (components), there was the message “The component was not installed”.

Interestingly, when accessing the server via SSH, the services were installed and functioning correctly (websites were accessible, DNS was answering queries, etc.).

Sadly, I jumped straight to the wrong conclusion, and thought that a recent automatic update of Plesk had failed, and that the PSA services had been corrupted. I was wrong.

I ran the Plesk bootstrapper. If anyone is familiar with this, you’ll understand the pain I was going through. Two hours later, and the issue was not resolved. What do you try next?

Well, after the bootstrapper failed to fix the issue, I just tried running “yum update” to see if there were any package updates etc.

BOOM! There it was!

Thread died in Berkeley DB library
DB_RUNRECOVERY: Fatal error, run database recovery

Great! That’s a really easy fix.

rm -f /var/lib/rpm/__*   # remove the stale Berkeley DB environment files
rpm --rebuilddb          # rebuild the RPM database
yum clean all            # flush yum’s caches

Now, check back in Plesk, and all functionality and the Server Components list will be fixed!

R1Soft : GC overhead limit exceeded

I encountered this issue on the R1Soft Web Interface last week, which I had to open a support ticket for with R1Soft’s brilliant support.


When trying to run tasks like a backup or a restore, I would encounter “GC overhead limit exceeded” error messages.

The error came down to the maximum amount of memory assigned to the Java heap and PermGen. After contacting R1Soft’s support, I was pointed to a config file where the Java heap and PermGen values can be adjusted.

To increase the java heap:

# Edit /usr/sbin/r1soft/conf/server.conf

  1. Locate the following line at the top: “compute.maxmemory=true” and change it to “compute.maxmemory=false”.
  2. Locate the following line: “#maxmemory=” and remove the hash symbol. Set it to 8192 (in MB). It should now read “maxmemory=8192”.
  3. Save the server.conf file.
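The two edits above can also be scripted with sed. A sketch (tune_r1soft_heap is a hypothetical helper; it keeps a .bak copy of server.conf in case of trouble):

```shell
# Sketch: apply the two server.conf edits with sed, keeping a backup.
# tune_r1soft_heap is a hypothetical helper.
tune_r1soft_heap() {
  conf="${1:-/usr/sbin/r1soft/conf/server.conf}"
  cp "$conf" "$conf.bak"
  sed -i 's/^compute\.maxmemory=true/compute.maxmemory=false/' "$conf"
  sed -i 's/^#maxmemory=.*/maxmemory=8192/' "$conf"
}
```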

# Now restart the CDP Server Service.

  1. /etc/init.d/cdp-server stop
  2. /etc/init.d/cdp-server restart


To increase the PermGen Space:

# Edit /usr/sbin/r1soft/conf/server.conf

  1. Locate the following line: “additional.10=-XX:MaxPermSize=256m” and change it to “additional.10=-XX:MaxPermSize=1024m”.
  2. Save the server.conf file.

# Now restart the CDP Server Service

  1. /etc/init.d/cdp-server stop
  2. /etc/init.d/cdp-server restart