Docker : Stop All Containers

Last brain dump of today. Here are some quick commands to stop running Docker containers and clean up afterwards (remove container files and images).

Stop all containers:

docker stop $(docker ps -aq)

Remove all containers:

docker rm $(docker ps -aq)

Remove all images:

docker rmi $(docker images -q)
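
On newer Docker releases (1.13+), much of this cleanup can be done in one go with the prune command, which removes all stopped containers and unused networks, and with -a, all unused images:

docker system prune -a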

GitLab : Build Docker Image within CI/CD Pipeline

Another brain dump for future reference. This covers setting up GitLab to build and run Docker images when the CI/CD pipeline runs.

Install gitlab-runner

On a Linux x86-64 machine, download the gitlab-runner binary:

sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

Make it executable:

sudo chmod +x /usr/local/bin/gitlab-runner

Install Docker:

curl -sSL https://get.docker.com/ | sh

Create the gitlab-runner user:

sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

Install gitlab-runner:

sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner

Start the gitlab-runner:

sudo gitlab-runner start

Register the gitlab-runner

Register the gitlab-runner:

sudo gitlab-runner register -n \
  --url https://gitlab.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor shell \
  --description "My Runner"
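
REGISTRATION_TOKEN is the registration token from GitLab; at the time of writing you can find it in your project under Settings > CI/CD > Runners.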

Add the gitlab-runner user to the docker group:

sudo usermod -aG docker gitlab-runner

Verify the gitlab-runner has docker access:

sudo -u gitlab-runner -H docker info

Creating your .gitlab-ci.yml

You can now test the runner by committing the following .gitlab-ci.yml:

before_script:
  - docker info

build_image:
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
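
Note: if you registered the runner with tags, the job will also need a matching tags: entry (or the runner must be configured to run untagged jobs), otherwise the pipeline will sit pending.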

References:
https://docs.gitlab.com/runner/install/linux-manually.html
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html

cPanel : “The Exim database is most likely corrupted and the following steps should be followed”

If you are seeing the following error in the Exim logs, you will need to reset the Exim databases.

“The Exim database is most likely corrupted and the following steps should be followed”

To reset Exim's databases of retry, reject, and wait-remote_smtp entries on cPanel, I find the safest way is to run the following commands.

/usr/sbin/exim_tidydb -t 1d /var/spool/exim retry > /dev/null
/usr/sbin/exim_tidydb -t 1d /var/spool/exim reject > /dev/null
/usr/sbin/exim_tidydb -t 1d /var/spool/exim wait-remote_smtp > /dev/null
service exim restart

You can change the retention period of the cleanup (from 1d to 2d, etc.); exim_tidydb removes entries older than the given age.

This issue usually affects email from domains like Gmail, Hotmail, AOL, etc.
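
To stop the hint databases from growing out of control again, you could run the same cleanup on a schedule. A minimal sketch of a weekly root cron entry (the 2am Sunday schedule and 7d retention are assumptions; adjust to taste):

0 2 * * 0 /usr/sbin/exim_tidydb -t 7d /var/spool/exim retry > /dev/null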

CloudLinux LVE Manager displays no statistics (lveinfo)

Another little fix for an issue I came across this week, relating to CloudLinux's LVEstats2.

I had a server running at 100% CPU and doing a huge amount of read/write I/O, causing issues with the SAN shelf. After looking at top, I noticed the LVE process (which collects usage data on users) was consuming most of the CPU and doing a lot of reads and writes to the disk.

After some investigation, and after looking at the LVE SQLite database (/var/lve/lvestats2.db), it was apparent that LVE wasn't updating the database correctly, presumably because the database was corrupted. That also meant users were not being restricted and could consume all the available resources, compounding the issue.

I found the following fixed the issue, and for good measure we rebooted the server (to ensure LVE attached itself to Apache, MySQL, etc. on boot).

Stop LVEstats:

service lvestats stop

Backup the old lvestats database:

mv /var/lve/lvestats2.db{,.old}

Create a new database file:

lve-create-db --recreate

Start LVEStats:

service lvestats start

For good luck, reboot the server.

This fixed LVEstats, and the CPU and I/O loads returned to normal.
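
You can confirm statistics are being collected again with lveinfo, e.g. (check lveinfo --help for the exact options on your lve-stats version):

lveinfo --period=1d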

I hope this helps anyone else running CloudLinux’s LVEStats. Dan

npm install : Killed (Ubuntu 16.04)

While installing packages via npm, the install failed with just the message "Killed". That immediately made me suspect it was memory related; after all, I was running the VM with only 1GB of memory.

Fix npm install Killed

To resolve this, you need to create and enable a swap file.

You can do this in Ubuntu 14.04 and 16.04 with the following commands:

# Create and enable a 1G swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo swapon --show

# Make the swap file persistent across reboots
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Tune the kernel to prefer RAM and retain inode/dentry caches
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf
sudo sysctl vm.vfs_cache_pressure=50
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.conf
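
Once done, you can check that the swap is active and see the new memory headroom:

free -h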

As always, I hope this helps anyone else with the same issue!

cPanel Error:The system experienced the following error when it attempted to install the “OWASP ModSecurity Core Rule Set V3.0” vendor

I've noticed that since upgrading cPanel to v68.0.28, our ModSecurity vendors have dropped off and are no longer available in the interface, and the rules are no longer loaded by Apache.

When trying to add the OWASP rule set (vendor) back, I got the following error message.

Error:The system experienced the following error when it attempted to install the “OWASP ModSecurity Core Rule Set V3.0” vendor: API failure: The system could not validate the new Apache configuration because httpd exited with a nonzero value. Apache produced the following error: httpd: Syntax error on line 208 of /etc/apache2/conf/httpd.conf: Syntax error on line 32 of /etc/apache2/conf.d/modsec2.conf: Syntax error on line 29 of /etc/apache2/conf.d/modsec/modsec2.cpanel.conf: Could not open configuration file /etc/apache2/conf.d/modsec_vendor_configs/OWASP/modsecurity_crs_10_setup.conf: No such file or directory

The fix!

The fix is to edit /var/cpanel/modsec_cpanel_conf_datastore and remove all of the active_configs and active_vendors entries.

$ nano /var/cpanel/modsec_cpanel_conf_datastore

Remember to leave the top line of the file (the YAML document marker '---') in place.

Then go back to WHM, and you should be able to install the Vendors!

I hope this fixes it for you. Remember to back up the /var/cpanel/modsec_cpanel_conf_datastore file before editing it.
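
For example, a quick way to take a copy before editing (the .bak suffix is just a convention):

cp /var/cpanel/modsec_cpanel_conf_datastore{,.bak}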

Install PHP 5.6 on CentOS/RHEL 7 via YUM (Webtatic and IUS)

Again, another brain dump for future use. A stock installation of CentOS 7 comes packaged with PHP 5.4, which is now end of life. This is how to install PHP 5.6, which is currently only receiving security updates.

Side note: These commands install the basic PHP requirements for Magento.

Installing PHP 5.6 on CentOS 7 via Webtatic

$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
$ yum install php56w php56w-opcache php56w-xml php56w-mcrypt php56w-gd php56w-devel php56w-mysql php56w-intl php56w-mbstring php56w-bcmath php56w-soap

Further Documentation: https://webtatic.com/packages/php56/

Installing PHP 5.6 on CentOS 7 via IUS

yum -y install epel-release
wget https://centos7.iuscommunity.org/ius-release.rpm
rpm -Uvh ius-release*.rpm
yum -y update
yum -y install php56u php56u-opcache php56u-xml php56u-mcrypt php56u-gd php56u-devel php56u-mysql php56u-intl php56u-mbstring php56u-bcmath php56u-soap

You can also install php-fpm via these repositories. For example: yum install php56w-fpm (Webtatic) or yum install php56u-fpm (IUS).
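
Whichever repository you choose, you can confirm the installed version afterwards:

php -v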

Centovacast – Getting Listener Statistics via MySQL

We ran into a little issue with our Centovacast installation last month. It turns out that if you have a few large radio stations on the same server, the MySQL database tables get rather full (20 million rows in our case), and pulling that data back into the Centovacast interface caused issues (timeouts, 500 errors, etc.). This ultimately meant our customers could not retrieve the statistics they needed.

So the only solution was to manually query the tables and generate our own statistics for our customers. I wrote a little PHP/HTML interface for this, however you can easily do it via a MySQL client.

Here’s what the customers were requesting, and the SQL queries to get them!

SQL Time!

Replace the accountid=269 in the queries below with the account you wish to get the data for.

You can get the accountid from the accounts table:

SELECT * FROM centovacast.accounts;

# Total sessions

SELECT COUNT(*) AS sessions FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Total listener seconds

SELECT SUM(duration) as duration FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Average session length (seconds)

SELECT AVG(duration) as seconds FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Total data transfer (KB)

SELECT SUM(bandwidth) as bandwidth FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Average transfer (KB)

SELECT AVG(bandwidth) as bandwidth FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Unique Listeners

SELECT COUNT(DISTINCT ipaddress) FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Unique Countries

SELECT COUNT(DISTINCT country) FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# ASCAP music sessions

SELECT COUNT(*) FROM centovacast.playbackstats_tracks WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269 GROUP BY DAY(starttime), HOUR(starttime), name;
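
If these queries run slowly against a table of this size, an index covering the WHERE clause can help dramatically. A sketch, assuming no similar index already exists (the index name is made up; check SHOW INDEX FROM visitorstats_sessions first, and note that building an index on a 20-million-row table can lock it for a while on older MySQL versions):

ALTER TABLE centovacast.visitorstats_sessions ADD INDEX idx_accountid_starttime (accountid, starttime);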


With most of my blog posts, I just write this stuff down for future reference. However, I hope it's helped at least someone in the same situation!

Meltdown and Spectre : Patching Linux

Here's a quick guide on how to patch some of the many Linux distros against the Meltdown and Spectre vulnerabilities, after spending the week monitoring each distribution and waiting for results to decide the best time to patch.

Don’t forget to reboot your machine/server after applying the updates!


CentOS 7

x86_64

$ sudo yum clean all && sudo yum install kernel-3.10.0-693.11.6.el7.x86_64
$ sudo reboot

Patched Kernel : kernel-3.10.0-693.11.6.el7.x86_64

Source: https://lists.centos.org/pipermail/centos-announce/2018-January/022696.html


CentOS 6

x86_64

$ sudo yum clean all && sudo yum install kernel-2.6.32-696.18.7.el6.x86_64
$ sudo reboot

Patched Kernel : kernel-2.6.32-696.18.7.el6.x86_64

i386

$ sudo yum clean all && sudo yum install kernel-2.6.32-696.18.7.el6.i686
$ sudo reboot

Patched Kernel : kernel-2.6.32-696.18.7.el6.i686

Source: https://lists.centos.org/pipermail/centos-announce/2018-January/022701.html


Ubuntu

x86_64

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo reboot

Patched 16.04 LTS Kernel : linux-image-4.4.0-108-generic

Source: https://usn.ubuntu.com/usn/usn-3522-1/


Debian

x86_64

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo reboot

Patched Kernel : linux-image-4.9.0-5-amd64

Source: https://packages.debian.org/stretch/kernel/linux-image-4.9.0-5-amd64


CloudLinux 7

x86_64

$ yum clean all --enablerepo=cloudlinux-updates-testing && yum update linux-firmware microcode_ctl && yum install kernel-3.10.0-714.10.2.lve1.5.8.el7 --enablerepo=cloudlinux-updates-testing
$ reboot

Patched Kernel : kernel-3.10.0-714.10.2.lve1.5.8.el7

Source: https://www.cloudlinux.com/cloudlinux-os-blog/entry/intel-cpu-bug-kernelcare-and-cloudlinux


CloudLinux 6

x86_64

$ yum clean all --enablerepo=cloudlinux-updates-testing && yum install kernel-2.6.32-896.16.1.lve1.4.50.el6 --enablerepo=cloudlinux-updates-testing
$ reboot

Patched Kernel : kernel-2.6.32-896.16.1.lve1.4.50.el6


Source: https://www.cloudlinux.com/cloudlinux-os-blog/entry/cloudlinux-6-kernel-updated-1-5

OpenVZ

x86_64

$ yum install vzkernel-2.6.32-042stab127.2.x86_64
$ reboot

Patched Kernel : vzkernel-2.6.32-042stab127.2.x86_64.rpm

Source: https://openvz.org/Download/kernel/rhel6/042stab127.2


Warning

At the time of posting, the kernel versions above were the latest. If a newer kernel is available, use that instead.

If in doubt:

(CentOS, RHEL, Fedora, Oracle, Scientific Linux)

$ yum update

or (Debian/Ubuntu)

$ apt-get update && apt-get dist-upgrade
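
After rebooting, you can confirm the machine is actually running the patched kernel:

$ uname -r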

I’ll be adding more as I find stable releases.

OpenVZ – Hostnames & Systemd (ovzhostname.service)

The problem?

For weeks, I've been battling an issue with a new CentOS 7 template for cPanel and Plesk that I built for the OpenVZ hypervisor. Even when setting HOSTNAME=<hostname> in /etc/vz/conf/<CTID>.conf, the container still rebooted with the hostname that was in use when the template was created, meaning the new and correct hostname would never be remembered. This caused various issues with BIND, Apache, etc.

Even trying to set the hostname with the following failed!

[root@server /]# hostnamectl set-hostname server2.host.com
Could not set property: Activation of org.freedesktop.hostname1 timed out

Today it finally clicked! It all starts with the base template I downloaded from https://openvz.org/Download/template/precreated

It seems that embedded in the template is a systemd service (ovzhostname.service) which sets the hostname when the container starts.

The solution?

The hostname is taken from /etc/sysconfig/ovzhostname within the container, so you have a couple of options:

  1. Set the correct hostname in /etc/sysconfig/ovzhostname
  2. Disable the ovzhostname service:
    [root@server /]# systemctl disable ovzhostname.service

This should then mean that when the container restarts, the hostname will persist.
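
You can verify after a container restart:

[root@server /]# hostnamectl status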


Further reading…

There isn’t a lot I can find about this, apart from: https://lists.openvz.org/pipermail/users/2016-November/007204.html

Here’s the ovzhostname script so you can see what it’s doing:

#!/bin/bash

OVZHOSTNAME=""

# Read the hostname from the config file, if it exists
if [[ -f /etc/sysconfig/ovzhostname ]]; then
    source /etc/sysconfig/ovzhostname
    # Apply it only if a non-empty hostname was set
    if [[ -n "${OVZHOSTNAME}" ]]; then
        echo "${OVZHOSTNAME}" > /etc/hostname
        hostname "${OVZHOSTNAME}"
        hostnamectl set-hostname "${OVZHOSTNAME}"
    fi
fi
