Hack The North 2.0

This weekend, I decided to attend Hack The North, a two-day event organised by the DWP Digital team where like-minded people gathered to identify and attempt to address the difficulties and hardships that individuals face when using support services offered by organisations in and around Manchester. This all-night event allows attendees to pitch their ideas and collaborate with others to implement a proof of concept that addresses these important issues.

This was the first event of its kind I’ve attended, and I was pleasantly surprised by how many individuals shared the same passion for addressing the issues and difficulties that vulnerable users of these support services face. People from many different backgrounds attended; technical and non-technical minds combined to identify projects that would benefit the people of the region.

After arriving at the event and receiving a free MongoDB t-shirt, I dived without hesitation into the free food and drinks provided by the sponsors, MongoDB and Northcoders. We then all took our places as Dan (@dtanham) and Rachel (@racheljanewoods) from DWP Digital gave the introductions. The scene had been set, and the expected outcomes of the event presented.

Northcoders

Amul Batra (@amul5) from Northcoders presented the amazing work their courses do to train and qualify candidates as software engineers. He also told personal stories of how graduates from different (and sometimes desperate) backgrounds took Northcoders’ courses to improve their personal circumstances, with 95% of graduates going on to secure a role in the industry as a software engineer. This was certainly positive news to hear, and it showed that organisations like Northcoders were already making a positive impact in Manchester.

Job Centre Plus

Members of the Job Centre Plus Manchester team, Irene and Tim, introduced the DWP and went on to discuss the difficulties and problems that individuals who use Job Centre Plus face when trying to access important services. They also presented individual examples of use cases and blockers that they encounter as part of their roles. This was a great opportunity for attendees like myself to hear first-hand stories and look for opportunities where I could personally help.

A question and answer session took place, where attendees could submit their questions (anonymously) for members of the panel to answer. Many important questions were raised about the issues the support services face, and about how current processes in the Job Centres work. It was a good opportunity for the members of JCP to put forward their ideas and experiences.

MongoDB Sponsors

We had further introductions from Joe Drumgoole (@jdrumgoole) from MongoDB about the event format and how proceedings would take place. We were then given 15 minutes to come up with ideas to pitch to the rest of the group. To facilitate the pitching, funny props such as a Christmas hat, antlers and an umbrella were used so that group members who were not pitching could remember the ideas of those who did stand up and pitch.

Pitching & Teams

Attendees with ideas or solutions pitched them to the rest of the group, and those without an idea were given the opportunity to pick a team based on how they personally felt about the solutions that had been pitched. I didn’t pick a team myself, as I would be unavailable for the full event, so I decided to join the Job Centre Plus team, who would be answering questions and helping teams in the discovery phase. This was a great opportunity for me to speak to many different individuals in the room and discover the projects they were undertaking.

With the teams established, and after a demonstration of MongoDB Atlas and Stitch Apps from Max, the breakouts began. Sticky notes and whiteboards began to appear, and as I walked around the room there was a buzz of excitement as teams began to thrash out their ideas.

Discussions took place for several hours, with some teams starting development. Much-needed supplies (pizza and beer) arrived, and as people took breaks to consume vast amounts of pizza, I managed to carry on discussing the ideas, and how they would be implemented, with several individuals. There was definitely a sense of excitement in the air.

The night was still young…

By this point, I decided to call it an evening and head for my train, leaving many attendees still coding into the midnight hours. I planned to return the following morning to see what development had occurred over the night!

To be continued…

Docker : Stop all Containers

Last brain dump of today. Here are some quick commands to stop all running Docker containers and clean up afterwards (removing containers and images).

Stop all containers:

docker stop $(docker ps -aq)

Remove all containers:

docker rm $(docker ps -aq)

Remove all images:

docker rmi $(docker images -q)
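If you’re on Docker 1.13 or newer, much of this clean-up can be done in one go with the prune command. This is an alternative to the commands above rather than part of the original list, and it removes all stopped containers and all unused images, so use it with care:

docker system prune -a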

GitLab : Build Docker Image within CI/CD Pipeline

Another brain dump for future reference. These are my notes from setting up GitLab to build and run Docker images when the CI/CD pipeline runs.

Install gitlab-runner

On a Linux x86-64 machine, download the gitlab-runner binary:

sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64

Give it execute permissions:

sudo chmod +x /usr/local/bin/gitlab-runner

Install docker:

curl -sSL https://get.docker.com/ | sh

Create the gitlab-runner user:

sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash

Install gitlab-runner:

sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner

Start the gitlab-runner:

sudo gitlab-runner start
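To confirm the service came up, you can check its status:

sudo gitlab-runner status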

Register the gitlab-runner

Register the gitlab-runner:

sudo gitlab-runner register -n \
  --url https://gitlab.com/ \
  --registration-token REGISTRATION_TOKEN \
  --executor shell \
  --description "My Runner"

Add the gitlab-runner user to the docker group:

sudo usermod -aG docker gitlab-runner

Verify the gitlab-runner has docker access:

sudo -u gitlab-runner -H docker info

Creating your .gitlab-ci.yml

You can now test the runner by committing a .gitlab-ci.yml file like the following and watching the pipeline run:

before_script:
  - docker info

build_image:
  script:
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
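Because this runner uses the shell executor, the script lines are plain shell commands. As a small optional extension (my own addition, not part of the original setup), you could tag the image with GitLab’s predefined CI_COMMIT_SHA variable so every pipeline build produces a traceable image:

build_image:
  script:
    - docker build -t my-docker-image:$CI_COMMIT_SHA .
    - docker run my-docker-image:$CI_COMMIT_SHA /script/to/run/tests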

References:
https://docs.gitlab.com/runner/install/linux-manually.html
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html

cPanel : “The Exim database is most likely corrupted and the following steps should be followed”

If you are seeing the following error in the Exim logs, you will need to reset the Exim hints databases.

“The Exim database is most likely corrupted and the following steps should be followed”

To reset Exim’s hints databases of retry, reject, and wait-remote_smtp attempts on cPanel, I find the safest way is to run the following commands.

/usr/sbin/exim_tidydb -t 1d /var/spool/exim retry > /dev/null
/usr/sbin/exim_tidydb -t 1d /var/spool/exim reject > /dev/null
/usr/sbin/exim_tidydb -t 1d /var/spool/exim wait-remote_smtp > /dev/null
service exim restart

You can change the duration of the cleanup (from 1d to 2d etc).
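If you want to inspect a hints database before tidying it, exim_dumpdb (which ships alongside exim_tidydb) will print its contents; for example, to dump the retry database:

/usr/sbin/exim_dumpdb /var/spool/exim retry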

This issue usually affects emails from domains like Gmail, Hotmail, AOL etc.

CloudLinux LVE Manager displays no statistics (lveinfo)

Another little fix for an issue I came across this week relating to CloudLinux’s lvestats2.

I had a server running at 100% CPU and doing a huge amount of read/write I/O, causing issues with the SAN shelf. After looking at top, I noticed the LVE process (which collects usage data on users) was consuming most of the CPU and doing a lot of reads and writes to disk.

After some investigation, and after looking at the LVE SQLite database (/var/lve/lvestats2.db), it was apparent that LVE wasn’t updating the database correctly, and we could assume the database was corrupted. It followed that users were not being restricted and were able to consume all the resources available, compounding the issue further.
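As a quick sanity check, you can ask SQLite to verify the database file directly. This assumes the sqlite3 command-line client is installed; it wasn’t part of the original fix, just a way to confirm the corruption:

sqlite3 /var/lve/lvestats2.db 'PRAGMA integrity_check;'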

I found the following fixed the issue, and for good luck we rebooted the server (to ensure LVE attached itself to Apache, MySQL etc. on boot).

Stop LVEstats:

service lvestats stop

Backup the old lvestats database:

mv /var/lve/lvestats2.db{,.old}

Create a new database file:

lve-create-db --recreate

Start LVEStats:

service lvestats start

For good luck, reboot the server.

This then fixed LVEstats, and the CPU and I/O loads returned to normal.

I hope this helps anyone else running CloudLinux’s LVEStats. Dan

npm install : Killed (Ubuntu 16.04)

While installing packages via npm, the install failed with just the message “Killed”. This immediately made me suspect it was memory related; I was, after all, running the VM with only 1 GB of memory.

Fix npm install Killed

So to resolve this, you need to create and enable a swap file.

You can do this in Ubuntu 14.04 and 16.04 with the following commands:

sudo fallocate -l 1G /swapfile                                 # allocate a 1G swap file
sudo chmod 600 /swapfile                                       # restrict access to root
sudo mkswap /swapfile                                          # format it as swap space
sudo swapon /swapfile                                          # enable it
sudo swapon --show                                             # confirm it is active
sudo cp /etc/fstab /etc/fstab.bak                              # back up fstab first
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab     # persist across reboots
sudo sysctl vm.swappiness=10                                   # prefer RAM over swap
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf         # make it permanent
sudo sysctl vm.vfs_cache_pressure=50                           # hold on to filesystem caches
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.conf # make it permanent
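Afterwards, free should show the new 1G swap file on the Swap line:

free -h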

As always, I hope this helps anyone else with the same issue!

cPanel Error: The system experienced the following error when it attempted to install the “OWASP ModSecurity Core Rule Set V3.0” vendor

I’ve noticed that since upgrading cPanel to v68.0.28, our ModSecurity vendors have dropped off and are no longer available in the interface, and the rules are no longer available to Apache.

When trying to add the OWASP Ruleset (Vendor) back, I get the following error message.

Error:The system experienced the following error when it attempted to install the “OWASP ModSecurity Core Rule Set V3.0” vendor: API failure: The system could not validate the new Apache configuration because httpd exited with a nonzero value. Apache produced the following error: httpd: Syntax error on line 208 of /etc/apache2/conf/httpd.conf: Syntax error on line 32 of /etc/apache2/conf.d/modsec2.conf: Syntax error on line 29 of /etc/apache2/conf.d/modsec/modsec2.cpanel.conf: Could not open configuration file /etc/apache2/conf.d/modsec_vendor_configs/OWASP/modsecurity_crs_10_setup.conf: No such file or directory

The fix!

The fix is to edit /var/cpanel/modsec_cpanel_conf_datastore and remove all of the active configs, i.e. every entry listed under active_configs and active_vendors.
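Before editing, take a backup of the file (a simple copy is enough, and you’ll want it if anything goes wrong):

$ cp /var/cpanel/modsec_cpanel_conf_datastore{,.bak}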

$ nano /var/cpanel/modsec_cpanel_conf_datastore

So it looks like this, with everything removed except the top line:

---

* Remember to leave the top line in: ‘---’

Then go back to WHM, and you should be able to install the Vendors!

I hope this fixes it for you. Remember to back up the /var/cpanel/modsec_cpanel_conf_datastore file first.

Install PHP 5.6 on CentOS/RHEL 7 via YUM (Webtatic and IUS)

Again, another brain dump for future use. A stock installation of CentOS 7 ships with PHP 5.4, which is now end of life. This is how to install PHP 5.6 instead, which is currently receiving security updates only.

Side note: These commands install the basic PHP requirements for Magento.

Installing PHP 5.6 on CentOS 7 via Webtatic

$ rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
$ rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm
$ yum install php56w php56w-opcache php56w-xml php56w-mcrypt php56w-gd php56w-devel php56w-mysql php56w-intl php56w-mbstring php56w-bcmath php56w-soap

Further Documentation: https://webtatic.com/packages/php56/

Installing PHP 5.6 on CentOS 7 via IUS

yum -y install epel-release
wget https://centos7.iuscommunity.org/ius-release.rpm
rpm -Uvh ius-release*.rpm
yum -y update
yum -y install php56u php56u-opcache php56u-xml php56u-mcrypt php56u-gd php56u-devel php56u-mysql php56u-intl php56u-mbstring php56u-bcmath php56u-soap

You can also install PHP-FPM via these repositories, for example: yum install php56w-fpm (Webtatic) or yum install php56u-fpm (IUS).
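Whichever repository you choose, you can confirm the installed version afterwards:

$ php -v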

Centovacast – Getting Listener Statistics via MySQL

We ran into a little issue with our Centovacast installation last month. It turns out that if you have a few large radio stations on the same server, the MySQL database tables get rather full (20 million rows), and trying to pull that data back into the Centovacast interface was causing problems (timeouts, 500 errors etc.), which ultimately meant our customers could not retrieve the statistics they needed.

So the only solution was to query the tables manually and generate our own statistics to provide to our customers. I wrote a little PHP/HTML interface for this, however you can easily do it via any MySQL client.

Here’s what the customers were requesting, and the SQL queries to get them!

SQL Time!

Replace the accountid=269 in the queries below with the account you wish to get the data for.

You can get the accountid from the accounts table:

SELECT * FROM centovacast.accounts;

# Total sessions

SELECT COUNT(*) AS sessions FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Total listener seconds

SELECT SUM(duration) as duration FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Average session length (seconds)

SELECT AVG(duration) as seconds FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Total data transfer (KB)

SELECT SUM(bandwidth) as bandwidth FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Average transfer (KB)

SELECT AVG(bandwidth) as bandwidth FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Unique Listeners

SELECT COUNT(DISTINCT ipaddress) FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# Unique Countries

SELECT COUNT(DISTINCT country) FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269;

# ASCAP music sessions

SELECT COUNT(*) FROM centovacast.playbackstats_tracks WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269 GROUP BY DAY(starttime), HOUR(starttime), name;
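One extra query that wasn’t on the customers’ list: the same session columns can be grouped by day to see unique listeners over time. This is my own sketch, using only the columns already seen above, run here through the mysql command-line client:

mysql -u root -p centovacast -e "SELECT DATE(starttime) AS day, COUNT(DISTINCT ipaddress) AS listeners FROM visitorstats_sessions WHERE starttime > '2017-12-25' AND starttime < '2017-12-31' AND accountid=269 GROUP BY DATE(starttime);"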


With most of my blog posts, I just write this stuff down for future reference. However, I hope it’s helped at least someone in the same situation!

Meltdown and Spectre : Patching Linux

Here’s a quick guide on how to patch some of the many Linux distros against the Meltdown and Spectre vulnerabilities. I spent the week monitoring each distribution and waiting for results before deciding the best time to patch.

Don’t forget to reboot your machine/server after applying the updates!
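Once rebooted, you can check which kernel the machine is actually running and compare it against the patched versions listed below:

$ uname -r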


CentOS 7

x86_64

$ sudo yum clean all && sudo yum install kernel-3.10.0-693.11.6.el7.x86_64
$ sudo reboot

Patched Kernel : kernel-3.10.0-693.11.6.el7.x86_64

Source: https://lists.centos.org/pipermail/centos-announce/2018-January/022696.html


CentOS 6

x86_64

$ sudo yum clean all && sudo yum install kernel-2.6.32-696.18.7.el6.x86_64
$ sudo reboot

Patched Kernel : kernel-2.6.32-696.18.7.el6.x86_64

i386

$ sudo yum clean all && sudo yum install kernel-2.6.32-696.18.7.el6.i686
$ sudo reboot

Patched Kernel : kernel-2.6.32-696.18.7.el6.i686

Source: https://lists.centos.org/pipermail/centos-announce/2018-January/022701.html


Ubuntu

x86_64

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo reboot

Patched 16.04 LTS Kernel : linux-image-4.4.0-108-generic

Source: https://usn.ubuntu.com/usn/usn-3522-1/


Debian

x86_64

$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo reboot

Patched Kernel : linux-image-4.9.0-5-amd64

Source: https://packages.debian.org/stretch/kernel/linux-image-4.9.0-5-amd64


CloudLinux 7

x86_64

$ yum clean all --enablerepo=cloudlinux-updates-testing && yum update linux-firmware microcode_ctl && yum install kernel-3.10.0-714.10.2.lve1.5.8.el7 --enablerepo=cloudlinux-updates-testing
$ reboot

Patched Kernel : kernel-3.10.0-714.10.2.lve1.5.8.el7

Source: https://www.cloudlinux.com/cloudlinux-os-blog/entry/intel-cpu-bug-kernelcare-and-cloudlinux


CloudLinux 6

x86_64

$ yum clean all --enablerepo=cloudlinux-updates-testing && yum install kernel-2.6.32-896.16.1.lve1.4.50.el6 --enablerepo=cloudlinux-updates-testing
$ reboot

Patched Kernel : kernel-2.6.32-896.16.1.lve1.4.50.el6


Source: https://www.cloudlinux.com/cloudlinux-os-blog/entry/cloudlinux-6-kernel-updated-1-5

OpenVZ

x86_64

$ yum install vzkernel-2.6.32-042stab127.2.x86_64
$ reboot

Patched Kernel : vzkernel-2.6.32-042stab127.2.x86_64.rpm

Source: https://openvz.org/Download/kernel/rhel6/042stab127.2


Warning

At the time of posting, the kernel versions were the latest. If a new kernel is available, use the latest version.

If in doubt:

(CentOS, RHEL, Fedora, Oracle Linux, Scientific Linux)

$ yum update

or (Debian/Ubuntu)

$ apt-get update && apt-get dist-upgrade

I’ll be adding more as I find stable releases.
