Category: Linux

  • Check rootkit with rkhunter

    rkhunter is a tool that checks your server for rootkits. The official site is

    http://rkhunter.sourceforge.net/

    On Ubuntu, install with

    apt install rkhunter

    Before you scan, update rkhunter with

    rkhunter --update

    To check your system for rootkits, run

    rkhunter --check
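
    If you want to run the scan unattended, for example from cron, the check can be made non-interactive. A minimal sketch, assuming a recent rkhunter version that supports these options:

    rkhunter --check --skip-keypress --report-warnings-only
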
  • Apache AH00144: couldn’t grab the accept mutex

    On an Ubuntu 18.04 server, Apache crashed. On checking the Apache error log, I found the following:

    [Mon Aug 13 23:19:24.625927 2018] [mpm_prefork:emerg] [pid 2378] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.626990 2018] [mpm_prefork:emerg] [pid 1227] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.628515 2018] [mpm_prefork:emerg] [pid 1211] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.628693 2018] [mpm_prefork:emerg] [pid 1309] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.629122 2018] [mpm_prefork:emerg] [pid 2387] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.629319 2018] [mpm_prefork:emerg] [pid 1603] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.629483 2018] [mpm_prefork:emerg] [pid 1637] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:24.629659 2018] [mpm_prefork:emerg] [pid 1566] (43)Identifier removed: AH00144: couldn't grab the accept mutex
    [Mon Aug 13 23:19:25.366503 2018] [core:alert] [pid 990] AH00050: Child 1211 returned a Fatal error... Apache is exiting!
    [Mon Aug 13 23:19:25.366568 2018] [:emerg] [pid 990] AH02818: MPM run failed, exiting
    

    To fix the error, edit the file

    vi /etc/apache2/apache2.conf
    

    Find

    #Mutex file:${APACHE_LOCK_DIR} default
    

    Replace with

    Mutex posixsem
    

    Restart Apache

    service apache2 restart
    
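    Before restarting, you can verify that the configuration change is valid. On Ubuntu this is typically done with:

    apache2ctl configtest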


  • How to Block a Country in CSF firewall

    To block all traffic from a country in the CSF firewall, edit the file /etc/csf/csf.conf

    vi /etc/csf/csf.conf
    

    Find the line

    CC_DENY = ""
    

    In this line you can add two-letter (ISO 3166-1) country codes, separated by commas. For example, to block China and Russia, add

    CC_DENY = "CN,RU"
    

    Now restart the firewall with

    systemctl restart lfd
    csf -r
    
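    To confirm that an address from a blocked country is actually being dropped, you can search the firewall rules for it (1.2.3.4 below is just an example IP):

    csf -g 1.2.3.4
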
  • mysqldump

    mysqldump is a command used to back up MySQL databases.

    To take a backup, run

    mysqldump --opt DB_NAME > DB_NAME.sql

    To back up with triggers, routines, and events, run

    mysqldump --opt --triggers --routines --events --single-transaction DB_NAME > DB_NAME.sql

    --opt combines many options. It is the same as adding --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset.

    The --extended-insert option groups multiple rows into a single INSERT statement per table. This makes the backup file smaller and restoration faster. I once restored a mysqldump backup that took 2 hours; the same database backed up with --extended-insert took only 10 minutes to restore. If you want a separate INSERT for each row, use the --skip-extended-insert or --complete-insert option.
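
    To restore a dump created this way, feed it back to the mysql client (this assumes the target database already exists):

    mysql DB_NAME < DB_NAME.sql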

    Backup All Databases

    To back up all databases, run

    mysqldump --events --routines --triggers --all-databases | gzip -9 > "$(date +%F-%H%M%S)"-mysql-backup.sql.gz

    To back up MySQL databases into separate files, run

    mkdir /root/mysqlbackup/
    for DB in $(mysql -Be "show databases" | grep -v 'row\|information_schema\|Database\|performance_schema') ; do
        mysqldump --opt --events --routines --triggers ${DB}  > /root/mysqlbackup/${DB}.sql
    done

    If you need to compress the SQL files, use

    mkdir /root/mysqlbackup/
    for DB in $(mysql -Be "show databases" | grep -v 'row\|information_schema\|Database\|performance_schema') ; do
        mysqldump --skip-lock-tables --events --routines --triggers ${DB} | gzip -9 > /root/mysqlbackup/"$(date +%F-%H%M%S)"-${DB}.sql.gz
    done
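
    To restore one of the compressed dumps, pipe it back through gunzip (replace FILE_NAME with the actual dump file and make sure the target database exists):

    gunzip < /root/mysqlbackup/FILE_NAME.sql.gz | mysql DB_NAME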

    Backup Database Structure only

    mysqldump --no-data DB_NAME > DB_NAME.sql

    Backup Only routines

    mysqldump --routines --no-create-info --no-data --no-create-db --skip-opt DB_NAME > DB_NAME-routines.sql


  • grep

    To find a string inside files in a folder, use

    grep -rnw ./ -e "STRING_TO_FIND"
    

    Or

    grep -irl "STRING_TO_FIND" ./
    

    Or

    grep -ir 'STRING_TO_FIND' ./ | cat
    
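    To limit the search to specific file types, GNU grep supports an --include pattern. For example, to search only PHP files:

    grep -rn --include="*.php" "STRING_TO_FIND" ./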

    See also: ack, a better grep for programmers.

  • Monitor file changes in your Website

    This script can be used to notify you when a file changes on your website. This is useful for monitoring your site for file changes, so that if your site is hacked you know when the attacker uploads or modifies a file.

    First you need to add your web site to Git.

    This can be done with

    cd /var/www/html
    git init
    git add .
    git commit -a -m "initial commit"
    

    Replace /var/www/html with the actual DocumentRoot of your web site.

    Every time you modify or add a file, you need to commit it to Git, or you will keep getting alerts. You can commit a new file to Git with

    git add FILE_NAME
    git commit -a
    

    Create a file

    mkdir /usr/serverok/
    vi /usr/serverok/check-files.php
    

    Add a PHP script that checks the Git working tree for uncommitted changes and emails you when any are found.
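
    A minimal sketch of such a script, assuming PHP's mail() function works on this server; the document root, email address, and site name below are placeholders to adjust:

    <?php
    // Hypothetical check script - adjust these values for your site
    $docRoot  = '/var/www/html';
    $email    = 'you@example.com';
    $siteName = 'example.com';

    // Ask Git for modified, added, deleted, or untracked files in the work tree
    $changes = shell_exec('cd ' . escapeshellarg($docRoot) . ' && git status --porcelain 2>&1');

    if (trim((string) $changes) !== '') {
        // Something changed since the last commit, send an alert email
        mail($email, "File change detected on {$siteName}", "The following files changed:\n\n" . $changes);
    }
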
    In the script, replace /var/www/html with the actual document root of your web site, and change the email address and site name to your own.

    Set the following cronjob.

    0 * * * * /usr/bin/php /usr/serverok/check-files.php
    

    The cronjob runs every hour and emails you if any file change is detected. You can modify the cronjob to monitor more frequently, but every hour is fine for most uses.

    If you have a folder or file that you need to ignore, create a file named ".gitignore" and add the path of the file or folder to it; Git will ignore anything listed there.
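
    For example, a .gitignore that skips a cache directory and log files (adjust the paths for your site) could look like:

    cache/
    *.log
    error_log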

  • Restart rsync on failure

    When copying a large site from a shared server using rsync, the rsync process can get killed. This may be done by some program on the shared host or by the server admin manually killing the process.

    Here is a bash script that checks whether rsync exited normally and retries the transfer if an rsync failure is detected.

    #!/bin/bash
    
    while true
    do
        rsync -avzP USER@REMOTE_HOST:/kunden/homepages/18/d686010467/htdocs/jobformazione/ /home/jobformazione/
        if [ "$?" = "0" ] ; then
            echo "rsync completed normally"
            exit
        else
            echo "Rsync failed. Retrying..."
            sleep 180
        fi
    done
    

    Save the file as 1.sh, then run it with

    bash ./1.sh
    

    You need to add the server's SSH public key to the remote server so rsync works without a password.
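
    If a key is not set up yet, one way to do it, assuming password authentication is still allowed on the remote server:

    ssh-keygen -t ed25519
    ssh-copy-id USER@REMOTE_HOST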

  • iptv

    https://xtream-codes.com/
    https://www.infomir.eu/eng/solutions/ministra-tv-platform/
    https://flussonic.com/flussonic-media-server
    https://newiq.pl

    Xtream Codes

    DocumentRoot = /home/xtreamcodes/iptv_xtream_codes/wwwdir/ (this is accessible using the URL http://IP_ADDR:25461/)

    Client Area can be accessed with

    http://IP_ADDR:25461/client_area/

    Ports used = 25461 (nginx), 25462 (nginx_rtmp), 25463 (nginx)

    root@ds11154:~# netstat -antp | grep LIST | grep ngin
    tcp        0      0 0.0.0.0:31210           0.0.0.0:*               LISTEN      2012/nginx_rtmp 
    tcp        0      0 0.0.0.0:25461           0.0.0.0:*               LISTEN      2015/nginx      
    tcp        0      0 0.0.0.0:25462           0.0.0.0:*               LISTEN      2012/nginx_rtmp 
    tcp        0      0 0.0.0.0:25463           0.0.0.0:*               LISTEN      2015/nginx      
    root@ds11154:~# 
    

    Some useful commands and config paths

    /home/xtreamcodes/iptv_xtream_codes/nginx/conf/nginx.conf
    /home/xtreamcodes/iptv_xtream_codes/start_services.sh
    /home/xtreamcodes/iptv_xtream_codes/nginx/sbin/nginx -s reload
    /home/xtreamcodes/iptv_xtream_codes/wwwdir/
    /home/xtreamcodes/iptv_xtream_codes/php/etc
    

    ffmpeg command used for streaming video

    /home/xtreamcodes/iptv_xtream_codes/bin/ffmpeg -y -nostdin -hide_banner -loglevel warning -err_detect ignore_err -user-agent Xtream-Codes IPTV Panel Pro -nofix_dts -start_at_zero -copyts -vsync 0 -correct_ts_overflow 0 -avoid_negative_ts disabled -max_interleave_delta 0 -probesize 5000000 -analyzeduration 5000000 -progress http://127.0.0.1:9000/progress.php?stream_id=124 -i http://da1981.xyz:8080/tJvIus0CY5/XwRjNIiKZM/8583 -vcodec copy -scodec copy -acodec copy -individual_header_trailer 0 -f segment -segment_format mpegts -segment_time 10 -segment_list_size 6 -segment_format_options mpegts_flags=+initial_discontinuity:mpegts_copyts=1 -segment_list_type m3u8 -segment_list_flags +live+delete -segment_list /home/xtreamcodes/iptv_xtream_codes/streams/124_.m3u8 /home/xtreamcodes/iptv_xtream_codes/streams/124_%d.ts
    
  • How to extract RAR file in Linux

    unrar is a utility to extract RAR archive files.

    Using unrar

    To extract a rar file, run

    unrar x filename.rar
    
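    To list the contents of an archive without extracting it, run

    unrar l filename.rar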

    Install unrar on Ubuntu/Debian

    To install unrar on Ubuntu/Debian, run

    apt install unrar -y
    

    Install unrar from the official RARLab binary package

    Download unrar from

    http://www.rarlab.com/download.htm

    cd /usr/local/src
    wget https://www.rarlab.com/rar/rarlinux-x64-6.0.b1.tar.gz
    tar zxvf rarlinux-x64-6.0.b1.tar.gz
    cd rar
    cp rar unrar /usr/bin
    
  • Start meguca on boot

    Meguca is an open source anonymous imageboard written in Go and Node.js.

    https://github.com/bakape/meguca

    To start Meguca on boot, you can use Monit.

    Create a Monit check with the following content

    check process meguca with pidfile /meguca/.pid
      start program = "/bin/su -c 'cd /meguca; ./meguca start' meguca"
      stop program  = "/bin/su -c 'cd /meguca; ./meguca stop' meguca"
    
    if failed port 8000 protocol HTTP
      request /api/health-check
      with timeout 10 seconds
      then restart
    

    /meguca is where Meguca is installed.
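
    On Ubuntu/Debian, Monit checks are usually saved as a file under /etc/monit/conf.d/; the exact path can differ on other distributions. For example:

    vi /etc/monit/conf.d/meguca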

    Create a config.json file with

    vi /meguca/config.json 
    

    Add following content

    {
    	"ssl": false,
    	"reverseProxied": true,
    	"gzip": false,
    	"imagerMode": 0,
    	"cacheSize": 128,
    	"address": "127.0.0.1:8000",
    	"database": "user=meguca password=meguca dbname=meguca sslmode=disable",
    	"certPath": "",
    	"reverseProxyIP": ""
    }
    

    reverseProxied is set to true because my installation is behind a reverse proxy (CloudFlare).

    Make changes to the config settings as required.

    Create a user

    useradd -m -s /bin/bash meguca
    

    Change ownership of meguca files

    chown -R meguca:meguca /meguca
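
    After the Monit check is in place, you can test the Monit configuration and reload it so the new check is picked up:

    monit -t
    monit reload
    monit summary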