To display the effect of rule set changes, use
nft list ruleset
Flush rules
nft flush ruleset
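If you only want to clear a single table instead of the whole rule set, nft can also flush at the table level. A minimal example, assuming a table named inet filter exists:
nft flush table inet filter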
To set unlimited resources for a user in CloudLinux, run the following command:
lvectl set-user user --unlimited
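To verify the user's limits afterwards, you should be able to list them with lvectl; the exact subcommand may vary by CloudLinux version:
lvectl list-user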
Log in to the Ubuntu server using SSH and run the following command to install the MATE desktop environment
apt install ubuntu-mate-desktop
Create a normal user. We will use this user to log in to the XRDP desktop.
useradd -m -s /bin/bash desktop
You can change the username “desktop” to any other username you need. Make the user a sudo user
usermod -aG sudo desktop
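You can verify the group membership with:
id desktop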
Switch to desktop user
su - desktop
Create .xsession file
echo "mate-session" > ~/.xsession
Exit back to user root
exit
Install xrdp package
apt install xrdp
Create file
vi /etc/polkit-1/localauthority/50-local.d/45-allow-colord.pkla
with the following content
[Allow Colord all Users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile
ResultAny=no
ResultInactive=no
ResultActive=yes
Restart xrdp
systemctl restart xrdp
Now you should be able to connect to the remote Ubuntu server using RDP.
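For example, from a Linux desktop you could test the connection with the FreeRDP client (replace SERVER_IP with your server's IP address; the client package name varies by distribution):
xfreerdp /u:desktop /v:SERVER_IP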
I have an LVM volume that uses 3 physical drives. I need to increase the size of this LVM volume by adding a new disk.
I have a drive with 452 GB of free disk space.
[root@sok ~]# parted /dev/nvme0n1 print
Model: NVMe Device (nvme)
Disk /dev/nvme0n1: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system     Flags
 1      1049kB  4296MB  4295MB  primary  linux-swap(v1)
 2      4296MB  6443MB  2147MB  primary  ext3
 3      6443MB  60.1GB  53.7GB  primary  ext4
 4      60.1GB  512GB   452GB   primary
[root@sok ~]#
Create a physical volume with the pvcreate command
[root@sok ~]# pvcreate /dev/nvme0n1p4
  Physical volume "/dev/nvme0n1p4" successfully created.
[root@sok ~]#
The current volume group size is 4.19 TB
[root@sok ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   3   1   0 wz--n- 4.19t    0
[root@sok ~]#
Let's extend the volume group vg1 by adding the newly created physical volume
[root@sok ~]# vgextend vg1 /dev/nvme0n1p4
  Volume group "vg1" successfully extended
[root@sok ~]#
Now the volume group size is 4.6 TB
[root@sok ~]# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  vg1   4   1   0 wz--n- 4.60t <420.94g
[root@sok ~]#
Let's find details about the current logical volumes
[root@sok ~]# lvs
  LV    VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data1 vg1 -wi-a----- 4.19t
[root@sok ~]#
We have one logical volume named data1; its size is 4.19 TB.
To extend the logical volume, run the command
[root@sok ~]# lvextend -l +100%FREE /dev/vg1/data1
  Size of logical volume vg1/data1 changed from 4.19 TiB (1098852 extents) to 4.60 TiB (1206612 extents).
  Logical volume vg1/data1 successfully resized.
[root@sok ~]#
This will extend logical volume "data1" to use all available free disk space on the volume group vg1.
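As an alternative, lvextend can grow the filesystem in the same step with the -r (--resizefs) option, which runs the appropriate resize tool for the filesystem on the volume:
lvextend -r -l +100%FREE /dev/vg1/data1
Here we use the two-step approach to show the filesystem resize separately.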
Let's check the new size of the logical volume with the lvs command
[root@sok ~]# lvs
  LV    VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data1 vg1 -wi-a----- 4.60t
[root@sok ~]#
The logical volume size is now increased to 4.6 TB.
Let us find more info about the logical volume using the blkid command
[root@sok ~]# blkid | grep data1
/dev/mapper/vg1-data1: UUID="55a38b6b-a0a7-48a2-b314-36b1f0ce2f05" BLOCK_SIZE="512" TYPE="xfs"
[root@sok ~]#
The volume is formatted with the XFS file system.
Let us find out the mount point.
[root@sok ~]# df -h | grep data1
/dev/mapper/vg1-data1  4.2T  1.1T  3.2T  25% /usr/share/nginx/html
[root@sok ~]#
The volume is mounted at /usr/share/nginx/html
To resize the xfs filesystem, run the command
[root@sok ~]# xfs_growfs /usr/share/nginx/html
meta-data=/dev/mapper/vg1-data1  isize=512    agcount=5, agsize=268435455 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=0 inobtcount=0
data     =                       bsize=4096   blocks=1125224448, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 1125224448 to 1235570688
[root@sok ~]#
If the file system is not mounted, you need to mount it before resizing.
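Note that xfs_growfs only applies to XFS. If the logical volume were formatted with ext4 instead, you would grow it with resize2fs against the device, for example:
resize2fs /dev/vg1/data1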
Let us verify the size
[root@sok ~]# df -h | grep data1
/dev/mapper/vg1-data1  4.7T  1.1T  3.6T  23% /usr/share/nginx/html
[root@sok ~]#
The size of the volume has changed from 4.2T to 4.7T.
On a Linux server running AlmaLinux 8, I get the following error in /var/log/messages
Oct 17 05:33:17 Alma-88-amd64-base kernel: pcieport 0000:00:01.1: AER: Corrected error received: 0000:01:00.0
Oct 17 05:33:17 Alma-88-amd64-base kernel: nvme 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, (Receiver ID)
Oct 17 05:33:17 Alma-88-amd64-base kernel: nvme 0000:01:00.0: device [144d:a80a] error status/mask=00000001/0000e000
Oct 17 05:33:17 Alma-88-amd64-base kernel: nvme 0000:01:00.0: [ 0] RxErr
To fix the error, edit file
vi /etc/default/grub
Find line
GRUB_CMDLINE_LINUX="biosdevname=0 crashkernel=auto rd.auto=1 consoleblank=0"
Replace with
GRUB_CMDLINE_LINUX="biosdevname=0 crashkernel=auto rd.auto=1 consoleblank=0 pcie_aspm=off"
We added pcie_aspm=off, which disables PCIe ASPM (Active State Power Management).
After saving your changes and exiting the text editor, you’ll need to update GRUB for the changes to take effect. You can do this with the following command
grub2-mkconfig -o /boot/grub2/grub.cfg
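If the server boots via UEFI, grub.cfg may live on the EFI system partition instead. On AlmaLinux, the path is typically the one shown below; adjust it to match your system:
grub2-mkconfig -o /boot/efi/EFI/almalinux/grub.cfg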
Reboot the server
reboot
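After the reboot, you can confirm the kernel parameter is active by checking the kernel command line:
grep pcie_aspm /proc/cmdline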
On a cPanel server with RAID 1, the quota was disabled on the / file system even after it was enabled in /etc/fstab
[root@server48 ~]# mount | grep "/ "
/dev/md3 on / type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
[root@server48 ~]#
It says noquota.
xfs_quota -x -c 'report -h'
This returned an empty result.
In /etc/fstab, the quota was enabled properly
UUID=30c3c244-1e7c-4cc5-babd-d6cf8eee8ac5 / xfs defaults,uquota,usrquota 0 1
/etc/default/grub has the following GRUB_CMDLINE_LINUX line
GRUB_CMDLINE_LINUX="crashkernel=auto rd.auto nomodeset rootflags=uquota"
The problem was caused by grub.cfg being read from the wrong drive.
[root@server48 ~]# grep boot /etc/fstab
UUID=f2edd8de-2161-482b-b27f-9d399eed1abe  /boot      xfs   defaults  0 0
/dev/nvme1n1p1                             /boot/efi  vfat  defaults  0 1
[root@server48 ~]#
The server is using EFI, and /dev/nvme1n1p1 is mounted as /boot/efi. Since we are using RAID 1, we have 2 EFI partitions
[root@server48 ~]# blkid | grep EFI
/dev/nvme0n1p1: SEC_TYPE="msdos" LABEL="EFI_SYSPART" UUID="6B1C-67AC" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="primary" PARTUUID="a26b50cd-357f-4a93-b0c7-41f89bd4e038"
/dev/nvme1n1p1: SEC_TYPE="msdos" LABEL="EFI_SYSPART" UUID="6B59-E668" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="primary" PARTUUID="b24738fb-f7fd-4354-a35a-af9f3fac9b77"
[root@server48 ~]#
In /etc/fstab, we are using /dev/nvme1n1p1, but when the server boots, it reads grub.cfg from the other EFI partition.
To fix this, regenerate grub.cfg on the other EFI partition and update /etc/fstab to mount that partition as /boot/efi. First, mount the other EFI partition:
mount /dev/nvme0n1p1 /mnt/
Back up and regenerate grub.cfg on the 2nd EFI partition, which is now mounted at /mnt/
cp /mnt/EFI/almalinux/grub.cfg{,.backup}
grub2-mkconfig -o /mnt/EFI/almalinux/grub.cfg
Edit /etc/fstab
vi /etc/fstab
Find
/dev/nvme1n1p1 /boot/efi vfat defaults 0 1
Replace with
/dev/nvme0n1p1 /boot/efi vfat defaults 0 1
Reboot the server
reboot
After the reboot, the mount command shows quota enabled for the / partition.
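You can confirm this by re-running the earlier checks; the mount options should now include usrquota, and the quota report should list per-user usage:
mount | grep "/ "
xfs_quota -x -c 'report -h' /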
I had copied a PrestaShop site to another domain. Both domains are on the same server and were using the same Memcached server for caching. When I visit one of the sites, resources like CSS/JS files load from the other website. To fix this, I configured 2 Memcached servers running on 2 different ports, so each site uses its own Memcached instance.
On the Ubuntu server, I installed memcached and supervisor with
apt install -y memcached supervisor
Create file
vi /etc/supervisor/conf.d/memcached.conf
Inside add
[program:memcached2]
priority=200
command=/usr/bin/memcached -m 64 -p 11212 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached2.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true
By default Memcached runs on port 11211; the above configuration creates an instance of Memcached that runs on port 11212.
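Once supervisor starts the instance, you can check that it responds on the new port, for example with netcat (the -q flag is available in the netcat shipped with Debian/Ubuntu):
echo stats | nc -q 1 127.0.0.1 11212
This should print a list of STAT lines if the instance is running.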
If you want to create another instance of memcached, duplicate the above lines and change
[program:memcached2]
to
[program:memcached3]
Find
command=/usr/bin/memcached -m 64 -p 11212 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached2.pid
Replace the port with a different port, say 11213, and /var/run/memcached/memcached2.pid with /var/run/memcached/memcached3.pid
Here is the complete configuration with 2 Memcached instances running under supervisord
[program:memcached2]
priority=200
command=/usr/bin/memcached -m 64 -p 11212 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached2.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true

[program:memcached3]
priority=200
command=/usr/bin/memcached -m 64 -p 11213 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached3.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true
Reload supervisor
supervisorctl reload
Check status
supervisorctl status
Here is an example with 4 memcached instances running
root@zen-keldysh:/etc/supervisor/conf.d# supervisorctl reload
Restarted supervisord
root@zen-keldysh:/etc/supervisor/conf.d# supervisorctl status
memcached_erikoisrahti           RUNNING   pid 337864, uptime 0:00:02
memcached_longdrink24            RUNNING   pid 337865, uptime 0:00:02
memcached_tulivesi               RUNNING   pid 337866, uptime 0:00:02
memcached_viskit                 RUNNING   pid 337867, uptime 0:00:02
root@zen-keldysh:/etc/supervisor/conf.d#
Here is the configuration file for the above 4 Memcached instance setup
root@server1:~# cat /etc/supervisor/conf.d/memcached.conf
[program:memcached_longdrink24]
priority=200
command=/usr/bin/memcached -m 64 -p 11212 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached2.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true

[program:memcached_viskit]
priority=200
command=/usr/bin/memcached -m 64 -p 11213 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached-viskit.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true

[program:memcached_erikoisrahti]
priority=200
command=/usr/bin/memcached -m 64 -p 11214 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached-erikoisrahti.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true

[program:memcached_tulivesi]
priority=200
command=/usr/bin/memcached -m 64 -p 11215 -u memcache -l 127.0.0.1 -P /var/run/memcached/memcached-tulivesi.pid
user=memcache
autorestart=true
autostart=true
redirect_stderr=true
root@server1:~#
When logging in to an Ubuntu 22.04 server using FileZilla SFTP, I got a login failed error.
Status: Connecting to 51.38.246.115:3333...
Response: fzSftp started, protocol_version=9
Command: keyfile "/home/boby/.ssh/id_rsa"
Command: open "[email protected]" 3333
Command: Trust new Hostkey: Once
Command: Pass:
Error: Authentication failed.
Error: Critical error: Could not connect to server
Status: Disconnected from server
On checking /var/log/auth.log, I found the following error message.
sshd[8916]: userauth_pubkey: key type ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
sshd[8916]: Connection closed by authenticating user root MY_IP_ADDR port 56559 [preauth]
The error “ssh-rsa not in PubkeyAcceptedAlgorithms” happens when trying to connect to a server that only supports more secure algorithms, such as SHA-256 or better.
To fix the error, edit file
vi /etc/ssh/sshd_config
At the end of the file, add
PubkeyAcceptedAlgorithms +ssh-rsa
Restart sshd
systemctl restart sshd
To view the currently accepted public key algorithms, run
sshd -T | grep -i pubkeyaccepted
You can use PubkeyAcceptedKeyTypes (the older name for the same option) instead of PubkeyAcceptedAlgorithms in /etc/ssh/sshd_config
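Note that re-enabling ssh-rsa weakens security. A better long-term fix is to generate a key that uses a newer algorithm on the client and use that for logins, for example:
ssh-keygen -t ed25519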
Upgrade CentOS 7 to the latest version with
yum update -y
Reboot the server
reboot
Install the ELevate repository rpm
yum install -y http://repo.almalinux.org/elevate/elevate-release-latest-el$(rpm --eval %rhel).noarch.rpm
Install leapp
yum install -y leapp-upgrade leapp-data-almalinux
Run the pre-upgrade check
leapp preupgrade
After the preupgrade script runs, it generates a report at /var/log/leapp/leapp-report.txt. You need to fix any problems reported in this file.
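You can review the report on the server with:
less /var/log/leapp/leapp-report.txt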
On CentOS 7, you need to run these 2 commands, as the issues they address will block the upgrade.
rmmod pata_acpi
leapp answer --section remove_pam_pkcs11_module_check.confirm=True
Run the upgrade
leapp upgrade
During the upgrade, the server will reboot itself. This process can take a while to finish. Don’t interrupt it, or you will end up with a non-working server.
Once the upgrade process is finished, reboot the server.
reboot
When running Chrome, I get the following error message
/home/serverok/node_modules/puppeteer/.local-chromium/linux-641577/chrome-linux/chrome: error while loading shared libraries: libXss.so.1: cannot open shared object file: No such file or directory
To fix this, install the libxss package.
On Ubuntu/Debian
apt install -y libxss1
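On RHEL/CentOS based systems, libXss.so.1 is provided by the libXScrnSaver package:
yum install -y libXScrnSaver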
chrony is a Network Time Protocol (NTP) implementation. It can be used to keep the server time in sync with NTP servers.
To install chrony on a RHEL-based OS, run
yum install chrony
On Debian/Ubuntu, run
apt install chrony
On Ubuntu, chrony is configured to use Ubuntu NTP servers by default.
pool ntp.ubuntu.com        iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
If you need to change NTP servers, you can edit configuration file
vi /etc/chrony/chrony.conf
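For example, to use Google Public NTP instead of the default pools, you could replace the pool lines with a server line like the one below. This is just a sketch; note that Google's servers use leap smearing, so avoid mixing them with non-smeared pools.
server time.google.com iburst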
Restart chrony
systemctl restart chrony
Enable chrony
systemctl enable chrony
Start chrony
systemctl start chrony
To view status, run
systemctl status chrony
To display system time information, you can use the command “chronyc tracking”.
root@ip-172-26-14-120:~# chronyc tracking
Reference ID    : E9485C92 (prod-ntp-4.ntp4.ps5.canonical.com)
Stratum         : 3
Ref time (UTC)  : Sat Jun 10 08:15:27 2023
System time     : 0.000015639 seconds slow of NTP time
Last offset     : -0.000025658 seconds
RMS offset      : 0.000170312 seconds
Frequency       : 4.834 ppm fast
Residual freq   : -0.007 ppm
Skew            : 0.255 ppm
Root delay      : 0.008501955 seconds
Root dispersion : 0.001060591 seconds
Update interval : 260.2 seconds
Leap status     : Normal
root@ip-172-26-14-120:~#
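To list the NTP servers chrony is currently polling, you can also run:
chronyc sources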
systemd-timesyncd is a system service in Linux operating systems that provides time synchronisation with NTP servers. It continuously adjusts the system clock to ensure accurate timekeeping, which is crucial for various system operations, time-sensitive applications, and network synchronisation.
To configure systemd-timesyncd, edit file
vi /etc/systemd/timesyncd.conf
Add the following
[Time]
NTP=
FallbackNTP=time.google.com
Leaving NTP= uncommented and assigned to an empty string resets the list of NTP servers, including any per-interface assignments. This prevents inadvertently moving between smeared and un-smeared time servers. Configuring Google Public NTP as the fallback server will cause it to be selected as the only NTP server.
If you want to use Debian-maintained NTP servers, use
[Time]
NTP=
FallbackNTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org
Restart systemd-timesyncd
systemctl restart systemd-timesyncd.service
You can verify the NTP server with the command
timedatectl show-timesync | grep ServerName
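For more detail, such as the current offset and poll interval, recent systemd versions also provide:
timedatectl timesync-status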