Category: Virtualization

  • How to Delete a Virtual Machine in Proxmox

    Here is what you need to do to delete a VM in Proxmox.

    1) Log in to Proxmox.

    2) Find the VM you need to delete.

    3) Shut down the VM.


    When it asks for confirmation, click “Yes”.

    4) After the VM is stopped, click the “More” button. You will see the option to remove the virtual machine.


    It will ask for confirmation. Enter the VM ID and click the “Remove” button to delete the virtual machine.

    Delete a VM using the command line

    First, stop the virtual machine

    qm stop VM_ID
    

    Then delete the VM with

    qm destroy VM_ID
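
    For example, if the VM ID shown in the Proxmox web interface is 100 (100 here is just a placeholder, use your own VM ID), the full sequence would be:

    qm stop 100       # stop the running VM
    qm destroy 100    # delete the VM, its disks, and its configuration

    Newer Proxmox releases also support a --purge flag on qm destroy to remove the VM from backup and replication job configurations; check qm help destroy to see what your version supports.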
    


  • How to delete KVM Virtual machine using virsh

    Deleting a KVM virtual machine using “virsh” is a multi-step process. You need to stop the virtual machine, find the storage devices used by the VM, and remove them. Then undefine the virtual machine.

    Here are the commands used to delete a KVM VM with the name win10.

    First, shut down the VM

    virsh shutdown win10
    

    If it does not stop, you can force stop it with the command

    virsh destroy win10
    

    Find information about the VM with the command “virsh dumpxml --domain VM_NAME”

    root@mail:~# virsh dumpxml --domain win10
    <domain type='kvm'>
      <name>win10</name>
      <uuid>e40399f7-9936-41ce-9a70-0251cb948cae</uuid>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <vcpu placement='static'>4</vcpu>
      <os>
        <type>hvm</type>
      </os>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>destroy</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        ...

    In the above result, you can see the storage devices used by this VM

    <disk type='block' device='disk'>
      ...
      <source dev='/dev/vg1/win10'/>
      ...
    </disk>
    <disk type='file' device='cdrom'>
      ...
      <source file='/var/lib/libvirt/images/Win10_21H2_English_x64.iso'/>
      ...
    </disk>

    This VM uses the LVM logical volume /dev/vg1/win10 as its disk; we need to remove it.

    lvremove /dev/vg1/win10
    

    The VM also uses the ISO file /var/lib/libvirt/images/Win10_21H2_English_x64.iso. If you don’t need it, you can delete it.

    rm -f /var/lib/libvirt/images/Win10_21H2_English_x64.iso
    

    To delete the VM, you can use

    virsh undefine win10
    

    Example

    root@mail:~# virsh undefine win10
    Domain win10 has been undefined
    
    root@mail:~# 
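
    Note that virsh undefine also has a --remove-all-storage option, which deletes storage volumes managed by libvirt along with the domain. This is only a possible shortcut; disks that libvirt does not manage as storage volumes (like the raw LVM device above) still need to be removed manually.

    virsh undefine win10 --remove-all-storage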
    


  • How to force shutdown a KVM VM with virsh

    I have a KVM virtual machine running Windows, and when I tried to shut it down, it never stopped.

    root@sok:~# virsh list
     Id   Name       State
    --------------------------
     1    iredmail   running
     3    win10      running
    
    root@sok:~# virsh shutdown win10
    Domain win10 is being shutdown
    
    root@sok:~# virsh list
     Id   Name       State
    --------------------------
     1    iredmail   running
     3    win10      running
    
    root@sok:~# 
    

    To force shutdown a KVM virtual machine using virsh, you can use the command

    virsh destroy VM_NAME
    

    Example

    root@sok:~# virsh destroy win10
    Domain win10 destroyed
    
    root@sok:~# virsh list 
     Id   Name       State
    --------------------------
     1    iredmail   running
    
    root@sok:~# virsh start win10
    Domain win10 started
    
    root@sok:~# virsh list
     Id   Name       State
    --------------------------
     1    iredmail   running
     4    win10      running
    
    root@sok:~#
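
    Note that “virsh destroy” is a hard power-off, not a clean shutdown. Before forcing it, you can try requesting an ACPI shutdown explicitly (a hedged suggestion; it only helps if the Windows guest responds to ACPI power button events):

    virsh shutdown win10 --mode acpi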
    


  • Vultr

    How to Reinstall Operating System on Vultr VPS

  • Configure KVM Bridge Network using netplan

    On an Ubuntu 20.04 server, we have the following network configuration

    root@mail:~# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
        link/ether d8:cb:8a:e3:c7:c9 brd ff:ff:ff:ff:ff:ff
        inet 37.157.249.137/32 scope global enp2s0
           valid_lft forever preferred_lft forever
        inet6 fe80::dacb:8aff:fee3:c7c9/64 scope link 
           valid_lft forever preferred_lft forever
    root@mail:~# 

    To view the current netplan configuration, run

    netplan get all

    The netplan config file has the following content

    root@mail:~# cat /etc/netplan/01-netcfg.yaml 
    # This file describes the network interfaces available on your system
    # For more information, see netplan(5).
    network:
      version: 2
      renderer: networkd
      ethernets:
        enp2s0:
          addresses: [ 37.157.249.137/32 ]
          nameservers:
              search: [ venus.dedi.server-hosting.expert ]
              addresses:
                  - "8.8.8.8"
                  - "1.1.1.1"
          routes:
          - to: 0.0.0.0/0
            via: 37.157.249.129
            on-link: true
    
    root@mail:~# 

    Before you can use a bridge in netplan, you need to install bridge-utils

    apt-get install bridge-utils

    To configure bridge networking, modify the file as follows

    network:
        version: 2
        renderer: networkd
        ethernets:
            enp2s0:
                dhcp4: no
                dhcp6: no
        bridges:
            br0:
                dhcp4: no
                dhcp6: no
                interfaces: [enp2s0]
                addresses: [ 37.157.249.137/32 ]
                nameservers:
                    addresses:
                        - "8.8.8.8"
                        - "1.1.1.1"
                routes:
                -   to: 0.0.0.0/0
                    via: 37.157.249.129
                    on-link: true
    

    To check if there are any errors in the netplan configuration, run

    netplan generate

    To test the configuration, run

    netplan try
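
    “netplan try” applies the configuration temporarily and rolls it back unless you confirm it within the timeout. Once you are happy with the result, apply it permanently with

    netplan apply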

    Here is an example https://gist.github.com/serverok/712b85432d188f16c9d32e44455b419a
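
    Once br0 is up, KVM guests can be attached to it. As a rough sketch (the VM name, memory, disk size, and ISO path below are only placeholders), with virt-install you would pass the bridge name like this:

    virt-install --name testvm --memory 2048 --vcpus 2 \
      --os-variant generic --disk size=20 \
      --cdrom /path/to/installer.iso \
      --network bridge=br0,model=virtio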


  • Proxmox LXC container docker not working

    On a Proxmox server, an LXC container failed to run Docker. When I started a Docker container, I got the following errors.

    root@erpdo:~# docker run hello-world
    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    2db29710123e: Pull complete 
    Digest: sha256:2498fce14358aa50ead0cc6c19990fc6ff866ce72aeb5546e1d59caac3d0d60f
    Status: Downloaded newer image for hello-world:latest
    docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "proc" to rootfs at "/proc" caused: mount through procfd: permission denied: unknown.
    root@erpdo:~#
    

    To fix the error, in Proxmox, click on the container, then go to Options.


    Proxmox > Container Name > Options > Features
    

    Click on Features, then click Edit. You will see a popup.


    On this screen, enable the following 2 options

    keyctl
    Nesting
    

    Stop and start the container. After this, Docker containers will work inside the LXC container on the Proxmox server.
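
    The same two features can also be enabled from the Proxmox host shell with pct (100 below is a placeholder container ID; replace it with your own):

    pct set 100 --features keyctl=1,nesting=1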

  • Migrate physical server to LXD container

    lxd-p2c is developed by the LXD team to migrate physical servers into LXD containers.

    https://github.com/lxc/lxd/tree/master/lxd-p2c

    Static binaries are available on GitHub. To download, go to

    https://github.com/lxc/lxd/actions

    Click on any of the actions with a green tick mark. Under Artifacts, you will see download links for various operating systems.


    Download the Linux.zip file, extract it, and copy the “lxd-p2c” binary to the physical server that you need to convert to an LXD container.

    Before migrating

    You need to stop all services, like the web server and MySQL server, before you run the lxd-p2c migration.
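
    For example, on a typical web server that might look like the following (the service names here are only examples; stop whatever services run on your machine):

    systemctl stop nginx
    systemctl stop mysql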

    Running lxd-p2c

    Before you can run the migration, you need to create an LXD container with the same OS as the physical server.

    Then run the following command on the physical server to migrate files to the LXD container you created.

    ./lxd-p2c https://lxdserver:8443 NEW_CONTAINER_NAME /
    

    It will ask for the LXD trust password. If you don’t have it, use the following command to reset the password on the LXD server.

    lxc config set core.trust_password PASSWORD_HERE
    
    
  • Static IP for CentOS LXC container

    LXC containers get a dynamic IP from DHCP. When you stop and start a container, its IP changes. If you are hosting a web application on the container, you would then need to point the application to the new IP. To avoid this, you can configure a static IP on the container.

    LXC containers get an IP in the range 10.0.3.2-10.0.3.255. To make a CentOS container's IP static, edit the file

    vi /etc/sysconfig/network-scripts/ifcfg-eth0 
    

    Find

    BOOTPROTO=dhcp
    

    Replace with

    BOOTPROTO=static
    

    Add below

    IPADDR=10.0.3.2
    GATEWAY=10.0.3.1
    DNS1=1.1.1.1
    DNS2=8.8.8.8
    

    Replace 10.0.3.2 with any unused IP in the range your LXC bridge assigns using DHCP.

    Create a static route file

    vi /etc/sysconfig/network-scripts/route-eth0
    

    Add

    10.0.3.1 dev eth0
    default via 10.0.3.1 dev eth0
    

    After restarting the LXC container, you will have a fixed IP.

    reboot
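
    After the container is back up, you can verify the static configuration from inside it (assuming the interface is eth0):

    ip addr show eth0
    ip route
    cat /etc/resolv.conf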
    
  • Where LXC Container files are stored?

    LXC containers are stored in the folder /var/lib/lxc


    Each container has a folder, which contains

    /var/lib/lxc/VM_NAME_HERE/config = configuration file
    /var/lib/lxc/VM_NAME_HERE/rootfs = file system used by lxc container.
    

    LXC container OS templates are stored in

    /usr/share/lxc/templates
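
    For example, to see how much disk space each container's root filesystem uses (a quick check, assuming the default /var/lib/lxc layout described above):

    du -sh /var/lib/lxc/*/rootfs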
    


  • LXC container networking not working

    On a Debian server, I installed LXC, but when I created a container, it was missing an IP address. When I ran “lxc-attach VM_NAME” and checked the network interfaces with the “ip link” command, I could only see the loopback interface “lo”.

    On the host machine, when I checked the network interfaces, lxcbr0 was missing. To fix this, edit the file

    vi /etc/default/lxc 
    

    Find

    USE_LXC_BRIDGE="false"
    

    Replace with

    USE_LXC_BRIDGE="true"
    

    Now restart the lxc-net service

    systemctl restart lxc-net
    

    At this point, you will see the network interface “lxcbr0”.

    root@b24:~# brctl show
    bridge name	bridge id		STP enabled	interfaces
    br-52702762660a		8000.024201845e4b	no		
    docker0		8000.0242ee9122d8	no		
    lxcbr0		8000.00163e000000	no		vethDED0EK
    lxdbr0		8000.00163e7d81a2	no		
    root@b24:~#
    

    Next, edit the file

    vi /etc/lxc/default.conf
    

    I had the following content in this file

    root@b24:/etc/lxc# cat default.conf
    lxc.net.0.type = empty
    lxc.apparmor.profile = generated
    lxc.apparmor.allow_nesting = 1
    root@b24:/etc/lxc#

    Find

    lxc.net.0.type = empty
    

    Replace with

    lxc.net.0.type = veth
    lxc.net.0.link = lxcbr0
    lxc.net.0.flags = up
    

    After this is done, newly created LXC containers get IP addresses.

    root@b24:~# lxc-ls -f
    NAME STATE   AUTOSTART GROUPS IPV4       IPV6 UNPRIVILEGED 
    vm-1 RUNNING 0         -      10.0.3.128 -    false        
    root@b24:~# 
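
    Note that /etc/lxc/default.conf only affects containers created after the change. For a container that already exists, edit its own config file and set the same lxc.net.0.* values there, then restart it (vm-1 below is the container from the example above):

    vi /var/lib/lxc/vm-1/config
    lxc-stop -n vm-1
    lxc-start -n vm-1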
    


  • lxc storage list

    To list storage pools in LXD, use the command

    lxc storage list

    Example

    root@b24:~# lxc storage list
    +---------+--------+--------------------------------------------+-------------+---------+
    |  NAME   | DRIVER |                   SOURCE                   | DESCRIPTION | USED BY |
    +---------+--------+--------------------------------------------+-------------+---------+
    | default | btrfs  | /var/snap/lxd/common/lxd/disks/default.img |             | 7       |
    +---------+--------+--------------------------------------------+-------------+---------+
    root@b24:~# 

    In this installation, the btrfs file system is used for storing containers. The backing file is /var/snap/lxd/common/lxd/disks/default.img, and it is used by 7 containers.
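
    To see more details about a pool, such as total and used space, you can run the following (assuming the pool name is default, as in the listing above):

    lxc storage info default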


  • How to know you are inside a LXD container?

    If you have access to a virtual machine and need to find out what virtualization technology it uses, you can install virt-what

    apt install virt-what
    

    Then run the virt-what command; it will show what virtualization technology you are using.

    root@mysql-1:~# virt-what
    lxc
    root@mysql-1:~#
    

    Another way is to check the /dev directory. In an LXD container, you will see the following files/directories.

    root@mysql-1:~# ls -la /dev | grep lx
    -r--r--r-- 1 root   root          37 Jun 30 21:15 .lxc-boot-id
    drwx--x--x 2 nobody nogroup       40 Jun 30 21:15 .lxd-mounts
    drwxr-xr-x 2 nobody nogroup       60 Jul 12 17:13 lxd
    root@mysql-1:~# 
    

    The mount command will also show many lxcfs mounts.
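
    On systemd-based distributions, you can also run systemd-detect-virt; for an LXC/LXD container it prints "lxc", and no extra package needs to be installed:

    systemd-detect-virt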

