Tag: pvresize

  • Resize EC2 file system with LVM

    On an EC2 server with an LVM file system, I need to increase the size of the / partition. First, increase the size of the volume in the Amazon AWS console as described in Resize Amazon EC2 Boot Disk. Once the volume size is increased, you need to resize the filesystem.
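
    If you prefer to grow the EBS volume from the command line rather than the console, the same change can be made with the AWS CLI; this is just a sketch, and the volume ID below is a placeholder for your root volume's actual ID:

    [root@sok ~]# aws ec2 modify-volume --volume-id vol-xxxxxxxxxxxxxxxxx --size 172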

    [root@sok ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             3.8G     0  3.8G   0% /dev
    tmpfs                3.8G     0  3.8G   0% /dev/shm
    tmpfs                3.8G   17M  3.8G   1% /run
    tmpfs                3.8G     0  3.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root   38G   33G  5.1G  87% /
    /dev/nvme0n1p1      1014M  412M  603M  41% /boot
    tmpfs                775M     0  775M   0% /run/user/0
    [root@sok ~]# 
    

    Here is the result of parted -l:

    [root@sok ~]# parted -l
    Model: Linux device-mapper (linear) (dm)
    Disk /dev/mapper/cl-swap: 1074MB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    Disk Flags: 
    
    Number  Start  End     Size    File system     Flags
     1      0.00B  1074MB  1074MB  linux-swap(v1)
    
    
    Model: Linux device-mapper (linear) (dm)
    Disk /dev/mapper/cl-root: 40.7GB
    Sector size (logical/physical): 512B/512B
    Partition Table: loop
    Disk Flags: 
    
    Number  Start  End     Size    File system  Flags
     1      0.00B  40.7GB  40.7GB  xfs
    
    
    Model: NVMe Device (nvme)
    Disk /dev/nvme0n1: 172GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags: 
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  1075MB  1074MB  primary  xfs          boot
     2      1075MB  10.7GB  9663MB  primary               lvm
     3      10.7GB  16.1GB  5369MB  primary               lvm
     4      16.1GB  42.9GB  26.8GB  primary               lvm
    
    
    [root@sok ~]# 
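
    An lsblk run on the same disk gives a more compact view of the partition and LVM layout and is a handy cross-check (output not reproduced here):

    [root@sok ~]# lsblk /dev/nvme0n1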
    

    /dev/nvme0n1 was resized to 172 GB, but the 4th partition is still only 26.8 GB.

    Resize the 4th partition with the command “growpart /dev/nvme0n1 4”.
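
    If growpart is not already installed, on a CentOS/RHEL-family system such as this one it comes from the cloud-utils-growpart package (package name assumed from the distribution family):

    [root@sok ~]# yum install -y cloud-utils-growpart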

    [root@sok ~]# growpart /dev/nvme0n1 4
    CHANGED: partition=4 start=31457280 old: size=52428800 end=83886080 new: size=304087007 end=335544287
    [root@sok ~]# parted /dev/nvme0n1 print
    Model: NVMe Device (nvme)
    Disk /dev/nvme0n1: 172GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags: 
    
    Number  Start   End     Size    Type     File system  Flags
     1      1049kB  1075MB  1074MB  primary  xfs          boot
     2      1075MB  10.7GB  9663MB  primary               lvm
     3      10.7GB  16.1GB  5369MB  primary               lvm
     4      16.1GB  172GB   156GB   primary               lvm
    
    [root@sok ~]#
    

    Resize the physical volume with the “pvresize /dev/nvme0n1p4” command.

    [root@sok ~]# pvresize /dev/nvme0n1p4
      Physical volume "/dev/nvme0n1p4" changed
      1 physical volume(s) resized or updated / 0 physical volume(s) not resized
    [root@sok ~]#
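
    Before extending the logical volume, it is worth confirming that the physical volume and the volume group now show the extra free space (output not reproduced here):

    [root@sok ~]# pvs /dev/nvme0n1p4
    [root@sok ~]# vgs cl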
    

    Finally, resize the logical volume with the command “lvextend -r -l +100%FREE /dev/mapper/cl-root”. The -r option resizes the file system as well.

    [root@sok ~]# lvextend -r -l +100%FREE /dev/mapper/cl-root
      Size of logical volume cl/root changed from 37.90 GiB (9703 extents) to <157.99 GiB (40445 extents).
      Logical volume cl/root successfully resized.
    meta-data=/dev/mapper/cl-root    isize=512    agcount=19, agsize=524032 blks
             =                       sectsz=512   attr=2, projid32bit=1
             =                       crc=1        finobt=0 spinodes=0
    data     =                       bsize=4096   blocks=9935872, imaxpct=25
             =                       sunit=0      swidth=0 blks
    naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
    log      =internal               bsize=4096   blocks=2560, version=2
             =                       sectsz=512   sunit=0 blks, lazy-count=1
    realtime =none                   extsz=4096   blocks=0, rtextents=0
    data blocks changed from 9935872 to 41415680
    [root@sok ~]# 
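
    If the -r flag were omitted, the logical volume would grow but the filesystem would not; since / is XFS here, it would then have to be grown separately with:

    [root@sok ~]# xfs_growfs /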
    

    Now the size of the / partition has increased to use the full available disk space.

    [root@sok ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             3.8G     0  3.8G   0% /dev
    tmpfs                3.8G     0  3.8G   0% /dev/shm
    tmpfs                3.8G   17M  3.8G   1% /run
    tmpfs                3.8G     0  3.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  158G   33G  126G  21% /
    /dev/nvme0n1p1      1014M  412M  603M  41% /boot
    tmpfs                775M     0  775M   0% /run/user/0
    [root@sok ~]#