I had been running my little EC2 instance with 8G of disk space for quite a while. Today my dad called to say that his application had stopped working, and when I logged in I noticed that the disk on the EC2 instance was full. As a first step I cleared some space, and then I figured it was time to add more space to the machine.
Adding space was not tough, but there are a few steps. Although the AWS documentation is clear, here is what I did, for my own reference.
1. Log on to the AWS console and increase the volume size to 20G.
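The same resize can also be done without the console, using the AWS CLI. A minimal sketch, assuming the CLI is configured and that vol-0123456789abcdef0 stands in for your own volume ID:
$ # grow the EBS volume backing the instance to 20G
$ aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20
$ # optionally watch the modification until it reaches the completed state
$ aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0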
2. df -h still shows 8G (magic will not happen; we need to do some work):
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 210M 0 210M 0% /dev
tmpfs 48M 612K 48M 2% /run
/dev/xvda1 7.8G 5.8G 1.6G 80% /
tmpfs 240M 0 240M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 240M 0 240M 0% /sys/fs/cgroup
tmpfs 48M 0 48M 0% /run/user/1000
$
3. Run lsblk to see how the disks are attached and what the partitions are:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 20G 0 disk
└─xvda1 202:1 0 8G 0 part /
$
The disk xvda is now 20G, but the partition xvda1 is still 8G, so I need to make xvda1 20G.
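As an aside, on the newer Nitro-based instance types the same volume shows up as an NVMe device instead, with the disk named /dev/nvme0n1 and the root partition /dev/nvme0n1p1. The grow and resize commands in the steps below would then look like this (a sketch, assuming a single root volume):
$ # Nitro/NVMe equivalents of the growpart and resize2fs steps below
$ sudo growpart /dev/nvme0n1 1
$ sudo resize2fs /dev/nvme0n1p1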
4. Grow the partition using growpart:
$ sudo growpart /dev/xvda 1
CHANGED: partition=1 start=16065 old: size=16761118 end=16777183 new: size=41926942,end=41943007
$
It grew partition number 1, which sits on the xvda disk, to its full size.
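If growpart is missing on your machine, it comes from a small package; a sketch of the install, assuming Ubuntu/Debian (on Amazon Linux the package is cloud-utils-growpart):
$ # growpart is part of cloud-guest-utils on Ubuntu/Debian
$ sudo apt-get install cloud-guest-utils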
5. Note that df -h is still showing 8G:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 210M 0 210M 0% /dev
tmpfs 48M 612K 48M 2% /run
/dev/xvda1 7.8G 5.8G 1.6G 80% /
tmpfs 240M 0 240M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 240M 0 240M 0% /sys/fs/cgroup
tmpfs 48M 0 48M 0% /run/user/1000
$
6. Just resize the filesystem on the partition:
$ sudo resize2fs /dev/xvda1
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/xvda1 is now 5240867 (4k) blocks long.
$
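resize2fs works here because my root filesystem is ext4; it handles ext2/3/4. If you are not sure what you are running, check first, and note that an XFS root is grown with xfs_growfs instead. A sketch, assuming the filesystem is mounted on /:
$ # print the filesystem type of /
$ df -T /
$ # for an XFS root filesystem, grow it like this instead of resize2fs:
$ sudo xfs_growfs /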
7. Issue the df -h command again and see that the size is updated:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 210M 0 210M 0% /dev
tmpfs 48M 612K 48M 2% /run
/dev/xvda1 20G 5.8G 13G 32% /
tmpfs 240M 0 240M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 240M 0 240M 0% /sys/fs/cgroup
tmpfs 48M 0 48M 0% /run/user/1000
$
Now I will be paying a bit more money to AWS and having fewer outages :-) . When you are expanding the partition, you can grow it to just the size you need rather than to the full disk, and keep the remaining space for something else if you like.
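For example, to grow the partition to only 15G of the 20G disk and leave the rest unallocated, something like the following should work; a sketch, assuming the same ext4 root on /dev/xvda1 (parted may ask for confirmation because the partition is mounted):
$ # grow partition 1 so it ends at 15GiB instead of filling the whole disk
$ sudo parted /dev/xvda resizepart 1 15GiB
$ # grow the filesystem to fill the (now 15G) partition
$ sudo resize2fs /dev/xvda1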
Now my dad will be happy for a long time, and I do not have to worry about space for a while.