Resizing the disk of a Ganeti instance

This process entails two stages:

a) Adding disk space to a Ganeti instance

Stop the machine using the command below:

root@master-node:~# gnt-instance stop cloud.renu.ac.ug

NB: cloud.renu.ac.ug is the name of the Ganeti instance to be resized.

Thereafter, run the command below.

Warning: Since this command usually takes a long time to complete, it is strongly advised that you run it while working directly on the console of the machine, or inside a screen session if you are working remotely.

root@master-node:~# gnt-instance grow-disk cloud.renu.ac.ug 0 50g
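
If you are working remotely, a minimal way to follow the warning above is to run the command inside a detachable screen session (the session name growdisk is arbitrary):

root@master-node:~# screen -S growdisk   # start a named session, then run grow-disk inside it
root@master-node:~# screen -r growdisk   # re-attach to the session if your connection drops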

The grow-disk command above grows disk 0 (the first disk) of the Ganeti instance called cloud.renu.ac.ug by 50GB; that is, the disk size is increased by 50GB from its initial size, in this example from 200GB to 250GB.
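
NB: Depending on your Ganeti version, grow-disk also accepts an --absolute flag, in which case the amount is interpreted as the target size of the disk rather than an increment. The equivalent of the command above would then be something like:

root@master-node:~# gnt-instance grow-disk --absolute cloud.renu.ac.ug 0 250g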

Please be patient until the command completes; a sample run is shown below:

root@master-node:~# gnt-instance grow-disk cloud.renu.ac.ug 0 50g
11:18:14 2020 Growing disk 0 of instance 'cloud.renu.ac.ug' by 50.0G to 250.0G
11:18:17 2020  - INFO: Waiting for instance cloud.renu.ac.ug to sync disks
11:33:28 2020  - INFO: - device disk/0: 61.40% done, 9m 1s remaining (estimated)
11:34:29 2020  - INFO: - device disk/0: 65.90% done, 7m 26s remaining (estimated)
11:35:30 2020  - INFO: - device disk/0: 70.50% done, 6m 26s remaining (estimated)
11:36:30 2020  - INFO: - device disk/0: 74.50% done, 6m 10s remaining (estimated)
11:37:31 2020  - INFO: - device disk/0: 78.80% done, 4m 30s remaining (estimated)
11:38:32 2020  - INFO: - device disk/0: 83.30% done, 3m 42s remaining (estimated)
11:39:32 2020  - INFO: - device disk/0: 87.90% done, 2m 31s remaining (estimated)
11:40:33 2020  - INFO: - device disk/0: 92.40% done, 1m 38s remaining (estimated)
11:41:34 2020  - INFO: - device disk/0: 96.20% done, 53s remaining (estimated)
11:42:28 2020  - INFO: - device disk/0: 99.90% done, 2s remaining (estimated)
11:42:30 2020  - INFO: Instance cloud.renu.ac.ug's disks are in sync

Next, start the VM and confirm that the space has been added. You can use the "lsblk" command inside the instance to check the new disk size.

root@master-node:~# gnt-instance start cloud.renu.ac.ug
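
For example, once the instance is up, log into it and run (the guest prompt is shown here as root@machine, as in the next section):

root@machine:~# lsblk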

b) Making the added space part of the file system

After the above process has completed successfully, we log into the machine (cloud.renu.ac.ug in this case) and proceed to make the extra space part of the file system.

NB: We shall be using LVM in combination with fdisk to achieve this; the disk must therefore be partitioned for LVM for this to work.

To check whether the current file system is on LVM, run the commands below; on an LVM system, each of them should return some information, as illustrated below:

root@machine:~# pvs
root@machine:~# vgs
root@machine:~# lvs
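
On a system using LVM, the output looks something like this (the names and sizes below are illustrative only, not from this scenario):

root@machine:~# pvs
  PV         VG        Fmt  Attr PSize    PFree
  /dev/sda2  vg_system lvm2 a--  <199.50g    0

root@machine:~# vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  vg_system   1   2   0 wz--n- <199.50g    0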

From the results above, there is no free space available in the physical volume (pvs) or the volume group (vgs). This implies that there is no free space to use for resizing the disk partitions unless we avail some to the system.

NB: Physical Volume is the storage available for use by LVM.

Therefore, to extend the size of the partitions, we need to add a new physical volume (PV) made from the extra space we added to the disk in part (a). Once the PV has been created successfully, we can extend the volume group (VG) by adding the new PV to it. This will create enough free space to extend the logical volume (LV).
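
In shell terms, the remainder of this section boils down to the sequence below (the device /dev/sda, partition /dev/sda1, and volume group vg_tecmint are the names used in this guide; yours may differ), followed by growing the logical volume with lvresize and the filesystem with resize2fs or xfs_growfs:

root@machine:~# fdisk -cu /dev/sda            # create a new partition of type 8e (Linux LVM)
root@machine:~# partprobe                     # make the kernel re-read the partition table
root@machine:~# pvcreate /dev/sda1            # turn the new partition into a PV
root@machine:~# vgextend vg_tecmint /dev/sda1 # add the PV to the VG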

NB: A volume group is a collection of physical volumes.

To create the partition that will back the physical volume, we shall use fdisk as detailed below:

root@machine:~# fdisk -cu /dev/sda

  1. Press n to create a new partition.
  2. Press p to choose a primary partition (or just press ENTER to use the default).
  3. Choose the number of the partition to create (or just press ENTER to use the default).
  4. Accept the defaults for the first and last sectors (press ENTER) so that the partition uses all the remaining free space.
  5. Press t to change the partition type.
  6. Type 8e to change the partition type to "Linux LVM".
  7. Press p to print the partition table and verify the new partition (optional).
  8. Press w to write the changes to disk.

Once the above procedure has completed successfully, you can restart the machine for the changes to take effect, or simply run "partprobe" to apply the changes without restarting the machine.

root@machine:~# partprobe

List the partition table and check that the partition we have created with fdisk is present.
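
For example, assuming the disk is /dev/sda and the new partition shows up as /dev/sda1 (as used in the rest of this guide):

root@machine:~# fdisk -l /dev/sda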

Next, create a new PV (physical volume) using the following command:

root@machine:~# pvcreate /dev/sda1

Verify that the physical volume has been created successfully using the command below:

root@machine:~# pvs

The "pvs" command above now shows that we have free space available; we can therefore proceed with adjusting the volume group (VG).

NB: We can only increase the size of a logical volume using the free space available in the volume group (VG), not the free space of a physical volume (PV). The space in the PV must therefore first be added to the VG.

Extending the volume group (VG)

Add this PV to the "vg_tecmint" volume group to extend the size of the volume group and get more space for expanding the logical volume. Note that "vg_tecmint" is the name of the volume group in this scenario; replace it with the name of your VG as returned by the pvs command.

root@machine:~# vgextend vg_tecmint /dev/sda1

Now let us check the size of the volume group using:

root@machine:~# vgs

We can even see which physical volumes (PVs) are used to create a particular volume group with the pvscan command.
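
root@machine:~# pvscan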

From the results of the pvscan command, we can see the PV we added and that it is entirely free. Before expanding any logical volume, let us check the size of each one we currently have, using the lvdisplay command (see the example after this list). In this scenario:

  1. LogVol00 is defined for swap.
  2. LogVol01 is defined for / (root).
  3. The / (root) logical volume currently has a size of 16.50 GB.
  4. There are currently 4226 physical extents (PE) available.
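
For example:

root@machine:~# lvdisplay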

Resizing the logical volume (LV)

With the "vgs" command you can see how much free space is available. When issuing the "lvresize" command, you can specify either an exact amount to extend by, or a percentage. In this example I use 100% of the free space, but it could equally have been the explicit size 441.63G.

Only use 100% of the free space if you do not need to reserve space for other logical volumes (such as a separate home partition); otherwise, a lower percentage is recommended.

NB: It is important to include the minus (-) or plus (+) sign when resizing a logical volume by a relative amount. Otherwise, you are setting a fixed size for the LV instead of growing or shrinking it.
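
For example (the sizes here are hypothetical), the first command below grows the LV by 50GB, while the second sets its size to exactly 250GB:

[root@machine /]# lvresize -L +50G centos/root
[root@machine /]# lvresize -L 250G centos/root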

[root@machine /]# lvresize -l +100%FREE centos/root
Size of logical volume centos/root changed from 50.00 GiB (12800 extents) to 491.63 GiB (125858 extents).
Logical volume root successfully resized.

Confirm the size of your partitions:

[root@machine /]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0               2:0    1     4K  0 disk
sda               8:0    0   500G  0 disk
├─sda1            8:1    0   500M  0 part /boot
└─sda2            8:2    0 499.5G  0 part
  ├─centos-root 253:0    0 491.6G  0 lvm  /
  └─centos-swap 253:1    0   7.9G  0 lvm  [SWAP]
sr0              11:0    1  1024M  0 rom

Check the type of the file system on your root partition by running:

[root@machine /]# df -T

We are almost done, but we still have to resize the actual filesystem; otherwise the operating system will still only recognize the old size. The tool to use depends on the filesystem type reported by df -T above:

For an ext4 filesystem, run:

[root@machine /]# resize2fs /dev/centos/root

For an XFS filesystem, run:

[root@machine /]# xfs_growfs /
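
Finally, confirm that the root filesystem reports the new size:

[root@machine /]# df -h /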

Other important LVM commands:

lvscan: lists the logical volumes currently present in the system

Reference:

http://docs.ganeti.org/ganeti/current/man/gnt-instance.html#grow-disk
https://www.tecmint.com/extend-and-reduce-lvms-in-linux/
https://www.linuxhelp.com/how-to-manage-and-create-lvm-on-centos-7

Python update to tackle remote code vulnerability

The Python Software Foundation (PSF) has rushed out Python 3.9.2 and 3.8.8 to address two notable security flaws, including one that is remotely exploitable but in practical terms can only be used to knock a machine offline.

PSF is urging its legion of Python users to upgrade systems to Python 3.8.8 or 3.9.2, in particular to address the remote code execution (RCE) vulnerability that’s tracked as CVE-2021-3177.

The project expedited the release after receiving unexpected pressure from some users who were concerned over the security flaw.

Python 3.x through to 3.9.1 has a buffer overflow in PyCArg_repr in ctypes/callproc.c, which may lead to remote code execution.
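
Based on the public report, the bug sits in the repr of ctypes call arguments, and in practice an unpatched interpreter can be made to crash (a denial of service) with a one-liner along these lines (a hedged sketch, not a definitive reproducer):

user@host:~$ python3 -c "import ctypes; repr(ctypes.c_double.from_param(1e300))"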

Continue reading “Python update to tackle remote code vulnerability”

Migrating from Zimbra to Zimbra (ZCS to ZCS)

There can be a number of reasons for migrating from one mail server to another, the commonest being running low on disk storage. Other reasons include the need to have a failover mail server in case of a catastrophic event that compromises the active email server, among others.

In other words, this same technique can be used to create a clone of your currently running email server to ensure redundancy.

In order to have virtually zero downtime, we will proceed as follows:

  1. Set the DNS TTL entries pertinent to the mail server to the shortest possible time (ideally this is done a day before, to make sure the new TTL propagates accordingly).
  2. Prepare a fully working new server.
  3. Import all existing domains from the old server.
  4. Import all existing accounts, passwords, distribution lists, and aliases from the old server.
  5. Move all DNS pointers and firewall port forwards to the new server, or leave the DNS pointers as they are and simply swap the servers' IP addresses (old to new, and new to old; more about this later).
  6. Make sure that new mail is arriving on the new server.
  7. Make sure users are able to connect to and use the new server.
  8. Export mailbox data from the old server, and import it into the new one while the new server is running.

1. Preparing the new server

Go ahead and install Zimbra on the new server, making sure to use the same version as the one on your old mail server. You can follow the guide here. You will need to set up the new mail server with the same settings as the old server, but with a different IP address and domain name.

In case you have more than one domain on your old mail server, create only one main domain on the new mail server as the other domains will be imported automatically during the course of the migration.

Remember that if you intend to install a "letsencrypt" certificate later on (note that this is not covered in this blog), your server name needs to be the same as the webmail name your users will visit; it is commonplace for people to use webmail.domain.com.

Warning: Most of the commands executed during export, and more importantly during import, may take hours. They should be run directly from a console session. If you have absolutely no choice but to run the commands remotely, make sure you use the "screen" command, so that if the connection gets interrupted you can reconnect to your screen session without disrupting any running scripts.
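
For example, you can run the export inside a named screen session (the hostname old-server and session name zcs-export below are illustrative):

root@old-server:~# screen -S zcs-export   # start a named session and run the export inside it
root@old-server:~# screen -r zcs-export   # re-attach to the session after a disconnection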

2. The exportation phase

Before we begin the export, we need to make sure we have enough storage space that can be accessed from both the old and the new server. The old server may have sufficient free space for the export to be done locally, but if it does not, the way to go is to remote-mount an NFS share from the new server (which can reasonably be assumed to have sufficient free space) on the old server and use it as the intermediate storage. Other options include an external USB drive, network-attached storage, etc.
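
A minimal sketch of the NFS approach (the hostnames and paths below are illustrative): on the new server, export a scratch directory via /etc/exports and reload the NFS server, then mount the share on the old server and use it as the export target.

root@new-server:~# mkdir -p /opt/migration
root@new-server:~# echo '/opt/migration old-server.example.com(rw,sync,no_root_squash)' >> /etc/exports
root@new-server:~# exportfs -ra           # re-export all directories listed in /etc/exports

root@old-server:~# mkdir -p /mnt/migration
root@old-server:~# mount -t nfs new-server.example.com:/opt/migration /mnt/migration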

Continue reading “Migrating from Zimbra to Zimbra (ZCS to ZCS)”